Oct  9 05:00:21 np0005478303 kernel: Linux version 5.14.0-620.el9.x86_64 (mockbuild@x86-05.stream.rdu2.redhat.com) (gcc (GCC) 11.5.0 20240719 (Red Hat 11.5.0-11), GNU ld version 2.35.2-67.el9) #1 SMP PREEMPT_DYNAMIC Fri Sep 26 01:13:23 UTC 2025
Oct  9 05:00:21 np0005478303 kernel: The list of certified hardware and cloud instances for Red Hat Enterprise Linux 9 can be viewed at the Red Hat Ecosystem Catalog, https://catalog.redhat.com.
Oct  9 05:00:21 np0005478303 kernel: Command line: BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-620.el9.x86_64 root=UUID=1631a6ad-43b8-436d-ae76-16fa14b94458 ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Oct  9 05:00:21 np0005478303 kernel: BIOS-provided physical RAM map:
Oct  9 05:00:21 np0005478303 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Oct  9 05:00:21 np0005478303 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Oct  9 05:00:21 np0005478303 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Oct  9 05:00:21 np0005478303 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdafff] usable
Oct  9 05:00:21 np0005478303 kernel: BIOS-e820: [mem 0x000000007ffdb000-0x000000007fffffff] reserved
Oct  9 05:00:21 np0005478303 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Oct  9 05:00:21 np0005478303 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Oct  9 05:00:21 np0005478303 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Oct  9 05:00:21 np0005478303 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Oct  9 05:00:21 np0005478303 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000027fffffff] usable
Oct  9 05:00:21 np0005478303 kernel: NX (Execute Disable) protection: active
Oct  9 05:00:21 np0005478303 kernel: APIC: Static calls initialized
Oct  9 05:00:21 np0005478303 kernel: SMBIOS 2.8 present.
Oct  9 05:00:21 np0005478303 kernel: DMI: Red Hat OpenStack Compute/RHEL, BIOS 1.16.1-1.el9 04/01/2014
Oct  9 05:00:21 np0005478303 kernel: Hypervisor detected: KVM
Oct  9 05:00:21 np0005478303 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Oct  9 05:00:21 np0005478303 kernel: kvm-clock: using sched offset of 3395367691 cycles
Oct  9 05:00:21 np0005478303 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Oct  9 05:00:21 np0005478303 kernel: tsc: Detected 2445.406 MHz processor
Oct  9 05:00:21 np0005478303 kernel: last_pfn = 0x280000 max_arch_pfn = 0x400000000
Oct  9 05:00:21 np0005478303 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Oct  9 05:00:21 np0005478303 kernel: x86/PAT: Configuration [0-7]: WB  WC  UC- UC  WB  WP  UC- WT  
Oct  9 05:00:21 np0005478303 kernel: last_pfn = 0x7ffdb max_arch_pfn = 0x400000000
Oct  9 05:00:21 np0005478303 kernel: found SMP MP-table at [mem 0x000f5b60-0x000f5b6f]
Oct  9 05:00:21 np0005478303 kernel: Using GB pages for direct mapping
Oct  9 05:00:21 np0005478303 kernel: RAMDISK: [mem 0x2d7c4000-0x32bd9fff]
Oct  9 05:00:21 np0005478303 kernel: ACPI: Early table checksum verification disabled
Oct  9 05:00:21 np0005478303 kernel: ACPI: RSDP 0x00000000000F5B20 000014 (v00 BOCHS )
Oct  9 05:00:21 np0005478303 kernel: ACPI: RSDT 0x000000007FFE35EB 000034 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Oct  9 05:00:21 np0005478303 kernel: ACPI: FACP 0x000000007FFE3403 0000F4 (v03 BOCHS  BXPC     00000001 BXPC 00000001)
Oct  9 05:00:21 np0005478303 kernel: ACPI: DSDT 0x000000007FFDFCC0 003743 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Oct  9 05:00:21 np0005478303 kernel: ACPI: FACS 0x000000007FFDFC80 000040
Oct  9 05:00:21 np0005478303 kernel: ACPI: APIC 0x000000007FFE34F7 000090 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Oct  9 05:00:21 np0005478303 kernel: ACPI: MCFG 0x000000007FFE3587 00003C (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Oct  9 05:00:21 np0005478303 kernel: ACPI: WAET 0x000000007FFE35C3 000028 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Oct  9 05:00:21 np0005478303 kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe3403-0x7ffe34f6]
Oct  9 05:00:21 np0005478303 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffdfcc0-0x7ffe3402]
Oct  9 05:00:21 np0005478303 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffdfc80-0x7ffdfcbf]
Oct  9 05:00:21 np0005478303 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe34f7-0x7ffe3586]
Oct  9 05:00:21 np0005478303 kernel: ACPI: Reserving MCFG table memory at [mem 0x7ffe3587-0x7ffe35c2]
Oct  9 05:00:21 np0005478303 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe35c3-0x7ffe35ea]
Oct  9 05:00:21 np0005478303 kernel: No NUMA configuration found
Oct  9 05:00:21 np0005478303 kernel: Faking a node at [mem 0x0000000000000000-0x000000027fffffff]
Oct  9 05:00:21 np0005478303 kernel: NODE_DATA(0) allocated [mem 0x27ffd3000-0x27fffdfff]
Oct  9 05:00:21 np0005478303 kernel: crashkernel reserved: 0x000000006b000000 - 0x000000007b000000 (256 MB)
Oct  9 05:00:21 np0005478303 kernel: Zone ranges:
Oct  9 05:00:21 np0005478303 kernel:  DMA      [mem 0x0000000000001000-0x0000000000ffffff]
Oct  9 05:00:21 np0005478303 kernel:  DMA32    [mem 0x0000000001000000-0x00000000ffffffff]
Oct  9 05:00:21 np0005478303 kernel:  Normal   [mem 0x0000000100000000-0x000000027fffffff]
Oct  9 05:00:21 np0005478303 kernel:  Device   empty
Oct  9 05:00:21 np0005478303 kernel: Movable zone start for each node
Oct  9 05:00:21 np0005478303 kernel: Early memory node ranges
Oct  9 05:00:21 np0005478303 kernel:  node   0: [mem 0x0000000000001000-0x000000000009efff]
Oct  9 05:00:21 np0005478303 kernel:  node   0: [mem 0x0000000000100000-0x000000007ffdafff]
Oct  9 05:00:21 np0005478303 kernel:  node   0: [mem 0x0000000100000000-0x000000027fffffff]
Oct  9 05:00:21 np0005478303 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000027fffffff]
Oct  9 05:00:21 np0005478303 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Oct  9 05:00:21 np0005478303 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Oct  9 05:00:21 np0005478303 kernel: On node 0, zone Normal: 37 pages in unavailable ranges
Oct  9 05:00:21 np0005478303 kernel: ACPI: PM-Timer IO Port: 0x608
Oct  9 05:00:21 np0005478303 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Oct  9 05:00:21 np0005478303 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Oct  9 05:00:21 np0005478303 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Oct  9 05:00:21 np0005478303 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Oct  9 05:00:21 np0005478303 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Oct  9 05:00:21 np0005478303 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Oct  9 05:00:21 np0005478303 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Oct  9 05:00:21 np0005478303 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Oct  9 05:00:21 np0005478303 kernel: TSC deadline timer available
Oct  9 05:00:21 np0005478303 kernel: CPU topo: Max. logical packages:   4
Oct  9 05:00:21 np0005478303 kernel: CPU topo: Max. logical dies:       4
Oct  9 05:00:21 np0005478303 kernel: CPU topo: Max. dies per package:   1
Oct  9 05:00:21 np0005478303 kernel: CPU topo: Max. threads per core:   1
Oct  9 05:00:21 np0005478303 kernel: CPU topo: Num. cores per package:     1
Oct  9 05:00:21 np0005478303 kernel: CPU topo: Num. threads per package:   1
Oct  9 05:00:21 np0005478303 kernel: CPU topo: Allowing 4 present CPUs plus 0 hotplug CPUs
Oct  9 05:00:21 np0005478303 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Oct  9 05:00:21 np0005478303 kernel: kvm-guest: KVM setup pv remote TLB flush
Oct  9 05:00:21 np0005478303 kernel: kvm-guest: setup PV sched yield
Oct  9 05:00:21 np0005478303 kernel: PM: hibernation: Registered nosave memory: [mem 0x00000000-0x00000fff]
Oct  9 05:00:21 np0005478303 kernel: PM: hibernation: Registered nosave memory: [mem 0x0009f000-0x0009ffff]
Oct  9 05:00:21 np0005478303 kernel: PM: hibernation: Registered nosave memory: [mem 0x000a0000-0x000effff]
Oct  9 05:00:21 np0005478303 kernel: PM: hibernation: Registered nosave memory: [mem 0x000f0000-0x000fffff]
Oct  9 05:00:21 np0005478303 kernel: PM: hibernation: Registered nosave memory: [mem 0x7ffdb000-0x7fffffff]
Oct  9 05:00:21 np0005478303 kernel: PM: hibernation: Registered nosave memory: [mem 0x80000000-0xafffffff]
Oct  9 05:00:21 np0005478303 kernel: PM: hibernation: Registered nosave memory: [mem 0xb0000000-0xbfffffff]
Oct  9 05:00:21 np0005478303 kernel: PM: hibernation: Registered nosave memory: [mem 0xc0000000-0xfed1bfff]
Oct  9 05:00:21 np0005478303 kernel: PM: hibernation: Registered nosave memory: [mem 0xfed1c000-0xfed1ffff]
Oct  9 05:00:21 np0005478303 kernel: PM: hibernation: Registered nosave memory: [mem 0xfed20000-0xfeffbfff]
Oct  9 05:00:21 np0005478303 kernel: PM: hibernation: Registered nosave memory: [mem 0xfeffc000-0xfeffffff]
Oct  9 05:00:21 np0005478303 kernel: PM: hibernation: Registered nosave memory: [mem 0xff000000-0xfffbffff]
Oct  9 05:00:21 np0005478303 kernel: PM: hibernation: Registered nosave memory: [mem 0xfffc0000-0xffffffff]
Oct  9 05:00:21 np0005478303 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Oct  9 05:00:21 np0005478303 kernel: Booting paravirtualized kernel on KVM
Oct  9 05:00:21 np0005478303 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Oct  9 05:00:21 np0005478303 kernel: setup_percpu: NR_CPUS:8192 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Oct  9 05:00:21 np0005478303 kernel: percpu: Embedded 64 pages/cpu s225280 r8192 d28672 u524288
Oct  9 05:00:21 np0005478303 kernel: kvm-guest: PV spinlocks enabled
Oct  9 05:00:21 np0005478303 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Oct  9 05:00:21 np0005478303 kernel: Kernel command line: BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-620.el9.x86_64 root=UUID=1631a6ad-43b8-436d-ae76-16fa14b94458 ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Oct  9 05:00:21 np0005478303 kernel: Unknown kernel command line parameters "BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-620.el9.x86_64", will be passed to user space.
Oct  9 05:00:21 np0005478303 kernel: random: crng init done
Oct  9 05:00:21 np0005478303 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear)
Oct  9 05:00:21 np0005478303 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Oct  9 05:00:21 np0005478303 kernel: Fallback order for Node 0: 0 
Oct  9 05:00:21 np0005478303 kernel: Built 1 zonelists, mobility grouping on.  Total pages: 2064091
Oct  9 05:00:21 np0005478303 kernel: Policy zone: Normal
Oct  9 05:00:21 np0005478303 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Oct  9 05:00:21 np0005478303 kernel: software IO TLB: area num 4.
Oct  9 05:00:21 np0005478303 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Oct  9 05:00:21 np0005478303 kernel: ftrace: allocating 49370 entries in 193 pages
Oct  9 05:00:21 np0005478303 kernel: ftrace: allocated 193 pages with 3 groups
Oct  9 05:00:21 np0005478303 kernel: Dynamic Preempt: voluntary
Oct  9 05:00:21 np0005478303 kernel: rcu: Preemptible hierarchical RCU implementation.
Oct  9 05:00:21 np0005478303 kernel: rcu: 	RCU event tracing is enabled.
Oct  9 05:00:21 np0005478303 kernel: rcu: 	RCU restricting CPUs from NR_CPUS=8192 to nr_cpu_ids=4.
Oct  9 05:00:21 np0005478303 kernel: 	Trampoline variant of Tasks RCU enabled.
Oct  9 05:00:21 np0005478303 kernel: 	Rude variant of Tasks RCU enabled.
Oct  9 05:00:21 np0005478303 kernel: 	Tracing variant of Tasks RCU enabled.
Oct  9 05:00:21 np0005478303 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Oct  9 05:00:21 np0005478303 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Oct  9 05:00:21 np0005478303 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Oct  9 05:00:21 np0005478303 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Oct  9 05:00:21 np0005478303 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Oct  9 05:00:21 np0005478303 kernel: NR_IRQS: 524544, nr_irqs: 456, preallocated irqs: 16
Oct  9 05:00:21 np0005478303 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Oct  9 05:00:21 np0005478303 kernel: kfence: initialized - using 2097152 bytes for 255 objects at 0x(____ptrval____)-0x(____ptrval____)
Oct  9 05:00:21 np0005478303 kernel: Console: colour VGA+ 80x25
Oct  9 05:00:21 np0005478303 kernel: printk: console [ttyS0] enabled
Oct  9 05:00:21 np0005478303 kernel: ACPI: Core revision 20230331
Oct  9 05:00:21 np0005478303 kernel: APIC: Switch to symmetric I/O mode setup
Oct  9 05:00:21 np0005478303 kernel: x2apic enabled
Oct  9 05:00:21 np0005478303 kernel: APIC: Switched APIC routing to: physical x2apic
Oct  9 05:00:21 np0005478303 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Oct  9 05:00:21 np0005478303 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Oct  9 05:00:21 np0005478303 kernel: kvm-guest: setup PV IPIs
Oct  9 05:00:21 np0005478303 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Oct  9 05:00:21 np0005478303 kernel: Calibrating delay loop (skipped) preset value.. 4890.81 BogoMIPS (lpj=2445406)
Oct  9 05:00:21 np0005478303 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Oct  9 05:00:21 np0005478303 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Oct  9 05:00:21 np0005478303 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Oct  9 05:00:21 np0005478303 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Oct  9 05:00:21 np0005478303 kernel: Spectre V2 : Mitigation: Retpolines
Oct  9 05:00:21 np0005478303 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Oct  9 05:00:21 np0005478303 kernel: Spectre V2 : Enabling Restricted Speculation for firmware calls
Oct  9 05:00:21 np0005478303 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Oct  9 05:00:21 np0005478303 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Oct  9 05:00:21 np0005478303 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Oct  9 05:00:21 np0005478303 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Oct  9 05:00:21 np0005478303 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Oct  9 05:00:21 np0005478303 kernel: Transient Scheduler Attacks: Vulnerable: No microcode
Oct  9 05:00:21 np0005478303 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Oct  9 05:00:21 np0005478303 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Oct  9 05:00:21 np0005478303 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Oct  9 05:00:21 np0005478303 kernel: x86/fpu: Supporting XSAVE feature 0x200: 'Protection Keys User registers'
Oct  9 05:00:21 np0005478303 kernel: x86/fpu: xstate_offset[2]:  576, xstate_sizes[2]:  256
Oct  9 05:00:21 np0005478303 kernel: x86/fpu: xstate_offset[9]:  832, xstate_sizes[9]:    8
Oct  9 05:00:21 np0005478303 kernel: x86/fpu: Enabled xstate features 0x207, context size is 840 bytes, using 'compacted' format.
Oct  9 05:00:21 np0005478303 kernel: Freeing SMP alternatives memory: 40K
Oct  9 05:00:21 np0005478303 kernel: pid_max: default: 32768 minimum: 301
Oct  9 05:00:21 np0005478303 kernel: LSM: initializing lsm=lockdown,capability,landlock,yama,integrity,selinux,bpf
Oct  9 05:00:21 np0005478303 kernel: landlock: Up and running.
Oct  9 05:00:21 np0005478303 kernel: Yama: becoming mindful.
Oct  9 05:00:21 np0005478303 kernel: SELinux:  Initializing.
Oct  9 05:00:21 np0005478303 kernel: LSM support for eBPF active
Oct  9 05:00:21 np0005478303 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Oct  9 05:00:21 np0005478303 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Oct  9 05:00:21 np0005478303 kernel: smpboot: CPU0: AMD EPYC 7763 64-Core Processor (family: 0x19, model: 0x1, stepping: 0x1)
Oct  9 05:00:21 np0005478303 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Oct  9 05:00:21 np0005478303 kernel: ... version:                0
Oct  9 05:00:21 np0005478303 kernel: ... bit width:              48
Oct  9 05:00:21 np0005478303 kernel: ... generic registers:      6
Oct  9 05:00:21 np0005478303 kernel: ... value mask:             0000ffffffffffff
Oct  9 05:00:21 np0005478303 kernel: ... max period:             00007fffffffffff
Oct  9 05:00:21 np0005478303 kernel: ... fixed-purpose events:   0
Oct  9 05:00:21 np0005478303 kernel: ... event mask:             000000000000003f
Oct  9 05:00:21 np0005478303 kernel: signal: max sigframe size: 3376
Oct  9 05:00:21 np0005478303 kernel: rcu: Hierarchical SRCU implementation.
Oct  9 05:00:21 np0005478303 kernel: rcu: 	Max phase no-delay instances is 400.
Oct  9 05:00:21 np0005478303 kernel: smp: Bringing up secondary CPUs ...
Oct  9 05:00:21 np0005478303 kernel: smpboot: x86: Booting SMP configuration:
Oct  9 05:00:21 np0005478303 kernel: .... node  #0, CPUs:      #1 #2 #3
Oct  9 05:00:21 np0005478303 kernel: smp: Brought up 1 node, 4 CPUs
Oct  9 05:00:21 np0005478303 kernel: smpboot: Total of 4 processors activated (19563.24 BogoMIPS)
Oct  9 05:00:21 np0005478303 kernel: node 0 deferred pages initialised in 17ms
Oct  9 05:00:21 np0005478303 kernel: Memory: 7767884K/8388068K available (16384K kernel code, 5784K rwdata, 13996K rodata, 4068K init, 7304K bss, 615464K reserved, 0K cma-reserved)
Oct  9 05:00:21 np0005478303 kernel: devtmpfs: initialized
Oct  9 05:00:21 np0005478303 kernel: x86/mm: Memory block size: 128MB
Oct  9 05:00:21 np0005478303 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Oct  9 05:00:21 np0005478303 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Oct  9 05:00:21 np0005478303 kernel: pinctrl core: initialized pinctrl subsystem
Oct  9 05:00:21 np0005478303 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Oct  9 05:00:21 np0005478303 kernel: DMA: preallocated 1024 KiB GFP_KERNEL pool for atomic allocations
Oct  9 05:00:21 np0005478303 kernel: DMA: preallocated 1024 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Oct  9 05:00:21 np0005478303 kernel: DMA: preallocated 1024 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Oct  9 05:00:21 np0005478303 kernel: audit: initializing netlink subsys (disabled)
Oct  9 05:00:21 np0005478303 kernel: thermal_sys: Registered thermal governor 'fair_share'
Oct  9 05:00:21 np0005478303 kernel: thermal_sys: Registered thermal governor 'step_wise'
Oct  9 05:00:21 np0005478303 kernel: thermal_sys: Registered thermal governor 'user_space'
Oct  9 05:00:21 np0005478303 kernel: audit: type=2000 audit(1760000420.510:1): state=initialized audit_enabled=0 res=1
Oct  9 05:00:21 np0005478303 kernel: cpuidle: using governor menu
Oct  9 05:00:21 np0005478303 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Oct  9 05:00:21 np0005478303 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] (base 0xb0000000) for domain 0000 [bus 00-ff]
Oct  9 05:00:21 np0005478303 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Oct  9 05:00:21 np0005478303 kernel: PCI: Using configuration type 1 for base access
Oct  9 05:00:21 np0005478303 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Oct  9 05:00:21 np0005478303 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Oct  9 05:00:21 np0005478303 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Oct  9 05:00:21 np0005478303 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Oct  9 05:00:21 np0005478303 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Oct  9 05:00:21 np0005478303 kernel: Demotion targets for Node 0: null
Oct  9 05:00:21 np0005478303 kernel: cryptd: max_cpu_qlen set to 1000
Oct  9 05:00:21 np0005478303 kernel: ACPI: Added _OSI(Module Device)
Oct  9 05:00:21 np0005478303 kernel: ACPI: Added _OSI(Processor Device)
Oct  9 05:00:21 np0005478303 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Oct  9 05:00:21 np0005478303 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Oct  9 05:00:21 np0005478303 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Oct  9 05:00:21 np0005478303 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Oct  9 05:00:21 np0005478303 kernel: ACPI: Interpreter enabled
Oct  9 05:00:21 np0005478303 kernel: ACPI: PM: (supports S0 S5)
Oct  9 05:00:21 np0005478303 kernel: ACPI: Using IOAPIC for interrupt routing
Oct  9 05:00:21 np0005478303 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Oct  9 05:00:21 np0005478303 kernel: PCI: Using E820 reservations for host bridge windows
Oct  9 05:00:21 np0005478303 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Oct  9 05:00:21 np0005478303 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Oct  9 05:00:21 np0005478303 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI EDR HPX-Type3]
Oct  9 05:00:21 np0005478303 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR DPC]
Oct  9 05:00:21 np0005478303 kernel: acpi PNP0A08:00: _OSC: OS now controls [SHPCHotplug PME AER PCIeCapability]
Oct  9 05:00:21 np0005478303 kernel: PCI host bridge to bus 0000:00
Oct  9 05:00:21 np0005478303 kernel: pci_bus 0000:00: root bus resource [io  0x0000-0x0cf7 window]
Oct  9 05:00:21 np0005478303 kernel: pci_bus 0000:00: root bus resource [io  0x0d00-0xffff window]
Oct  9 05:00:21 np0005478303 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Oct  9 05:00:21 np0005478303 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xafffffff window]
Oct  9 05:00:21 np0005478303 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Oct  9 05:00:21 np0005478303 kernel: pci_bus 0000:00: root bus resource [mem 0x280000000-0xa7fffffff window]
Oct  9 05:00:21 np0005478303 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Oct  9 05:00:21 np0005478303 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 conventional PCI endpoint
Oct  9 05:00:21 np0005478303 kernel: pci 0000:00:01.0: [1af4:1050] type 00 class 0x030000 conventional PCI endpoint
Oct  9 05:00:21 np0005478303 kernel: pci 0000:00:01.0: BAR 0 [mem 0xf9800000-0xf9ffffff pref]
Oct  9 05:00:21 np0005478303 kernel: pci 0000:00:01.0: BAR 2 [mem 0xfc200000-0xfc203fff 64bit pref]
Oct  9 05:00:21 np0005478303 kernel: pci 0000:00:01.0: BAR 4 [mem 0xfea10000-0xfea10fff]
Oct  9 05:00:21 np0005478303 kernel: pci 0000:00:01.0: ROM [mem 0xfea00000-0xfea0ffff pref]
Oct  9 05:00:21 np0005478303 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Oct  9 05:00:21 np0005478303 kernel: pci 0000:00:02.0: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Oct  9 05:00:21 np0005478303 kernel: pci 0000:00:02.0: BAR 0 [mem 0xfea11000-0xfea11fff]
Oct  9 05:00:21 np0005478303 kernel: pci 0000:00:02.0: PCI bridge to [bus 01-02]
Oct  9 05:00:21 np0005478303 kernel: pci 0000:00:02.0:   bridge window [io  0xc000-0xcfff]
Oct  9 05:00:21 np0005478303 kernel: pci 0000:00:02.0:   bridge window [mem 0xfc600000-0xfc9fffff]
Oct  9 05:00:21 np0005478303 kernel: pci 0000:00:02.0:   bridge window [mem 0xfc000000-0xfc1fffff 64bit pref]
Oct  9 05:00:21 np0005478303 kernel: pci 0000:00:02.1: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Oct  9 05:00:21 np0005478303 kernel: pci 0000:00:02.1: BAR 0 [mem 0xfea12000-0xfea12fff]
Oct  9 05:00:21 np0005478303 kernel: pci 0000:00:02.1: PCI bridge to [bus 03]
Oct  9 05:00:21 np0005478303 kernel: pci 0000:00:02.1:   bridge window [mem 0xfe800000-0xfe9fffff]
Oct  9 05:00:21 np0005478303 kernel: pci 0000:00:02.1:   bridge window [mem 0xfbe00000-0xfbffffff 64bit pref]
Oct  9 05:00:21 np0005478303 kernel: pci 0000:00:02.2: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Oct  9 05:00:21 np0005478303 kernel: pci 0000:00:02.2: BAR 0 [mem 0xfea13000-0xfea13fff]
Oct  9 05:00:21 np0005478303 kernel: pci 0000:00:02.2: PCI bridge to [bus 04]
Oct  9 05:00:21 np0005478303 kernel: pci 0000:00:02.2:   bridge window [mem 0xfe600000-0xfe7fffff]
Oct  9 05:00:21 np0005478303 kernel: pci 0000:00:02.2:   bridge window [mem 0xfbc00000-0xfbdfffff 64bit pref]
Oct  9 05:00:21 np0005478303 kernel: pci 0000:00:02.3: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Oct  9 05:00:21 np0005478303 kernel: pci 0000:00:02.3: BAR 0 [mem 0xfea14000-0xfea14fff]
Oct  9 05:00:21 np0005478303 kernel: pci 0000:00:02.3: PCI bridge to [bus 05]
Oct  9 05:00:21 np0005478303 kernel: pci 0000:00:02.3:   bridge window [mem 0xfe400000-0xfe5fffff]
Oct  9 05:00:21 np0005478303 kernel: pci 0000:00:02.3:   bridge window [mem 0xfba00000-0xfbbfffff 64bit pref]
Oct  9 05:00:21 np0005478303 kernel: pci 0000:00:02.4: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Oct  9 05:00:21 np0005478303 kernel: pci 0000:00:02.4: BAR 0 [mem 0xfea15000-0xfea15fff]
Oct  9 05:00:21 np0005478303 kernel: pci 0000:00:02.4: PCI bridge to [bus 06]
Oct  9 05:00:21 np0005478303 kernel: pci 0000:00:02.4:   bridge window [mem 0xfe200000-0xfe3fffff]
Oct  9 05:00:21 np0005478303 kernel: pci 0000:00:02.4:   bridge window [mem 0xfb800000-0xfb9fffff 64bit pref]
Oct  9 05:00:21 np0005478303 kernel: pci 0000:00:02.5: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Oct  9 05:00:21 np0005478303 kernel: pci 0000:00:02.5: BAR 0 [mem 0xfea16000-0xfea16fff]
Oct  9 05:00:21 np0005478303 kernel: pci 0000:00:02.5: PCI bridge to [bus 07]
Oct  9 05:00:21 np0005478303 kernel: pci 0000:00:02.5:   bridge window [mem 0xfe000000-0xfe1fffff]
Oct  9 05:00:21 np0005478303 kernel: pci 0000:00:02.5:   bridge window [mem 0xfb600000-0xfb7fffff 64bit pref]
Oct  9 05:00:21 np0005478303 kernel: pci 0000:00:02.6: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Oct  9 05:00:21 np0005478303 kernel: pci 0000:00:02.6: BAR 0 [mem 0xfea17000-0xfea17fff]
Oct  9 05:00:21 np0005478303 kernel: pci 0000:00:02.6: PCI bridge to [bus 08]
Oct  9 05:00:21 np0005478303 kernel: pci 0000:00:02.6:   bridge window [mem 0xfde00000-0xfdffffff]
Oct  9 05:00:21 np0005478303 kernel: pci 0000:00:02.6:   bridge window [mem 0xfb400000-0xfb5fffff 64bit pref]
Oct  9 05:00:21 np0005478303 kernel: pci 0000:00:02.7: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Oct  9 05:00:21 np0005478303 kernel: pci 0000:00:02.7: BAR 0 [mem 0xfea18000-0xfea18fff]
Oct  9 05:00:21 np0005478303 kernel: pci 0000:00:02.7: PCI bridge to [bus 09]
Oct  9 05:00:21 np0005478303 kernel: pci 0000:00:02.7:   bridge window [mem 0xfdc00000-0xfddfffff]
Oct  9 05:00:21 np0005478303 kernel: pci 0000:00:02.7:   bridge window [mem 0xfb200000-0xfb3fffff 64bit pref]
Oct  9 05:00:21 np0005478303 kernel: pci 0000:00:03.0: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Oct  9 05:00:21 np0005478303 kernel: pci 0000:00:03.0: BAR 0 [mem 0xfea19000-0xfea19fff]
Oct  9 05:00:21 np0005478303 kernel: pci 0000:00:03.0: PCI bridge to [bus 0a]
Oct  9 05:00:21 np0005478303 kernel: pci 0000:00:03.0:   bridge window [mem 0xfda00000-0xfdbfffff]
Oct  9 05:00:21 np0005478303 kernel: pci 0000:00:03.0:   bridge window [mem 0xfb000000-0xfb1fffff 64bit pref]
Oct  9 05:00:21 np0005478303 kernel: pci 0000:00:03.1: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Oct  9 05:00:21 np0005478303 kernel: pci 0000:00:03.1: BAR 0 [mem 0xfea1a000-0xfea1afff]
Oct  9 05:00:21 np0005478303 kernel: pci 0000:00:03.1: PCI bridge to [bus 0b]
Oct  9 05:00:21 np0005478303 kernel: pci 0000:00:03.1:   bridge window [mem 0xfd800000-0xfd9fffff]
Oct  9 05:00:21 np0005478303 kernel: pci 0000:00:03.1:   bridge window [mem 0xfae00000-0xfaffffff 64bit pref]
Oct  9 05:00:21 np0005478303 kernel: pci 0000:00:03.2: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Oct  9 05:00:21 np0005478303 kernel: pci 0000:00:03.2: BAR 0 [mem 0xfea1b000-0xfea1bfff]
Oct  9 05:00:21 np0005478303 kernel: pci 0000:00:03.2: PCI bridge to [bus 0c]
Oct  9 05:00:21 np0005478303 kernel: pci 0000:00:03.2:   bridge window [mem 0xfd600000-0xfd7fffff]
Oct  9 05:00:21 np0005478303 kernel: pci 0000:00:03.2:   bridge window [mem 0xfac00000-0xfadfffff 64bit pref]
Oct  9 05:00:21 np0005478303 kernel: pci 0000:00:03.3: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Oct  9 05:00:21 np0005478303 kernel: pci 0000:00:03.3: BAR 0 [mem 0xfea1c000-0xfea1cfff]
Oct  9 05:00:21 np0005478303 kernel: pci 0000:00:03.3: PCI bridge to [bus 0d]
Oct  9 05:00:21 np0005478303 kernel: pci 0000:00:03.3:   bridge window [mem 0xfd400000-0xfd5fffff]
Oct  9 05:00:21 np0005478303 kernel: pci 0000:00:03.3:   bridge window [mem 0xfaa00000-0xfabfffff 64bit pref]
Oct  9 05:00:21 np0005478303 kernel: pci 0000:00:03.4: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Oct  9 05:00:21 np0005478303 kernel: pci 0000:00:03.4: BAR 0 [mem 0xfea1d000-0xfea1dfff]
Oct  9 05:00:21 np0005478303 kernel: pci 0000:00:03.4: PCI bridge to [bus 0e]
Oct  9 05:00:21 np0005478303 kernel: pci 0000:00:03.4:   bridge window [mem 0xfd200000-0xfd3fffff]
Oct  9 05:00:21 np0005478303 kernel: pci 0000:00:03.4:   bridge window [mem 0xfa800000-0xfa9fffff 64bit pref]
Oct  9 05:00:21 np0005478303 kernel: pci 0000:00:03.5: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Oct  9 05:00:21 np0005478303 kernel: pci 0000:00:03.5: BAR 0 [mem 0xfea1e000-0xfea1efff]
Oct  9 05:00:21 np0005478303 kernel: pci 0000:00:03.5: PCI bridge to [bus 0f]
Oct  9 05:00:21 np0005478303 kernel: pci 0000:00:03.5:   bridge window [mem 0xfd000000-0xfd1fffff]
Oct  9 05:00:21 np0005478303 kernel: pci 0000:00:03.5:   bridge window [mem 0xfa600000-0xfa7fffff 64bit pref]
Oct  9 05:00:21 np0005478303 kernel: pci 0000:00:03.6: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Oct  9 05:00:21 np0005478303 kernel: pci 0000:00:03.6: BAR 0 [mem 0xfea1f000-0xfea1ffff]
Oct  9 05:00:21 np0005478303 kernel: pci 0000:00:03.6: PCI bridge to [bus 10]
Oct  9 05:00:21 np0005478303 kernel: pci 0000:00:03.6:   bridge window [mem 0xfce00000-0xfcffffff]
Oct  9 05:00:21 np0005478303 kernel: pci 0000:00:03.6:   bridge window [mem 0xfa400000-0xfa5fffff 64bit pref]
Oct  9 05:00:21 np0005478303 kernel: pci 0000:00:03.7: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Oct  9 05:00:21 np0005478303 kernel: pci 0000:00:03.7: BAR 0 [mem 0xfea20000-0xfea20fff]
Oct  9 05:00:21 np0005478303 kernel: pci 0000:00:03.7: PCI bridge to [bus 11]
Oct  9 05:00:21 np0005478303 kernel: pci 0000:00:03.7:   bridge window [mem 0xfcc00000-0xfcdfffff]
Oct  9 05:00:21 np0005478303 kernel: pci 0000:00:03.7:   bridge window [mem 0xfa200000-0xfa3fffff 64bit pref]
Oct  9 05:00:21 np0005478303 kernel: pci 0000:00:04.0: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Oct  9 05:00:21 np0005478303 kernel: pci 0000:00:04.0: BAR 0 [mem 0xfea21000-0xfea21fff]
Oct  9 05:00:21 np0005478303 kernel: pci 0000:00:04.0: PCI bridge to [bus 12]
Oct  9 05:00:21 np0005478303 kernel: pci 0000:00:04.0:   bridge window [mem 0xfca00000-0xfcbfffff]
Oct  9 05:00:21 np0005478303 kernel: pci 0000:00:04.0:   bridge window [mem 0xfa000000-0xfa1fffff 64bit pref]
Oct  9 05:00:21 np0005478303 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 conventional PCI endpoint
Oct  9 05:00:21 np0005478303 kernel: pci 0000:00:1f.0: quirk: [io  0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Oct  9 05:00:21 np0005478303 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 conventional PCI endpoint
Oct  9 05:00:21 np0005478303 kernel: pci 0000:00:1f.2: BAR 4 [io  0xd040-0xd05f]
Oct  9 05:00:21 np0005478303 kernel: pci 0000:00:1f.2: BAR 5 [mem 0xfea22000-0xfea22fff]
Oct  9 05:00:21 np0005478303 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 conventional PCI endpoint
Oct  9 05:00:21 np0005478303 kernel: pci 0000:00:1f.3: BAR 4 [io  0x0700-0x073f]
Oct  9 05:00:21 np0005478303 kernel: pci 0000:01:00.0: [1b36:000e] type 01 class 0x060400 PCIe to PCI/PCI-X bridge
Oct  9 05:00:21 np0005478303 kernel: pci 0000:01:00.0: BAR 0 [mem 0xfc800000-0xfc8000ff 64bit]
Oct  9 05:00:21 np0005478303 kernel: pci 0000:01:00.0: PCI bridge to [bus 02]
Oct  9 05:00:21 np0005478303 kernel: pci 0000:01:00.0:   bridge window [io  0xc000-0xcfff]
Oct  9 05:00:21 np0005478303 kernel: pci 0000:01:00.0:   bridge window [mem 0xfc600000-0xfc7fffff]
Oct  9 05:00:21 np0005478303 kernel: pci 0000:01:00.0:   bridge window [mem 0xfc000000-0xfc1fffff 64bit pref]
Oct  9 05:00:21 np0005478303 kernel: pci 0000:00:02.0: PCI bridge to [bus 01-02]
Oct  9 05:00:21 np0005478303 kernel: pci_bus 0000:02: extended config space not accessible
Oct  9 05:00:21 np0005478303 kernel: acpiphp: Slot [0] registered
Oct  9 05:00:21 np0005478303 kernel: acpiphp: Slot [1] registered
Oct  9 05:00:21 np0005478303 kernel: acpiphp: Slot [2] registered
Oct  9 05:00:21 np0005478303 kernel: acpiphp: Slot [3] registered
Oct  9 05:00:21 np0005478303 kernel: acpiphp: Slot [4] registered
Oct  9 05:00:21 np0005478303 kernel: acpiphp: Slot [5] registered
Oct  9 05:00:21 np0005478303 kernel: acpiphp: Slot [6] registered
Oct  9 05:00:21 np0005478303 kernel: acpiphp: Slot [7] registered
Oct  9 05:00:21 np0005478303 kernel: acpiphp: Slot [8] registered
Oct  9 05:00:21 np0005478303 kernel: acpiphp: Slot [9] registered
Oct  9 05:00:21 np0005478303 kernel: acpiphp: Slot [10] registered
Oct  9 05:00:21 np0005478303 kernel: acpiphp: Slot [11] registered
Oct  9 05:00:21 np0005478303 kernel: acpiphp: Slot [12] registered
Oct  9 05:00:21 np0005478303 kernel: acpiphp: Slot [13] registered
Oct  9 05:00:21 np0005478303 kernel: acpiphp: Slot [14] registered
Oct  9 05:00:21 np0005478303 kernel: acpiphp: Slot [15] registered
Oct  9 05:00:21 np0005478303 kernel: acpiphp: Slot [16] registered
Oct  9 05:00:21 np0005478303 kernel: acpiphp: Slot [17] registered
Oct  9 05:00:21 np0005478303 kernel: acpiphp: Slot [18] registered
Oct  9 05:00:21 np0005478303 kernel: acpiphp: Slot [19] registered
Oct  9 05:00:21 np0005478303 kernel: acpiphp: Slot [20] registered
Oct  9 05:00:21 np0005478303 kernel: acpiphp: Slot [21] registered
Oct  9 05:00:21 np0005478303 kernel: acpiphp: Slot [22] registered
Oct  9 05:00:21 np0005478303 kernel: acpiphp: Slot [23] registered
Oct  9 05:00:21 np0005478303 kernel: acpiphp: Slot [24] registered
Oct  9 05:00:21 np0005478303 kernel: acpiphp: Slot [25] registered
Oct  9 05:00:21 np0005478303 kernel: acpiphp: Slot [26] registered
Oct  9 05:00:21 np0005478303 kernel: acpiphp: Slot [27] registered
Oct  9 05:00:21 np0005478303 kernel: acpiphp: Slot [28] registered
Oct  9 05:00:21 np0005478303 kernel: acpiphp: Slot [29] registered
Oct  9 05:00:21 np0005478303 kernel: acpiphp: Slot [30] registered
Oct  9 05:00:21 np0005478303 kernel: acpiphp: Slot [31] registered
Oct  9 05:00:21 np0005478303 kernel: pci 0000:02:01.0: [8086:7020] type 00 class 0x0c0300 conventional PCI endpoint
Oct  9 05:00:21 np0005478303 kernel: pci 0000:02:01.0: BAR 4 [io  0xc000-0xc01f]
Oct  9 05:00:21 np0005478303 kernel: pci 0000:01:00.0: PCI bridge to [bus 02]
Oct  9 05:00:21 np0005478303 kernel: acpiphp: Slot [0-2] registered
Oct  9 05:00:21 np0005478303 kernel: pci 0000:03:00.0: [1af4:1041] type 00 class 0x020000 PCIe Endpoint
Oct  9 05:00:21 np0005478303 kernel: pci 0000:03:00.0: BAR 1 [mem 0xfe840000-0xfe840fff]
Oct  9 05:00:21 np0005478303 kernel: pci 0000:03:00.0: BAR 4 [mem 0xfbe00000-0xfbe03fff 64bit pref]
Oct  9 05:00:21 np0005478303 kernel: pci 0000:03:00.0: ROM [mem 0xfe800000-0xfe83ffff pref]
Oct  9 05:00:21 np0005478303 kernel: pci 0000:00:02.1: PCI bridge to [bus 03]
Oct  9 05:00:21 np0005478303 kernel: acpiphp: Slot [0-3] registered
Oct  9 05:00:21 np0005478303 kernel: pci 0000:04:00.0: [1af4:1042] type 00 class 0x010000 PCIe Endpoint
Oct  9 05:00:21 np0005478303 kernel: pci 0000:04:00.0: BAR 1 [mem 0xfe600000-0xfe600fff]
Oct  9 05:00:21 np0005478303 kernel: pci 0000:04:00.0: BAR 4 [mem 0xfbc00000-0xfbc03fff 64bit pref]
Oct  9 05:00:21 np0005478303 kernel: pci 0000:00:02.2: PCI bridge to [bus 04]
Oct  9 05:00:21 np0005478303 kernel: acpiphp: Slot [0-4] registered
Oct  9 05:00:21 np0005478303 kernel: pci 0000:05:00.0: [1af4:1045] type 00 class 0x00ff00 PCIe Endpoint
Oct  9 05:00:21 np0005478303 kernel: pci 0000:05:00.0: BAR 4 [mem 0xfba00000-0xfba03fff 64bit pref]
Oct  9 05:00:21 np0005478303 kernel: pci 0000:00:02.3: PCI bridge to [bus 05]
Oct  9 05:00:21 np0005478303 kernel: acpiphp: Slot [0-5] registered
Oct  9 05:00:21 np0005478303 kernel: pci 0000:06:00.0: [1af4:1044] type 00 class 0x00ff00 PCIe Endpoint
Oct  9 05:00:21 np0005478303 kernel: pci 0000:06:00.0: BAR 4 [mem 0xfb800000-0xfb803fff 64bit pref]
Oct  9 05:00:21 np0005478303 kernel: pci 0000:00:02.4: PCI bridge to [bus 06]
Oct  9 05:00:21 np0005478303 kernel: acpiphp: Slot [0-6] registered
Oct  9 05:00:21 np0005478303 kernel: pci 0000:00:02.5: PCI bridge to [bus 07]
Oct  9 05:00:21 np0005478303 kernel: acpiphp: Slot [0-7] registered
Oct  9 05:00:21 np0005478303 kernel: pci 0000:00:02.6: PCI bridge to [bus 08]
Oct  9 05:00:21 np0005478303 kernel: acpiphp: Slot [0-8] registered
Oct  9 05:00:21 np0005478303 kernel: pci 0000:00:02.7: PCI bridge to [bus 09]
Oct  9 05:00:21 np0005478303 kernel: acpiphp: Slot [0-9] registered
Oct  9 05:00:21 np0005478303 kernel: pci 0000:00:03.0: PCI bridge to [bus 0a]
Oct  9 05:00:21 np0005478303 kernel: acpiphp: Slot [0-10] registered
Oct  9 05:00:21 np0005478303 kernel: pci 0000:00:03.1: PCI bridge to [bus 0b]
Oct  9 05:00:21 np0005478303 kernel: acpiphp: Slot [0-11] registered
Oct  9 05:00:21 np0005478303 kernel: pci 0000:00:03.2: PCI bridge to [bus 0c]
Oct  9 05:00:21 np0005478303 kernel: acpiphp: Slot [0-12] registered
Oct  9 05:00:21 np0005478303 kernel: pci 0000:00:03.3: PCI bridge to [bus 0d]
Oct  9 05:00:21 np0005478303 kernel: acpiphp: Slot [0-13] registered
Oct  9 05:00:21 np0005478303 kernel: pci 0000:00:03.4: PCI bridge to [bus 0e]
Oct  9 05:00:21 np0005478303 kernel: acpiphp: Slot [0-14] registered
Oct  9 05:00:21 np0005478303 kernel: pci 0000:00:03.5: PCI bridge to [bus 0f]
Oct  9 05:00:21 np0005478303 kernel: acpiphp: Slot [0-15] registered
Oct  9 05:00:21 np0005478303 kernel: pci 0000:00:03.6: PCI bridge to [bus 10]
Oct  9 05:00:21 np0005478303 kernel: acpiphp: Slot [0-16] registered
Oct  9 05:00:21 np0005478303 kernel: pci 0000:00:03.7: PCI bridge to [bus 11]
Oct  9 05:00:21 np0005478303 kernel: acpiphp: Slot [0-17] registered
Oct  9 05:00:21 np0005478303 kernel: pci 0000:00:04.0: PCI bridge to [bus 12]
Oct  9 05:00:21 np0005478303 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Oct  9 05:00:21 np0005478303 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Oct  9 05:00:21 np0005478303 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Oct  9 05:00:21 np0005478303 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Oct  9 05:00:21 np0005478303 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Oct  9 05:00:21 np0005478303 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Oct  9 05:00:21 np0005478303 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Oct  9 05:00:21 np0005478303 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Oct  9 05:00:21 np0005478303 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Oct  9 05:00:21 np0005478303 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Oct  9 05:00:21 np0005478303 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Oct  9 05:00:21 np0005478303 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Oct  9 05:00:21 np0005478303 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Oct  9 05:00:21 np0005478303 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Oct  9 05:00:21 np0005478303 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Oct  9 05:00:21 np0005478303 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Oct  9 05:00:21 np0005478303 kernel: iommu: Default domain type: Translated
Oct  9 05:00:21 np0005478303 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Oct  9 05:00:21 np0005478303 kernel: SCSI subsystem initialized
Oct  9 05:00:21 np0005478303 kernel: ACPI: bus type USB registered
Oct  9 05:00:21 np0005478303 kernel: usbcore: registered new interface driver usbfs
Oct  9 05:00:21 np0005478303 kernel: usbcore: registered new interface driver hub
Oct  9 05:00:21 np0005478303 kernel: usbcore: registered new device driver usb
Oct  9 05:00:21 np0005478303 kernel: pps_core: LinuxPPS API ver. 1 registered
Oct  9 05:00:21 np0005478303 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti <giometti@linux.it>
Oct  9 05:00:21 np0005478303 kernel: PTP clock support registered
Oct  9 05:00:21 np0005478303 kernel: EDAC MC: Ver: 3.0.0
Oct  9 05:00:21 np0005478303 kernel: NetLabel: Initializing
Oct  9 05:00:21 np0005478303 kernel: NetLabel:  domain hash size = 128
Oct  9 05:00:21 np0005478303 kernel: NetLabel:  protocols = UNLABELED CIPSOv4 CALIPSO
Oct  9 05:00:21 np0005478303 kernel: NetLabel:  unlabeled traffic allowed by default
Oct  9 05:00:21 np0005478303 kernel: PCI: Using ACPI for IRQ routing
Oct  9 05:00:21 np0005478303 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Oct  9 05:00:21 np0005478303 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Oct  9 05:00:21 np0005478303 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Oct  9 05:00:21 np0005478303 kernel: vgaarb: loaded
Oct  9 05:00:21 np0005478303 kernel: clocksource: Switched to clocksource kvm-clock
Oct  9 05:00:21 np0005478303 kernel: VFS: Disk quotas dquot_6.6.0
Oct  9 05:00:21 np0005478303 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Oct  9 05:00:21 np0005478303 kernel: pnp: PnP ACPI init
Oct  9 05:00:21 np0005478303 kernel: system 00:04: [mem 0xb0000000-0xbfffffff window] has been reserved
Oct  9 05:00:21 np0005478303 kernel: pnp: PnP ACPI: found 5 devices
Oct  9 05:00:21 np0005478303 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Oct  9 05:00:21 np0005478303 kernel: NET: Registered PF_INET protocol family
Oct  9 05:00:21 np0005478303 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Oct  9 05:00:21 np0005478303 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear)
Oct  9 05:00:21 np0005478303 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Oct  9 05:00:21 np0005478303 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear)
Oct  9 05:00:21 np0005478303 kernel: TCP bind hash table entries: 65536 (order: 8, 1048576 bytes, linear)
Oct  9 05:00:21 np0005478303 kernel: TCP: Hash tables configured (established 65536 bind 65536)
Oct  9 05:00:21 np0005478303 kernel: MPTCP token hash table entries: 8192 (order: 5, 196608 bytes, linear)
Oct  9 05:00:21 np0005478303 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear)
Oct  9 05:00:21 np0005478303 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear)
Oct  9 05:00:21 np0005478303 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Oct  9 05:00:21 np0005478303 kernel: NET: Registered PF_XDP protocol family
Oct  9 05:00:21 np0005478303 kernel: pci 0000:00:02.1: bridge window [io  0x1000-0x0fff] to [bus 03] add_size 1000
Oct  9 05:00:21 np0005478303 kernel: pci 0000:00:02.2: bridge window [io  0x1000-0x0fff] to [bus 04] add_size 1000
Oct  9 05:00:21 np0005478303 kernel: pci 0000:00:02.3: bridge window [io  0x1000-0x0fff] to [bus 05] add_size 1000
Oct  9 05:00:21 np0005478303 kernel: pci 0000:00:02.4: bridge window [io  0x1000-0x0fff] to [bus 06] add_size 1000
Oct  9 05:00:21 np0005478303 kernel: pci 0000:00:02.5: bridge window [io  0x1000-0x0fff] to [bus 07] add_size 1000
Oct  9 05:00:21 np0005478303 kernel: pci 0000:00:02.6: bridge window [io  0x1000-0x0fff] to [bus 08] add_size 1000
Oct  9 05:00:21 np0005478303 kernel: pci 0000:00:02.7: bridge window [io  0x1000-0x0fff] to [bus 09] add_size 1000
Oct  9 05:00:21 np0005478303 kernel: pci 0000:00:03.0: bridge window [io  0x1000-0x0fff] to [bus 0a] add_size 1000
Oct  9 05:00:21 np0005478303 kernel: pci 0000:00:03.1: bridge window [io  0x1000-0x0fff] to [bus 0b] add_size 1000
Oct  9 05:00:21 np0005478303 kernel: pci 0000:00:03.2: bridge window [io  0x1000-0x0fff] to [bus 0c] add_size 1000
Oct  9 05:00:21 np0005478303 kernel: pci 0000:00:03.3: bridge window [io  0x1000-0x0fff] to [bus 0d] add_size 1000
Oct  9 05:00:21 np0005478303 kernel: pci 0000:00:03.4: bridge window [io  0x1000-0x0fff] to [bus 0e] add_size 1000
Oct  9 05:00:21 np0005478303 kernel: pci 0000:00:03.5: bridge window [io  0x1000-0x0fff] to [bus 0f] add_size 1000
Oct  9 05:00:21 np0005478303 kernel: pci 0000:00:03.6: bridge window [io  0x1000-0x0fff] to [bus 10] add_size 1000
Oct  9 05:00:21 np0005478303 kernel: pci 0000:00:03.7: bridge window [io  0x1000-0x0fff] to [bus 11] add_size 1000
Oct  9 05:00:21 np0005478303 kernel: pci 0000:00:04.0: bridge window [io  0x1000-0x0fff] to [bus 12] add_size 1000
Oct  9 05:00:21 np0005478303 kernel: pci 0000:00:02.1: bridge window [io  0x1000-0x1fff]: assigned
Oct  9 05:00:21 np0005478303 kernel: pci 0000:00:02.2: bridge window [io  0x2000-0x2fff]: assigned
Oct  9 05:00:21 np0005478303 kernel: pci 0000:00:02.3: bridge window [io  0x3000-0x3fff]: assigned
Oct  9 05:00:21 np0005478303 kernel: pci 0000:00:02.4: bridge window [io  0x4000-0x4fff]: assigned
Oct  9 05:00:21 np0005478303 kernel: pci 0000:00:02.5: bridge window [io  0x5000-0x5fff]: assigned
Oct  9 05:00:21 np0005478303 kernel: pci 0000:00:02.6: bridge window [io  0x6000-0x6fff]: assigned
Oct  9 05:00:21 np0005478303 kernel: pci 0000:00:02.7: bridge window [io  0x7000-0x7fff]: assigned
Oct  9 05:00:21 np0005478303 kernel: pci 0000:00:03.0: bridge window [io  0x8000-0x8fff]: assigned
Oct  9 05:00:21 np0005478303 kernel: pci 0000:00:03.1: bridge window [io  0x9000-0x9fff]: assigned
Oct  9 05:00:21 np0005478303 kernel: pci 0000:00:03.2: bridge window [io  0xa000-0xafff]: assigned
Oct  9 05:00:21 np0005478303 kernel: pci 0000:00:03.3: bridge window [io  0xb000-0xbfff]: assigned
Oct  9 05:00:21 np0005478303 kernel: pci 0000:00:03.4: bridge window [io  0xe000-0xefff]: assigned
Oct  9 05:00:21 np0005478303 kernel: pci 0000:00:03.5: bridge window [io  0xf000-0xffff]: assigned
Oct  9 05:00:21 np0005478303 kernel: pci 0000:00:03.6: bridge window [io  size 0x1000]: can't assign; no space
Oct  9 05:00:21 np0005478303 kernel: pci 0000:00:03.6: bridge window [io  size 0x1000]: failed to assign
Oct  9 05:00:21 np0005478303 kernel: pci 0000:00:03.7: bridge window [io  size 0x1000]: can't assign; no space
Oct  9 05:00:21 np0005478303 kernel: pci 0000:00:03.7: bridge window [io  size 0x1000]: failed to assign
Oct  9 05:00:21 np0005478303 kernel: pci 0000:00:04.0: bridge window [io  size 0x1000]: can't assign; no space
Oct  9 05:00:21 np0005478303 kernel: pci 0000:00:04.0: bridge window [io  size 0x1000]: failed to assign
Oct  9 05:00:21 np0005478303 kernel: pci 0000:00:04.0: bridge window [io  0x1000-0x1fff]: assigned
Oct  9 05:00:21 np0005478303 kernel: pci 0000:00:03.7: bridge window [io  0x2000-0x2fff]: assigned
Oct  9 05:00:21 np0005478303 kernel: pci 0000:00:03.6: bridge window [io  0x3000-0x3fff]: assigned
Oct  9 05:00:21 np0005478303 kernel: pci 0000:00:03.5: bridge window [io  0x4000-0x4fff]: assigned
Oct  9 05:00:21 np0005478303 kernel: pci 0000:00:03.4: bridge window [io  0x5000-0x5fff]: assigned
Oct  9 05:00:21 np0005478303 kernel: pci 0000:00:03.3: bridge window [io  0x6000-0x6fff]: assigned
Oct  9 05:00:21 np0005478303 kernel: pci 0000:00:03.2: bridge window [io  0x7000-0x7fff]: assigned
Oct  9 05:00:21 np0005478303 kernel: pci 0000:00:03.1: bridge window [io  0x8000-0x8fff]: assigned
Oct  9 05:00:21 np0005478303 kernel: pci 0000:00:03.0: bridge window [io  0x9000-0x9fff]: assigned
Oct  9 05:00:21 np0005478303 kernel: pci 0000:00:02.7: bridge window [io  0xa000-0xafff]: assigned
Oct  9 05:00:21 np0005478303 kernel: pci 0000:00:02.6: bridge window [io  0xb000-0xbfff]: assigned
Oct  9 05:00:21 np0005478303 kernel: pci 0000:00:02.5: bridge window [io  0xe000-0xefff]: assigned
Oct  9 05:00:21 np0005478303 kernel: pci 0000:00:02.4: bridge window [io  0xf000-0xffff]: assigned
Oct  9 05:00:21 np0005478303 kernel: pci 0000:00:02.3: bridge window [io  size 0x1000]: can't assign; no space
Oct  9 05:00:21 np0005478303 kernel: pci 0000:00:02.3: bridge window [io  size 0x1000]: failed to assign
Oct  9 05:00:21 np0005478303 kernel: pci 0000:00:02.2: bridge window [io  size 0x1000]: can't assign; no space
Oct  9 05:00:21 np0005478303 kernel: pci 0000:00:02.2: bridge window [io  size 0x1000]: failed to assign
Oct  9 05:00:21 np0005478303 kernel: pci 0000:00:02.1: bridge window [io  size 0x1000]: can't assign; no space
Oct  9 05:00:21 np0005478303 kernel: pci 0000:00:02.1: bridge window [io  size 0x1000]: failed to assign
Oct  9 05:00:21 np0005478303 kernel: pci 0000:01:00.0: PCI bridge to [bus 02]
Oct  9 05:00:21 np0005478303 kernel: pci 0000:01:00.0:   bridge window [io  0xc000-0xcfff]
Oct  9 05:00:21 np0005478303 kernel: pci 0000:01:00.0:   bridge window [mem 0xfc600000-0xfc7fffff]
Oct  9 05:00:21 np0005478303 kernel: pci 0000:01:00.0:   bridge window [mem 0xfc000000-0xfc1fffff 64bit pref]
Oct  9 05:00:21 np0005478303 kernel: pci 0000:00:02.0: PCI bridge to [bus 01-02]
Oct  9 05:00:21 np0005478303 kernel: pci 0000:00:02.0:   bridge window [io  0xc000-0xcfff]
Oct  9 05:00:21 np0005478303 kernel: pci 0000:00:02.0:   bridge window [mem 0xfc600000-0xfc9fffff]
Oct  9 05:00:21 np0005478303 kernel: pci 0000:00:02.0:   bridge window [mem 0xfc000000-0xfc1fffff 64bit pref]
Oct  9 05:00:21 np0005478303 kernel: pci 0000:00:02.1: PCI bridge to [bus 03]
Oct  9 05:00:21 np0005478303 kernel: pci 0000:00:02.1:   bridge window [mem 0xfe800000-0xfe9fffff]
Oct  9 05:00:21 np0005478303 kernel: pci 0000:00:02.1:   bridge window [mem 0xfbe00000-0xfbffffff 64bit pref]
Oct  9 05:00:21 np0005478303 kernel: pci 0000:00:02.2: PCI bridge to [bus 04]
Oct  9 05:00:21 np0005478303 kernel: pci 0000:00:02.2:   bridge window [mem 0xfe600000-0xfe7fffff]
Oct  9 05:00:21 np0005478303 kernel: pci 0000:00:02.2:   bridge window [mem 0xfbc00000-0xfbdfffff 64bit pref]
Oct  9 05:00:21 np0005478303 kernel: pci 0000:00:02.3: PCI bridge to [bus 05]
Oct  9 05:00:21 np0005478303 kernel: pci 0000:00:02.3:   bridge window [mem 0xfe400000-0xfe5fffff]
Oct  9 05:00:21 np0005478303 kernel: pci 0000:00:02.3:   bridge window [mem 0xfba00000-0xfbbfffff 64bit pref]
Oct  9 05:00:21 np0005478303 kernel: pci 0000:00:02.4: PCI bridge to [bus 06]
Oct  9 05:00:21 np0005478303 kernel: pci 0000:00:02.4:   bridge window [io  0xf000-0xffff]
Oct  9 05:00:21 np0005478303 kernel: pci 0000:00:02.4:   bridge window [mem 0xfe200000-0xfe3fffff]
Oct  9 05:00:21 np0005478303 kernel: pci 0000:00:02.4:   bridge window [mem 0xfb800000-0xfb9fffff 64bit pref]
Oct  9 05:00:21 np0005478303 kernel: pci 0000:00:02.5: PCI bridge to [bus 07]
Oct  9 05:00:21 np0005478303 kernel: pci 0000:00:02.5:   bridge window [io  0xe000-0xefff]
Oct  9 05:00:21 np0005478303 kernel: pci 0000:00:02.5:   bridge window [mem 0xfe000000-0xfe1fffff]
Oct  9 05:00:21 np0005478303 kernel: pci 0000:00:02.5:   bridge window [mem 0xfb600000-0xfb7fffff 64bit pref]
Oct  9 05:00:21 np0005478303 kernel: pci 0000:00:02.6: PCI bridge to [bus 08]
Oct  9 05:00:21 np0005478303 kernel: pci 0000:00:02.6:   bridge window [io  0xb000-0xbfff]
Oct  9 05:00:21 np0005478303 kernel: pci 0000:00:02.6:   bridge window [mem 0xfde00000-0xfdffffff]
Oct  9 05:00:21 np0005478303 kernel: pci 0000:00:02.6:   bridge window [mem 0xfb400000-0xfb5fffff 64bit pref]
Oct  9 05:00:21 np0005478303 kernel: pci 0000:00:02.7: PCI bridge to [bus 09]
Oct  9 05:00:21 np0005478303 kernel: pci 0000:00:02.7:   bridge window [io  0xa000-0xafff]
Oct  9 05:00:21 np0005478303 kernel: pci 0000:00:02.7:   bridge window [mem 0xfdc00000-0xfddfffff]
Oct  9 05:00:21 np0005478303 kernel: pci 0000:00:02.7:   bridge window [mem 0xfb200000-0xfb3fffff 64bit pref]
Oct  9 05:00:21 np0005478303 kernel: pci 0000:00:03.0: PCI bridge to [bus 0a]
Oct  9 05:00:21 np0005478303 kernel: pci 0000:00:03.0:   bridge window [io  0x9000-0x9fff]
Oct  9 05:00:21 np0005478303 kernel: pci 0000:00:03.0:   bridge window [mem 0xfda00000-0xfdbfffff]
Oct  9 05:00:21 np0005478303 kernel: pci 0000:00:03.0:   bridge window [mem 0xfb000000-0xfb1fffff 64bit pref]
Oct  9 05:00:21 np0005478303 kernel: pci 0000:00:03.1: PCI bridge to [bus 0b]
Oct  9 05:00:21 np0005478303 kernel: pci 0000:00:03.1:   bridge window [io  0x8000-0x8fff]
Oct  9 05:00:21 np0005478303 kernel: pci 0000:00:03.1:   bridge window [mem 0xfd800000-0xfd9fffff]
Oct  9 05:00:21 np0005478303 kernel: pci 0000:00:03.1:   bridge window [mem 0xfae00000-0xfaffffff 64bit pref]
Oct  9 05:00:21 np0005478303 kernel: pci 0000:00:03.2: PCI bridge to [bus 0c]
Oct  9 05:00:21 np0005478303 kernel: pci 0000:00:03.2:   bridge window [io  0x7000-0x7fff]
Oct  9 05:00:21 np0005478303 kernel: pci 0000:00:03.2:   bridge window [mem 0xfd600000-0xfd7fffff]
Oct  9 05:00:21 np0005478303 kernel: pci 0000:00:03.2:   bridge window [mem 0xfac00000-0xfadfffff 64bit pref]
Oct  9 05:00:21 np0005478303 kernel: pci 0000:00:03.3: PCI bridge to [bus 0d]
Oct  9 05:00:21 np0005478303 kernel: pci 0000:00:03.3:   bridge window [io  0x6000-0x6fff]
Oct  9 05:00:21 np0005478303 kernel: pci 0000:00:03.3:   bridge window [mem 0xfd400000-0xfd5fffff]
Oct  9 05:00:21 np0005478303 kernel: pci 0000:00:03.3:   bridge window [mem 0xfaa00000-0xfabfffff 64bit pref]
Oct  9 05:00:21 np0005478303 kernel: pci 0000:00:03.4: PCI bridge to [bus 0e]
Oct  9 05:00:21 np0005478303 kernel: pci 0000:00:03.4:   bridge window [io  0x5000-0x5fff]
Oct  9 05:00:21 np0005478303 kernel: pci 0000:00:03.4:   bridge window [mem 0xfd200000-0xfd3fffff]
Oct  9 05:00:21 np0005478303 kernel: pci 0000:00:03.4:   bridge window [mem 0xfa800000-0xfa9fffff 64bit pref]
Oct  9 05:00:21 np0005478303 kernel: pci 0000:00:03.5: PCI bridge to [bus 0f]
Oct  9 05:00:21 np0005478303 kernel: pci 0000:00:03.5:   bridge window [io  0x4000-0x4fff]
Oct  9 05:00:21 np0005478303 kernel: pci 0000:00:03.5:   bridge window [mem 0xfd000000-0xfd1fffff]
Oct  9 05:00:21 np0005478303 kernel: pci 0000:00:03.5:   bridge window [mem 0xfa600000-0xfa7fffff 64bit pref]
Oct  9 05:00:21 np0005478303 kernel: pci 0000:00:03.6: PCI bridge to [bus 10]
Oct  9 05:00:21 np0005478303 kernel: pci 0000:00:03.6:   bridge window [io  0x3000-0x3fff]
Oct  9 05:00:21 np0005478303 kernel: pci 0000:00:03.6:   bridge window [mem 0xfce00000-0xfcffffff]
Oct  9 05:00:21 np0005478303 kernel: pci 0000:00:03.6:   bridge window [mem 0xfa400000-0xfa5fffff 64bit pref]
Oct  9 05:00:21 np0005478303 kernel: pci 0000:00:03.7: PCI bridge to [bus 11]
Oct  9 05:00:21 np0005478303 kernel: pci 0000:00:03.7:   bridge window [io  0x2000-0x2fff]
Oct  9 05:00:21 np0005478303 kernel: pci 0000:00:03.7:   bridge window [mem 0xfcc00000-0xfcdfffff]
Oct  9 05:00:21 np0005478303 kernel: pci 0000:00:03.7:   bridge window [mem 0xfa200000-0xfa3fffff 64bit pref]
Oct  9 05:00:21 np0005478303 kernel: pci 0000:00:04.0: PCI bridge to [bus 12]
Oct  9 05:00:21 np0005478303 kernel: pci 0000:00:04.0:   bridge window [io  0x1000-0x1fff]
Oct  9 05:00:21 np0005478303 kernel: pci 0000:00:04.0:   bridge window [mem 0xfca00000-0xfcbfffff]
Oct  9 05:00:21 np0005478303 kernel: pci 0000:00:04.0:   bridge window [mem 0xfa000000-0xfa1fffff 64bit pref]
Oct  9 05:00:21 np0005478303 kernel: pci_bus 0000:00: resource 4 [io  0x0000-0x0cf7 window]
Oct  9 05:00:21 np0005478303 kernel: pci_bus 0000:00: resource 5 [io  0x0d00-0xffff window]
Oct  9 05:00:21 np0005478303 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Oct  9 05:00:21 np0005478303 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xafffffff window]
Oct  9 05:00:21 np0005478303 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Oct  9 05:00:21 np0005478303 kernel: pci_bus 0000:00: resource 9 [mem 0x280000000-0xa7fffffff window]
Oct  9 05:00:21 np0005478303 kernel: pci_bus 0000:01: resource 0 [io  0xc000-0xcfff]
Oct  9 05:00:21 np0005478303 kernel: pci_bus 0000:01: resource 1 [mem 0xfc600000-0xfc9fffff]
Oct  9 05:00:21 np0005478303 kernel: pci_bus 0000:01: resource 2 [mem 0xfc000000-0xfc1fffff 64bit pref]
Oct  9 05:00:21 np0005478303 kernel: pci_bus 0000:02: resource 0 [io  0xc000-0xcfff]
Oct  9 05:00:21 np0005478303 kernel: pci_bus 0000:02: resource 1 [mem 0xfc600000-0xfc7fffff]
Oct  9 05:00:21 np0005478303 kernel: pci_bus 0000:02: resource 2 [mem 0xfc000000-0xfc1fffff 64bit pref]
Oct  9 05:00:21 np0005478303 kernel: pci_bus 0000:03: resource 1 [mem 0xfe800000-0xfe9fffff]
Oct  9 05:00:21 np0005478303 kernel: pci_bus 0000:03: resource 2 [mem 0xfbe00000-0xfbffffff 64bit pref]
Oct  9 05:00:21 np0005478303 kernel: pci_bus 0000:04: resource 1 [mem 0xfe600000-0xfe7fffff]
Oct  9 05:00:21 np0005478303 kernel: pci_bus 0000:04: resource 2 [mem 0xfbc00000-0xfbdfffff 64bit pref]
Oct  9 05:00:21 np0005478303 kernel: pci_bus 0000:05: resource 1 [mem 0xfe400000-0xfe5fffff]
Oct  9 05:00:21 np0005478303 kernel: pci_bus 0000:05: resource 2 [mem 0xfba00000-0xfbbfffff 64bit pref]
Oct  9 05:00:21 np0005478303 kernel: pci_bus 0000:06: resource 0 [io  0xf000-0xffff]
Oct  9 05:00:21 np0005478303 kernel: pci_bus 0000:06: resource 1 [mem 0xfe200000-0xfe3fffff]
Oct  9 05:00:21 np0005478303 kernel: pci_bus 0000:06: resource 2 [mem 0xfb800000-0xfb9fffff 64bit pref]
Oct  9 05:00:21 np0005478303 kernel: pci_bus 0000:07: resource 0 [io  0xe000-0xefff]
Oct  9 05:00:21 np0005478303 kernel: pci_bus 0000:07: resource 1 [mem 0xfe000000-0xfe1fffff]
Oct  9 05:00:21 np0005478303 kernel: pci_bus 0000:07: resource 2 [mem 0xfb600000-0xfb7fffff 64bit pref]
Oct  9 05:00:21 np0005478303 kernel: pci_bus 0000:08: resource 0 [io  0xb000-0xbfff]
Oct  9 05:00:21 np0005478303 kernel: pci_bus 0000:08: resource 1 [mem 0xfde00000-0xfdffffff]
Oct  9 05:00:21 np0005478303 kernel: pci_bus 0000:08: resource 2 [mem 0xfb400000-0xfb5fffff 64bit pref]
Oct  9 05:00:21 np0005478303 kernel: pci_bus 0000:09: resource 0 [io  0xa000-0xafff]
Oct  9 05:00:21 np0005478303 kernel: pci_bus 0000:09: resource 1 [mem 0xfdc00000-0xfddfffff]
Oct  9 05:00:21 np0005478303 kernel: pci_bus 0000:09: resource 2 [mem 0xfb200000-0xfb3fffff 64bit pref]
Oct  9 05:00:21 np0005478303 kernel: pci_bus 0000:0a: resource 0 [io  0x9000-0x9fff]
Oct  9 05:00:21 np0005478303 kernel: pci_bus 0000:0a: resource 1 [mem 0xfda00000-0xfdbfffff]
Oct  9 05:00:21 np0005478303 kernel: pci_bus 0000:0a: resource 2 [mem 0xfb000000-0xfb1fffff 64bit pref]
Oct  9 05:00:21 np0005478303 kernel: pci_bus 0000:0b: resource 0 [io  0x8000-0x8fff]
Oct  9 05:00:21 np0005478303 kernel: pci_bus 0000:0b: resource 1 [mem 0xfd800000-0xfd9fffff]
Oct  9 05:00:21 np0005478303 kernel: pci_bus 0000:0b: resource 2 [mem 0xfae00000-0xfaffffff 64bit pref]
Oct  9 05:00:21 np0005478303 kernel: pci_bus 0000:0c: resource 0 [io  0x7000-0x7fff]
Oct  9 05:00:21 np0005478303 kernel: pci_bus 0000:0c: resource 1 [mem 0xfd600000-0xfd7fffff]
Oct  9 05:00:21 np0005478303 kernel: pci_bus 0000:0c: resource 2 [mem 0xfac00000-0xfadfffff 64bit pref]
Oct  9 05:00:21 np0005478303 kernel: pci_bus 0000:0d: resource 0 [io  0x6000-0x6fff]
Oct  9 05:00:21 np0005478303 kernel: pci_bus 0000:0d: resource 1 [mem 0xfd400000-0xfd5fffff]
Oct  9 05:00:21 np0005478303 kernel: pci_bus 0000:0d: resource 2 [mem 0xfaa00000-0xfabfffff 64bit pref]
Oct  9 05:00:21 np0005478303 kernel: pci_bus 0000:0e: resource 0 [io  0x5000-0x5fff]
Oct  9 05:00:21 np0005478303 kernel: pci_bus 0000:0e: resource 1 [mem 0xfd200000-0xfd3fffff]
Oct  9 05:00:21 np0005478303 kernel: pci_bus 0000:0e: resource 2 [mem 0xfa800000-0xfa9fffff 64bit pref]
Oct  9 05:00:21 np0005478303 kernel: pci_bus 0000:0f: resource 0 [io  0x4000-0x4fff]
Oct  9 05:00:21 np0005478303 kernel: pci_bus 0000:0f: resource 1 [mem 0xfd000000-0xfd1fffff]
Oct  9 05:00:21 np0005478303 kernel: pci_bus 0000:0f: resource 2 [mem 0xfa600000-0xfa7fffff 64bit pref]
Oct  9 05:00:21 np0005478303 kernel: pci_bus 0000:10: resource 0 [io  0x3000-0x3fff]
Oct  9 05:00:21 np0005478303 kernel: pci_bus 0000:10: resource 1 [mem 0xfce00000-0xfcffffff]
Oct  9 05:00:21 np0005478303 kernel: pci_bus 0000:10: resource 2 [mem 0xfa400000-0xfa5fffff 64bit pref]
Oct  9 05:00:21 np0005478303 kernel: pci_bus 0000:11: resource 0 [io  0x2000-0x2fff]
Oct  9 05:00:21 np0005478303 kernel: pci_bus 0000:11: resource 1 [mem 0xfcc00000-0xfcdfffff]
Oct  9 05:00:21 np0005478303 kernel: pci_bus 0000:11: resource 2 [mem 0xfa200000-0xfa3fffff 64bit pref]
Oct  9 05:00:21 np0005478303 kernel: pci_bus 0000:12: resource 0 [io  0x1000-0x1fff]
Oct  9 05:00:21 np0005478303 kernel: pci_bus 0000:12: resource 1 [mem 0xfca00000-0xfcbfffff]
Oct  9 05:00:21 np0005478303 kernel: pci_bus 0000:12: resource 2 [mem 0xfa000000-0xfa1fffff 64bit pref]
Oct  9 05:00:21 np0005478303 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Oct  9 05:00:21 np0005478303 kernel: PCI: CLS 0 bytes, default 64
Oct  9 05:00:21 np0005478303 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Oct  9 05:00:21 np0005478303 kernel: software IO TLB: mapped [mem 0x0000000067000000-0x000000006b000000] (64MB)
Oct  9 05:00:21 np0005478303 kernel: ACPI: bus type thunderbolt registered
Oct  9 05:00:21 np0005478303 kernel: Trying to unpack rootfs image as initramfs...
Oct  9 05:00:21 np0005478303 kernel: Initialise system trusted keyrings
Oct  9 05:00:21 np0005478303 kernel: Key type blacklist registered
Oct  9 05:00:21 np0005478303 kernel: workingset: timestamp_bits=36 max_order=21 bucket_order=0
Oct  9 05:00:21 np0005478303 kernel: zbud: loaded
Oct  9 05:00:21 np0005478303 kernel: integrity: Platform Keyring initialized
Oct  9 05:00:21 np0005478303 kernel: integrity: Machine keyring initialized
Oct  9 05:00:21 np0005478303 kernel: Freeing initrd memory: 86104K
Oct  9 05:00:21 np0005478303 kernel: NET: Registered PF_ALG protocol family
Oct  9 05:00:21 np0005478303 kernel: xor: automatically using best checksumming function   avx       
Oct  9 05:00:21 np0005478303 kernel: Key type asymmetric registered
Oct  9 05:00:21 np0005478303 kernel: Asymmetric key parser 'x509' registered
Oct  9 05:00:21 np0005478303 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 246)
Oct  9 05:00:21 np0005478303 kernel: io scheduler mq-deadline registered
Oct  9 05:00:21 np0005478303 kernel: io scheduler kyber registered
Oct  9 05:00:21 np0005478303 kernel: io scheduler bfq registered
Oct  9 05:00:21 np0005478303 kernel: atomic64_test: passed for x86-64 platform with CX8 and with SSE
Oct  9 05:00:21 np0005478303 kernel: pcieport 0000:00:02.0: PME: Signaling with IRQ 24
Oct  9 05:00:21 np0005478303 kernel: pcieport 0000:00:02.0: AER: enabled with IRQ 24
Oct  9 05:00:21 np0005478303 kernel: pcieport 0000:00:02.1: PME: Signaling with IRQ 25
Oct  9 05:00:21 np0005478303 kernel: pcieport 0000:00:02.1: AER: enabled with IRQ 25
Oct  9 05:00:21 np0005478303 kernel: pcieport 0000:00:02.2: PME: Signaling with IRQ 26
Oct  9 05:00:21 np0005478303 kernel: pcieport 0000:00:02.2: AER: enabled with IRQ 26
Oct  9 05:00:21 np0005478303 kernel: pcieport 0000:00:02.3: PME: Signaling with IRQ 27
Oct  9 05:00:21 np0005478303 kernel: pcieport 0000:00:02.3: AER: enabled with IRQ 27
Oct  9 05:00:21 np0005478303 kernel: pcieport 0000:00:02.4: PME: Signaling with IRQ 28
Oct  9 05:00:21 np0005478303 kernel: pcieport 0000:00:02.4: AER: enabled with IRQ 28
Oct  9 05:00:21 np0005478303 kernel: pcieport 0000:00:02.5: PME: Signaling with IRQ 29
Oct  9 05:00:21 np0005478303 kernel: pcieport 0000:00:02.5: AER: enabled with IRQ 29
Oct  9 05:00:21 np0005478303 kernel: pcieport 0000:00:02.6: PME: Signaling with IRQ 30
Oct  9 05:00:21 np0005478303 kernel: pcieport 0000:00:02.6: AER: enabled with IRQ 30
Oct  9 05:00:21 np0005478303 kernel: pcieport 0000:00:02.7: PME: Signaling with IRQ 31
Oct  9 05:00:21 np0005478303 kernel: pcieport 0000:00:02.7: AER: enabled with IRQ 31
Oct  9 05:00:21 np0005478303 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Oct  9 05:00:21 np0005478303 kernel: pcieport 0000:00:03.0: PME: Signaling with IRQ 32
Oct  9 05:00:21 np0005478303 kernel: pcieport 0000:00:03.0: AER: enabled with IRQ 32
Oct  9 05:00:21 np0005478303 kernel: pcieport 0000:00:03.1: PME: Signaling with IRQ 33
Oct  9 05:00:21 np0005478303 kernel: pcieport 0000:00:03.1: AER: enabled with IRQ 33
Oct  9 05:00:21 np0005478303 kernel: pcieport 0000:00:03.2: PME: Signaling with IRQ 34
Oct  9 05:00:21 np0005478303 kernel: pcieport 0000:00:03.2: AER: enabled with IRQ 34
Oct  9 05:00:21 np0005478303 kernel: pcieport 0000:00:03.3: PME: Signaling with IRQ 35
Oct  9 05:00:21 np0005478303 kernel: pcieport 0000:00:03.3: AER: enabled with IRQ 35
Oct  9 05:00:21 np0005478303 kernel: pcieport 0000:00:03.4: PME: Signaling with IRQ 36
Oct  9 05:00:21 np0005478303 kernel: pcieport 0000:00:03.4: AER: enabled with IRQ 36
Oct  9 05:00:21 np0005478303 kernel: pcieport 0000:00:03.5: PME: Signaling with IRQ 37
Oct  9 05:00:21 np0005478303 kernel: pcieport 0000:00:03.5: AER: enabled with IRQ 37
Oct  9 05:00:21 np0005478303 kernel: pcieport 0000:00:03.6: PME: Signaling with IRQ 38
Oct  9 05:00:21 np0005478303 kernel: pcieport 0000:00:03.6: AER: enabled with IRQ 38
Oct  9 05:00:21 np0005478303 kernel: pcieport 0000:00:03.7: PME: Signaling with IRQ 39
Oct  9 05:00:21 np0005478303 kernel: pcieport 0000:00:03.7: AER: enabled with IRQ 39
Oct  9 05:00:21 np0005478303 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Oct  9 05:00:21 np0005478303 kernel: pcieport 0000:00:04.0: PME: Signaling with IRQ 40
Oct  9 05:00:21 np0005478303 kernel: pcieport 0000:00:04.0: AER: enabled with IRQ 40
Oct  9 05:00:21 np0005478303 kernel: shpchp 0000:01:00.0: HPC vendor_id 1b36 device_id e ss_vid 0 ss_did 0
Oct  9 05:00:21 np0005478303 kernel: shpchp 0000:01:00.0: pci_hp_register failed with error -16
Oct  9 05:00:21 np0005478303 kernel: shpchp 0000:01:00.0: Slot initialization failed
Oct  9 05:00:21 np0005478303 kernel: shpchp: Standard Hot Plug PCI Controller Driver version: 0.4
Oct  9 05:00:21 np0005478303 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input0
Oct  9 05:00:21 np0005478303 kernel: ACPI: button: Power Button [PWRF]
Oct  9 05:00:21 np0005478303 kernel: ACPI: \_SB_.GSIF: Enabled at IRQ 21
Oct  9 05:00:21 np0005478303 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Oct  9 05:00:21 np0005478303 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Oct  9 05:00:21 np0005478303 kernel: Non-volatile memory driver v1.3
Oct  9 05:00:21 np0005478303 kernel: rdac: device handler registered
Oct  9 05:00:21 np0005478303 kernel: hp_sw: device handler registered
Oct  9 05:00:21 np0005478303 kernel: emc: device handler registered
Oct  9 05:00:21 np0005478303 kernel: alua: device handler registered
Oct  9 05:00:21 np0005478303 kernel: uhci_hcd 0000:02:01.0: UHCI Host Controller
Oct  9 05:00:21 np0005478303 kernel: uhci_hcd 0000:02:01.0: new USB bus registered, assigned bus number 1
Oct  9 05:00:21 np0005478303 kernel: uhci_hcd 0000:02:01.0: detected 2 ports
Oct  9 05:00:21 np0005478303 kernel: uhci_hcd 0000:02:01.0: irq 22, io port 0x0000c000
Oct  9 05:00:21 np0005478303 kernel: usb usb1: New USB device found, idVendor=1d6b, idProduct=0001, bcdDevice= 5.14
Oct  9 05:00:21 np0005478303 kernel: usb usb1: New USB device strings: Mfr=3, Product=2, SerialNumber=1
Oct  9 05:00:21 np0005478303 kernel: usb usb1: Product: UHCI Host Controller
Oct  9 05:00:21 np0005478303 kernel: usb usb1: Manufacturer: Linux 5.14.0-620.el9.x86_64 uhci_hcd
Oct  9 05:00:21 np0005478303 kernel: usb usb1: SerialNumber: 0000:02:01.0
Oct  9 05:00:21 np0005478303 kernel: hub 1-0:1.0: USB hub found
Oct  9 05:00:21 np0005478303 kernel: hub 1-0:1.0: 2 ports detected
Oct  9 05:00:21 np0005478303 kernel: usbcore: registered new interface driver usbserial_generic
Oct  9 05:00:21 np0005478303 kernel: usbserial: USB Serial support registered for generic
Oct  9 05:00:21 np0005478303 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Oct  9 05:00:21 np0005478303 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Oct  9 05:00:21 np0005478303 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Oct  9 05:00:21 np0005478303 kernel: mousedev: PS/2 mouse device common for all mice
Oct  9 05:00:21 np0005478303 kernel: rtc_cmos 00:03: RTC can wake from S4
Oct  9 05:00:21 np0005478303 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input1
Oct  9 05:00:21 np0005478303 kernel: input: VirtualPS/2 VMware VMMouse as /devices/platform/i8042/serio1/input/input4
Oct  9 05:00:21 np0005478303 kernel: input: VirtualPS/2 VMware VMMouse as /devices/platform/i8042/serio1/input/input3
Oct  9 05:00:21 np0005478303 kernel: rtc_cmos 00:03: registered as rtc0
Oct  9 05:00:21 np0005478303 kernel: rtc_cmos 00:03: setting system clock to 2025-10-09T09:00:21 UTC (1760000421)
Oct  9 05:00:21 np0005478303 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram
Oct  9 05:00:21 np0005478303 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Oct  9 05:00:21 np0005478303 kernel: hid: raw HID events driver (C) Jiri Kosina
Oct  9 05:00:21 np0005478303 kernel: usbcore: registered new interface driver usbhid
Oct  9 05:00:21 np0005478303 kernel: usbhid: USB HID core driver
Oct  9 05:00:21 np0005478303 kernel: drop_monitor: Initializing network drop monitor service
Oct  9 05:00:21 np0005478303 kernel: Initializing XFRM netlink socket
Oct  9 05:00:21 np0005478303 kernel: NET: Registered PF_INET6 protocol family
Oct  9 05:00:21 np0005478303 kernel: Segment Routing with IPv6
Oct  9 05:00:21 np0005478303 kernel: NET: Registered PF_PACKET protocol family
Oct  9 05:00:21 np0005478303 kernel: mpls_gso: MPLS GSO support
Oct  9 05:00:21 np0005478303 kernel: IPI shorthand broadcast: enabled
Oct  9 05:00:21 np0005478303 kernel: AVX2 version of gcm_enc/dec engaged.
Oct  9 05:00:21 np0005478303 kernel: AES CTR mode by8 optimization enabled
Oct  9 05:00:21 np0005478303 kernel: sched_clock: Marking stable (1161002085, 141733205)->(1407960158, -105224868)
Oct  9 05:00:21 np0005478303 kernel: registered taskstats version 1
Oct  9 05:00:21 np0005478303 kernel: Loading compiled-in X.509 certificates
Oct  9 05:00:21 np0005478303 kernel: Loaded X.509 cert 'The CentOS Project: CentOS Stream kernel signing key: 4ff821c4997fbb659836adb05f5bc400c914e148'
Oct  9 05:00:21 np0005478303 kernel: Loaded X.509 cert 'Red Hat Enterprise Linux Driver Update Program (key 3): bf57f3e87362bc7229d9f465321773dfd1f77a80'
Oct  9 05:00:21 np0005478303 kernel: Loaded X.509 cert 'Red Hat Enterprise Linux kpatch signing key: 4d38fd864ebe18c5f0b72e3852e2014c3a676fc8'
Oct  9 05:00:21 np0005478303 kernel: Loaded X.509 cert 'RH-IMA-CA: Red Hat IMA CA: fb31825dd0e073685b264e3038963673f753959a'
Oct  9 05:00:21 np0005478303 kernel: Loaded X.509 cert 'Nvidia GPU OOT signing 001: 55e1cef88193e60419f0b0ec379c49f77545acf0'
Oct  9 05:00:21 np0005478303 kernel: Demotion targets for Node 0: null
Oct  9 05:00:21 np0005478303 kernel: page_owner is disabled
Oct  9 05:00:21 np0005478303 kernel: Key type .fscrypt registered
Oct  9 05:00:21 np0005478303 kernel: Key type fscrypt-provisioning registered
Oct  9 05:00:21 np0005478303 kernel: Key type big_key registered
Oct  9 05:00:21 np0005478303 kernel: Key type encrypted registered
Oct  9 05:00:21 np0005478303 kernel: ima: No TPM chip found, activating TPM-bypass!
Oct  9 05:00:21 np0005478303 kernel: Loading compiled-in module X.509 certificates
Oct  9 05:00:21 np0005478303 kernel: Loaded X.509 cert 'The CentOS Project: CentOS Stream kernel signing key: 4ff821c4997fbb659836adb05f5bc400c914e148'
Oct  9 05:00:21 np0005478303 kernel: ima: Allocated hash algorithm: sha256
Oct  9 05:00:21 np0005478303 kernel: ima: No architecture policies found
Oct  9 05:00:21 np0005478303 kernel: evm: Initialising EVM extended attributes:
Oct  9 05:00:21 np0005478303 kernel: evm: security.selinux
Oct  9 05:00:21 np0005478303 kernel: evm: security.SMACK64 (disabled)
Oct  9 05:00:21 np0005478303 kernel: evm: security.SMACK64EXEC (disabled)
Oct  9 05:00:21 np0005478303 kernel: evm: security.SMACK64TRANSMUTE (disabled)
Oct  9 05:00:21 np0005478303 kernel: evm: security.SMACK64MMAP (disabled)
Oct  9 05:00:21 np0005478303 kernel: evm: security.apparmor (disabled)
Oct  9 05:00:21 np0005478303 kernel: evm: security.ima
Oct  9 05:00:21 np0005478303 kernel: evm: security.capability
Oct  9 05:00:21 np0005478303 kernel: evm: HMAC attrs: 0x1
Oct  9 05:00:21 np0005478303 kernel: Running certificate verification RSA selftest
Oct  9 05:00:21 np0005478303 kernel: Loaded X.509 cert 'Certificate verification self-testing key: f58703bb33ce1b73ee02eccdee5b8817518fe3db'
Oct  9 05:00:21 np0005478303 kernel: Running certificate verification ECDSA selftest
Oct  9 05:00:21 np0005478303 kernel: Loaded X.509 cert 'Certificate verification ECDSA self-testing key: 2900bcea1deb7bc8479a84a23d758efdfdd2b2d3'
Oct  9 05:00:21 np0005478303 kernel: clk: Disabling unused clocks
Oct  9 05:00:21 np0005478303 kernel: Freeing unused decrypted memory: 2028K
Oct  9 05:00:21 np0005478303 kernel: Freeing unused kernel image (initmem) memory: 4068K
Oct  9 05:00:21 np0005478303 kernel: Write protecting the kernel read-only data: 30720k
Oct  9 05:00:21 np0005478303 kernel: Freeing unused kernel image (rodata/data gap) memory: 340K
Oct  9 05:00:21 np0005478303 kernel: usb 1-1: new full-speed USB device number 2 using uhci_hcd
Oct  9 05:00:21 np0005478303 kernel: x86/mm: Checked W+X mappings: passed, no W+X pages found.
Oct  9 05:00:21 np0005478303 kernel: Run /init as init process
Oct  9 05:00:21 np0005478303 systemd: systemd 252-55.el9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN -IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK +XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Oct  9 05:00:21 np0005478303 systemd: Detected virtualization kvm.
Oct  9 05:00:21 np0005478303 systemd: Detected architecture x86-64.
Oct  9 05:00:21 np0005478303 systemd: Running in initrd.
Oct  9 05:00:21 np0005478303 systemd: No hostname configured, using default hostname.
Oct  9 05:00:21 np0005478303 systemd: Hostname set to <localhost>.
Oct  9 05:00:21 np0005478303 systemd: Initializing machine ID from VM UUID.
Oct  9 05:00:21 np0005478303 systemd: Queued start job for default target Initrd Default Target.
Oct  9 05:00:21 np0005478303 systemd: Started Dispatch Password Requests to Console Directory Watch.
Oct  9 05:00:21 np0005478303 systemd: Reached target Local Encrypted Volumes.
Oct  9 05:00:21 np0005478303 systemd: Reached target Initrd /usr File System.
Oct  9 05:00:21 np0005478303 systemd: Reached target Local File Systems.
Oct  9 05:00:21 np0005478303 systemd: Reached target Path Units.
Oct  9 05:00:21 np0005478303 systemd: Reached target Slice Units.
Oct  9 05:00:21 np0005478303 systemd: Reached target Swaps.
Oct  9 05:00:21 np0005478303 systemd: Reached target Timer Units.
Oct  9 05:00:21 np0005478303 kernel: usb 1-1: New USB device found, idVendor=0627, idProduct=0001, bcdDevice= 0.00
Oct  9 05:00:21 np0005478303 kernel: usb 1-1: New USB device strings: Mfr=1, Product=3, SerialNumber=10
Oct  9 05:00:21 np0005478303 kernel: usb 1-1: Product: QEMU USB Tablet
Oct  9 05:00:21 np0005478303 kernel: usb 1-1: Manufacturer: QEMU
Oct  9 05:00:21 np0005478303 kernel: usb 1-1: SerialNumber: 28754-0000:00:02.0:00.0:01.0-1
Oct  9 05:00:21 np0005478303 systemd: Listening on D-Bus System Message Bus Socket.
Oct  9 05:00:21 np0005478303 systemd: Listening on Journal Socket (/dev/log).
Oct  9 05:00:21 np0005478303 systemd: Listening on Journal Socket.
Oct  9 05:00:21 np0005478303 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:02.0/0000:01:00.0/0000:02:01.0/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input5
Oct  9 05:00:21 np0005478303 systemd: Listening on udev Control Socket.
Oct  9 05:00:21 np0005478303 systemd: Listening on udev Kernel Socket.
Oct  9 05:00:21 np0005478303 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:02:01.0-1/input0
Oct  9 05:00:21 np0005478303 systemd: Reached target Socket Units.
Oct  9 05:00:21 np0005478303 systemd: Starting Create List of Static Device Nodes...
Oct  9 05:00:21 np0005478303 systemd: Starting Journal Service...
Oct  9 05:00:21 np0005478303 systemd: Load Kernel Modules was skipped because no trigger condition checks were met.
Oct  9 05:00:21 np0005478303 systemd: Starting Apply Kernel Variables...
Oct  9 05:00:21 np0005478303 systemd: Starting Create System Users...
Oct  9 05:00:21 np0005478303 systemd: Starting Setup Virtual Console...
Oct  9 05:00:21 np0005478303 systemd: Finished Create List of Static Device Nodes.
Oct  9 05:00:21 np0005478303 systemd: Finished Apply Kernel Variables.
Oct  9 05:00:21 np0005478303 systemd: Finished Create System Users.
Oct  9 05:00:21 np0005478303 systemd-journald[282]: Journal started
Oct  9 05:00:21 np0005478303 systemd-journald[282]: Runtime Journal (/run/log/journal/99ca1aa4a8fe49f8801977dd20980206) is 8.0M, max 153.6M, 145.6M free.
Oct  9 05:00:21 np0005478303 systemd-sysusers[285]: Creating group 'users' with GID 100.
Oct  9 05:00:21 np0005478303 systemd-sysusers[285]: Creating group 'dbus' with GID 81.
Oct  9 05:00:21 np0005478303 systemd-sysusers[285]: Creating user 'dbus' (System Message Bus) with UID 81 and GID 81.
Oct  9 05:00:21 np0005478303 systemd: Started Journal Service.
Oct  9 05:00:22 np0005478303 systemd[1]: Starting Create Static Device Nodes in /dev...
Oct  9 05:00:22 np0005478303 systemd[1]: Starting Create Volatile Files and Directories...
Oct  9 05:00:22 np0005478303 systemd[1]: Finished Create Static Device Nodes in /dev.
Oct  9 05:00:22 np0005478303 systemd[1]: Finished Create Volatile Files and Directories.
Oct  9 05:00:22 np0005478303 systemd[1]: Finished Setup Virtual Console.
Oct  9 05:00:22 np0005478303 systemd[1]: dracut ask for additional cmdline parameters was skipped because no trigger condition checks were met.
Oct  9 05:00:22 np0005478303 systemd[1]: Starting dracut cmdline hook...
Oct  9 05:00:22 np0005478303 dracut-cmdline[299]: dracut-9 dracut-057-102.git20250818.el9
Oct  9 05:00:22 np0005478303 dracut-cmdline[299]: Using kernel command line parameters:    BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-620.el9.x86_64 root=UUID=1631a6ad-43b8-436d-ae76-16fa14b94458 ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Oct  9 05:00:22 np0005478303 systemd[1]: Finished dracut cmdline hook.
Oct  9 05:00:22 np0005478303 systemd[1]: Starting dracut pre-udev hook...
Oct  9 05:00:22 np0005478303 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Oct  9 05:00:22 np0005478303 kernel: device-mapper: uevent: version 1.0.3
Oct  9 05:00:22 np0005478303 kernel: device-mapper: ioctl: 4.50.0-ioctl (2025-04-28) initialised: dm-devel@lists.linux.dev
Oct  9 05:00:22 np0005478303 kernel: RPC: Registered named UNIX socket transport module.
Oct  9 05:00:22 np0005478303 kernel: RPC: Registered udp transport module.
Oct  9 05:00:22 np0005478303 kernel: RPC: Registered tcp transport module.
Oct  9 05:00:22 np0005478303 kernel: RPC: Registered tcp-with-tls transport module.
Oct  9 05:00:22 np0005478303 kernel: RPC: Registered tcp NFSv4.1 backchannel transport module.
Oct  9 05:00:22 np0005478303 rpc.statd[414]: Version 2.5.4 starting
Oct  9 05:00:22 np0005478303 rpc.statd[414]: Initializing NSM state
Oct  9 05:00:22 np0005478303 rpc.idmapd[419]: Setting log level to 0
Oct  9 05:00:22 np0005478303 systemd[1]: Finished dracut pre-udev hook.
Oct  9 05:00:22 np0005478303 systemd[1]: Starting Rule-based Manager for Device Events and Files...
Oct  9 05:00:22 np0005478303 systemd-udevd[432]: Using default interface naming scheme 'rhel-9.0'.
Oct  9 05:00:22 np0005478303 systemd[1]: Started Rule-based Manager for Device Events and Files.
Oct  9 05:00:22 np0005478303 systemd[1]: Starting dracut pre-trigger hook...
Oct  9 05:00:22 np0005478303 systemd[1]: Finished dracut pre-trigger hook.
Oct  9 05:00:22 np0005478303 systemd[1]: Starting Coldplug All udev Devices...
Oct  9 05:00:22 np0005478303 systemd[1]: Created slice Slice /system/modprobe.
Oct  9 05:00:22 np0005478303 systemd[1]: Starting Load Kernel Module configfs...
Oct  9 05:00:22 np0005478303 systemd[1]: Finished Coldplug All udev Devices.
Oct  9 05:00:22 np0005478303 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Oct  9 05:00:22 np0005478303 systemd[1]: Finished Load Kernel Module configfs.
Oct  9 05:00:22 np0005478303 systemd[1]: nm-initrd.service was skipped because of an unmet condition check (ConditionPathExists=/run/NetworkManager/initrd/neednet).
Oct  9 05:00:22 np0005478303 systemd[1]: Reached target Network.
Oct  9 05:00:22 np0005478303 systemd[1]: nm-wait-online-initrd.service was skipped because of an unmet condition check (ConditionPathExists=/run/NetworkManager/initrd/neednet).
Oct  9 05:00:22 np0005478303 systemd[1]: Starting dracut initqueue hook...
Oct  9 05:00:22 np0005478303 kernel: virtio_blk virtio2: 4/0/0 default/read/poll queues
Oct  9 05:00:22 np0005478303 kernel: virtio_blk virtio2: [vda] 167772160 512-byte logical blocks (85.9 GB/80.0 GiB)
Oct  9 05:00:22 np0005478303 kernel: vda: vda1
Oct  9 05:00:22 np0005478303 systemd-udevd[449]: Network interface NamePolicy= disabled on kernel command line.
Oct  9 05:00:22 np0005478303 systemd[1]: Found device /dev/disk/by-uuid/1631a6ad-43b8-436d-ae76-16fa14b94458.
Oct  9 05:00:22 np0005478303 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Oct  9 05:00:22 np0005478303 kernel: ahci 0000:00:1f.2: AHCI vers 0001.0000, 32 command slots, 1.5 Gbps, SATA mode
Oct  9 05:00:22 np0005478303 kernel: ahci 0000:00:1f.2: 6/6 ports implemented (port mask 0x3f)
Oct  9 05:00:22 np0005478303 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only 
Oct  9 05:00:22 np0005478303 kernel: scsi host0: ahci
Oct  9 05:00:22 np0005478303 kernel: scsi host1: ahci
Oct  9 05:00:22 np0005478303 kernel: scsi host2: ahci
Oct  9 05:00:22 np0005478303 kernel: scsi host3: ahci
Oct  9 05:00:22 np0005478303 kernel: scsi host4: ahci
Oct  9 05:00:22 np0005478303 kernel: scsi host5: ahci
Oct  9 05:00:22 np0005478303 kernel: ata1: SATA max UDMA/133 abar m4096@0xfea22000 port 0xfea22100 irq 49 lpm-pol 0
Oct  9 05:00:22 np0005478303 kernel: ata2: SATA max UDMA/133 abar m4096@0xfea22000 port 0xfea22180 irq 49 lpm-pol 0
Oct  9 05:00:22 np0005478303 kernel: ata3: SATA max UDMA/133 abar m4096@0xfea22000 port 0xfea22200 irq 49 lpm-pol 0
Oct  9 05:00:22 np0005478303 kernel: ata4: SATA max UDMA/133 abar m4096@0xfea22000 port 0xfea22280 irq 49 lpm-pol 0
Oct  9 05:00:22 np0005478303 kernel: ata5: SATA max UDMA/133 abar m4096@0xfea22000 port 0xfea22300 irq 49 lpm-pol 0
Oct  9 05:00:22 np0005478303 kernel: ata6: SATA max UDMA/133 abar m4096@0xfea22000 port 0xfea22380 irq 49 lpm-pol 0
Oct  9 05:00:22 np0005478303 systemd[1]: Reached target Initrd Root Device.
Oct  9 05:00:22 np0005478303 systemd[1]: Mounting Kernel Configuration File System...
Oct  9 05:00:22 np0005478303 systemd[1]: Mounted Kernel Configuration File System.
Oct  9 05:00:22 np0005478303 systemd[1]: Reached target System Initialization.
Oct  9 05:00:22 np0005478303 systemd[1]: Reached target Basic System.
Oct  9 05:00:22 np0005478303 kernel: ata1: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
Oct  9 05:00:22 np0005478303 kernel: ata1.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Oct  9 05:00:22 np0005478303 kernel: ata1.00: applying bridge limits
Oct  9 05:00:22 np0005478303 kernel: ata1.00: configured for UDMA/100
Oct  9 05:00:22 np0005478303 kernel: scsi 0:0:0:0: CD-ROM            QEMU     QEMU DVD-ROM     2.5+ PQ: 0 ANSI: 5
Oct  9 05:00:22 np0005478303 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Oct  9 05:00:22 np0005478303 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Oct  9 05:00:22 np0005478303 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Oct  9 05:00:22 np0005478303 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Oct  9 05:00:22 np0005478303 kernel: ata3: SATA link down (SStatus 0 SControl 300)
Oct  9 05:00:22 np0005478303 kernel: scsi 0:0:0:0: Attached scsi generic sg0 type 5
Oct  9 05:00:22 np0005478303 kernel: sr 0:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Oct  9 05:00:22 np0005478303 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Oct  9 05:00:23 np0005478303 systemd[1]: Finished dracut initqueue hook.
Oct  9 05:00:23 np0005478303 systemd[1]: Reached target Preparation for Remote File Systems.
Oct  9 05:00:23 np0005478303 systemd[1]: Reached target Remote Encrypted Volumes.
Oct  9 05:00:23 np0005478303 systemd[1]: Reached target Remote File Systems.
Oct  9 05:00:23 np0005478303 systemd[1]: Starting dracut pre-mount hook...
Oct  9 05:00:23 np0005478303 systemd[1]: Finished dracut pre-mount hook.
Oct  9 05:00:23 np0005478303 systemd[1]: Starting File System Check on /dev/disk/by-uuid/1631a6ad-43b8-436d-ae76-16fa14b94458...
Oct  9 05:00:23 np0005478303 systemd-fsck[526]: /usr/sbin/fsck.xfs: XFS file system.
Oct  9 05:00:23 np0005478303 systemd[1]: Finished File System Check on /dev/disk/by-uuid/1631a6ad-43b8-436d-ae76-16fa14b94458.
Oct  9 05:00:23 np0005478303 systemd[1]: Mounting /sysroot...
Oct  9 05:00:23 np0005478303 kernel: SGI XFS with ACLs, security attributes, scrub, quota, no debug enabled
Oct  9 05:00:23 np0005478303 kernel: XFS (vda1): Mounting V5 Filesystem 1631a6ad-43b8-436d-ae76-16fa14b94458
Oct  9 05:00:23 np0005478303 kernel: XFS (vda1): Ending clean mount
Oct  9 05:00:23 np0005478303 systemd[1]: Mounted /sysroot.
Oct  9 05:00:23 np0005478303 systemd[1]: Reached target Initrd Root File System.
Oct  9 05:00:23 np0005478303 systemd[1]: Starting Mountpoints Configured in the Real Root...
Oct  9 05:00:23 np0005478303 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Oct  9 05:00:23 np0005478303 systemd[1]: Finished Mountpoints Configured in the Real Root.
Oct  9 05:00:23 np0005478303 systemd[1]: Reached target Initrd File Systems.
Oct  9 05:00:23 np0005478303 systemd[1]: Reached target Initrd Default Target.
Oct  9 05:00:23 np0005478303 systemd[1]: Starting dracut mount hook...
Oct  9 05:00:23 np0005478303 systemd[1]: Finished dracut mount hook.
Oct  9 05:00:23 np0005478303 systemd[1]: Starting dracut pre-pivot and cleanup hook...
Oct  9 05:00:23 np0005478303 rpc.idmapd[419]: exiting on signal 15
Oct  9 05:00:23 np0005478303 systemd[1]: var-lib-nfs-rpc_pipefs.mount: Deactivated successfully.
Oct  9 05:00:23 np0005478303 systemd[1]: Finished dracut pre-pivot and cleanup hook.
Oct  9 05:00:23 np0005478303 systemd[1]: Starting Cleaning Up and Shutting Down Daemons...
Oct  9 05:00:23 np0005478303 systemd[1]: Stopped target Network.
Oct  9 05:00:23 np0005478303 systemd[1]: Stopped target Remote Encrypted Volumes.
Oct  9 05:00:23 np0005478303 systemd[1]: Stopped target Timer Units.
Oct  9 05:00:23 np0005478303 systemd[1]: dbus.socket: Deactivated successfully.
Oct  9 05:00:23 np0005478303 systemd[1]: Closed D-Bus System Message Bus Socket.
Oct  9 05:00:23 np0005478303 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Oct  9 05:00:23 np0005478303 systemd[1]: Stopped dracut pre-pivot and cleanup hook.
Oct  9 05:00:23 np0005478303 systemd[1]: Stopped target Initrd Default Target.
Oct  9 05:00:23 np0005478303 systemd[1]: Stopped target Basic System.
Oct  9 05:00:23 np0005478303 systemd[1]: Stopped target Initrd Root Device.
Oct  9 05:00:23 np0005478303 systemd[1]: Stopped target Initrd /usr File System.
Oct  9 05:00:23 np0005478303 systemd[1]: Stopped target Path Units.
Oct  9 05:00:23 np0005478303 systemd[1]: Stopped target Remote File Systems.
Oct  9 05:00:23 np0005478303 systemd[1]: Stopped target Preparation for Remote File Systems.
Oct  9 05:00:23 np0005478303 systemd[1]: Stopped target Slice Units.
Oct  9 05:00:23 np0005478303 systemd[1]: Stopped target Socket Units.
Oct  9 05:00:23 np0005478303 systemd[1]: Stopped target System Initialization.
Oct  9 05:00:23 np0005478303 systemd[1]: Stopped target Local File Systems.
Oct  9 05:00:23 np0005478303 systemd[1]: Stopped target Swaps.
Oct  9 05:00:23 np0005478303 systemd[1]: dracut-mount.service: Deactivated successfully.
Oct  9 05:00:23 np0005478303 systemd[1]: Stopped dracut mount hook.
Oct  9 05:00:23 np0005478303 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Oct  9 05:00:23 np0005478303 systemd[1]: Stopped dracut pre-mount hook.
Oct  9 05:00:23 np0005478303 systemd[1]: Stopped target Local Encrypted Volumes.
Oct  9 05:00:23 np0005478303 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Oct  9 05:00:23 np0005478303 systemd[1]: Stopped Dispatch Password Requests to Console Directory Watch.
Oct  9 05:00:23 np0005478303 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Oct  9 05:00:23 np0005478303 systemd[1]: Stopped dracut initqueue hook.
Oct  9 05:00:23 np0005478303 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Oct  9 05:00:23 np0005478303 systemd[1]: Stopped Apply Kernel Variables.
Oct  9 05:00:23 np0005478303 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Oct  9 05:00:23 np0005478303 systemd[1]: Stopped Create Volatile Files and Directories.
Oct  9 05:00:23 np0005478303 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Oct  9 05:00:23 np0005478303 systemd[1]: Stopped Coldplug All udev Devices.
Oct  9 05:00:23 np0005478303 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Oct  9 05:00:23 np0005478303 systemd[1]: Stopped dracut pre-trigger hook.
Oct  9 05:00:23 np0005478303 systemd[1]: Stopping Rule-based Manager for Device Events and Files...
Oct  9 05:00:23 np0005478303 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Oct  9 05:00:23 np0005478303 systemd[1]: Stopped Setup Virtual Console.
Oct  9 05:00:23 np0005478303 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Oct  9 05:00:23 np0005478303 systemd[1]: Finished Cleaning Up and Shutting Down Daemons.
Oct  9 05:00:23 np0005478303 systemd[1]: systemd-udevd.service: Deactivated successfully.
Oct  9 05:00:23 np0005478303 systemd[1]: Stopped Rule-based Manager for Device Events and Files.
Oct  9 05:00:23 np0005478303 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Oct  9 05:00:23 np0005478303 systemd[1]: Closed udev Control Socket.
Oct  9 05:00:23 np0005478303 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Oct  9 05:00:23 np0005478303 systemd[1]: Closed udev Kernel Socket.
Oct  9 05:00:23 np0005478303 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Oct  9 05:00:23 np0005478303 systemd[1]: Stopped dracut pre-udev hook.
Oct  9 05:00:23 np0005478303 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Oct  9 05:00:23 np0005478303 systemd[1]: Stopped dracut cmdline hook.
Oct  9 05:00:23 np0005478303 systemd[1]: Starting Cleanup udev Database...
Oct  9 05:00:23 np0005478303 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Oct  9 05:00:23 np0005478303 systemd[1]: Stopped Create Static Device Nodes in /dev.
Oct  9 05:00:23 np0005478303 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Oct  9 05:00:23 np0005478303 systemd[1]: Stopped Create List of Static Device Nodes.
Oct  9 05:00:23 np0005478303 systemd[1]: systemd-sysusers.service: Deactivated successfully.
Oct  9 05:00:23 np0005478303 systemd[1]: Stopped Create System Users.
Oct  9 05:00:23 np0005478303 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Oct  9 05:00:23 np0005478303 systemd[1]: Finished Cleanup udev Database.
Oct  9 05:00:23 np0005478303 systemd[1]: Reached target Switch Root.
Oct  9 05:00:23 np0005478303 systemd[1]: Starting Switch Root...
Oct  9 05:00:23 np0005478303 systemd[1]: Switching root.
Oct  9 05:00:23 np0005478303 systemd-journald[282]: Journal stopped
Oct  9 05:00:24 np0005478303 systemd-journald: Received SIGTERM from PID 1 (systemd).
Oct  9 05:00:24 np0005478303 kernel: audit: type=1404 audit(1760000423.782:2): enforcing=1 old_enforcing=0 auid=4294967295 ses=4294967295 enabled=1 old-enabled=1 lsm=selinux res=1
Oct  9 05:00:24 np0005478303 kernel: SELinux:  policy capability network_peer_controls=1
Oct  9 05:00:24 np0005478303 kernel: SELinux:  policy capability open_perms=1
Oct  9 05:00:24 np0005478303 kernel: SELinux:  policy capability extended_socket_class=1
Oct  9 05:00:24 np0005478303 kernel: SELinux:  policy capability always_check_network=0
Oct  9 05:00:24 np0005478303 kernel: SELinux:  policy capability cgroup_seclabel=1
Oct  9 05:00:24 np0005478303 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Oct  9 05:00:24 np0005478303 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Oct  9 05:00:24 np0005478303 kernel: audit: type=1403 audit(1760000423.891:3): auid=4294967295 ses=4294967295 lsm=selinux res=1
Oct  9 05:00:24 np0005478303 systemd: Successfully loaded SELinux policy in 110.782ms.
Oct  9 05:00:24 np0005478303 systemd: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 23.685ms.
Oct  9 05:00:24 np0005478303 systemd: systemd 252-55.el9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN -IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK +XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Oct  9 05:00:24 np0005478303 systemd: Detected virtualization kvm.
Oct  9 05:00:24 np0005478303 systemd: Detected architecture x86-64.
Oct  9 05:00:24 np0005478303 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  9 05:00:24 np0005478303 systemd: initrd-switch-root.service: Deactivated successfully.
Oct  9 05:00:24 np0005478303 systemd: Stopped Switch Root.
Oct  9 05:00:24 np0005478303 systemd: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Oct  9 05:00:24 np0005478303 systemd: Created slice Slice /system/getty.
Oct  9 05:00:24 np0005478303 systemd: Created slice Slice /system/serial-getty.
Oct  9 05:00:24 np0005478303 systemd: Created slice Slice /system/sshd-keygen.
Oct  9 05:00:24 np0005478303 systemd: Created slice User and Session Slice.
Oct  9 05:00:24 np0005478303 systemd: Started Dispatch Password Requests to Console Directory Watch.
Oct  9 05:00:24 np0005478303 systemd: Started Forward Password Requests to Wall Directory Watch.
Oct  9 05:00:24 np0005478303 systemd: Set up automount Arbitrary Executable File Formats File System Automount Point.
Oct  9 05:00:24 np0005478303 systemd: Reached target Local Encrypted Volumes.
Oct  9 05:00:24 np0005478303 systemd: Stopped target Switch Root.
Oct  9 05:00:24 np0005478303 systemd: Stopped target Initrd File Systems.
Oct  9 05:00:24 np0005478303 systemd: Stopped target Initrd Root File System.
Oct  9 05:00:24 np0005478303 systemd: Reached target Local Integrity Protected Volumes.
Oct  9 05:00:24 np0005478303 systemd: Reached target Path Units.
Oct  9 05:00:24 np0005478303 systemd: Reached target rpc_pipefs.target.
Oct  9 05:00:24 np0005478303 systemd: Reached target Slice Units.
Oct  9 05:00:24 np0005478303 systemd: Reached target Swaps.
Oct  9 05:00:24 np0005478303 systemd: Reached target Local Verity Protected Volumes.
Oct  9 05:00:24 np0005478303 systemd: Listening on RPCbind Server Activation Socket.
Oct  9 05:00:24 np0005478303 systemd: Reached target RPC Port Mapper.
Oct  9 05:00:24 np0005478303 systemd: Listening on Process Core Dump Socket.
Oct  9 05:00:24 np0005478303 systemd: Listening on initctl Compatibility Named Pipe.
Oct  9 05:00:24 np0005478303 systemd: Listening on udev Control Socket.
Oct  9 05:00:24 np0005478303 systemd: Listening on udev Kernel Socket.
Oct  9 05:00:24 np0005478303 systemd: Mounting Huge Pages File System...
Oct  9 05:00:24 np0005478303 systemd: Mounting POSIX Message Queue File System...
Oct  9 05:00:24 np0005478303 systemd: Mounting Kernel Debug File System...
Oct  9 05:00:24 np0005478303 systemd: Mounting Kernel Trace File System...
Oct  9 05:00:24 np0005478303 systemd: Kernel Module supporting RPCSEC_GSS was skipped because of an unmet condition check (ConditionPathExists=/etc/krb5.keytab).
Oct  9 05:00:24 np0005478303 systemd: Starting Create List of Static Device Nodes...
Oct  9 05:00:24 np0005478303 systemd: Starting Load Kernel Module configfs...
Oct  9 05:00:24 np0005478303 systemd: Starting Load Kernel Module drm...
Oct  9 05:00:24 np0005478303 systemd: Starting Load Kernel Module efi_pstore...
Oct  9 05:00:24 np0005478303 systemd: Starting Load Kernel Module fuse...
Oct  9 05:00:24 np0005478303 systemd: Starting Read and set NIS domainname from /etc/sysconfig/network...
Oct  9 05:00:24 np0005478303 systemd: systemd-fsck-root.service: Deactivated successfully.
Oct  9 05:00:24 np0005478303 systemd: Stopped File System Check on Root Device.
Oct  9 05:00:24 np0005478303 systemd: Stopped Journal Service.
Oct  9 05:00:24 np0005478303 kernel: fuse: init (API version 7.37)
Oct  9 05:00:24 np0005478303 systemd: Starting Journal Service...
Oct  9 05:00:24 np0005478303 systemd: Load Kernel Modules was skipped because no trigger condition checks were met.
Oct  9 05:00:24 np0005478303 systemd: Starting Generate network units from Kernel command line...
Oct  9 05:00:24 np0005478303 systemd: TPM2 PCR Machine ID Measurement was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Oct  9 05:00:24 np0005478303 systemd: Starting Remount Root and Kernel File Systems...
Oct  9 05:00:24 np0005478303 systemd: Repartition Root Disk was skipped because no trigger condition checks were met.
Oct  9 05:00:24 np0005478303 systemd: Starting Apply Kernel Variables...
Oct  9 05:00:24 np0005478303 systemd: Starting Coldplug All udev Devices...
Oct  9 05:00:24 np0005478303 systemd-journald[647]: Journal started
Oct  9 05:00:24 np0005478303 systemd-journald[647]: Runtime Journal (/run/log/journal/42833e1b511a402df82cb9cb2fc36491) is 8.0M, max 153.6M, 145.6M free.
Oct  9 05:00:24 np0005478303 systemd[1]: Queued start job for default target Multi-User System.
Oct  9 05:00:24 np0005478303 systemd[1]: systemd-journald.service: Deactivated successfully.
Oct  9 05:00:24 np0005478303 systemd: Mounted Huge Pages File System.
Oct  9 05:00:24 np0005478303 systemd: Started Journal Service.
Oct  9 05:00:24 np0005478303 systemd[1]: Mounted POSIX Message Queue File System.
Oct  9 05:00:24 np0005478303 kernel: xfs filesystem being remounted at / supports timestamps until 2038 (0x7fffffff)
Oct  9 05:00:24 np0005478303 systemd[1]: Mounted Kernel Debug File System.
Oct  9 05:00:24 np0005478303 systemd[1]: Mounted Kernel Trace File System.
Oct  9 05:00:24 np0005478303 systemd[1]: Finished Create List of Static Device Nodes.
Oct  9 05:00:24 np0005478303 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Oct  9 05:00:24 np0005478303 systemd[1]: Finished Load Kernel Module configfs.
Oct  9 05:00:24 np0005478303 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Oct  9 05:00:24 np0005478303 systemd[1]: Finished Load Kernel Module efi_pstore.
Oct  9 05:00:24 np0005478303 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Oct  9 05:00:24 np0005478303 systemd[1]: Finished Load Kernel Module fuse.
Oct  9 05:00:24 np0005478303 systemd[1]: Finished Read and set NIS domainname from /etc/sysconfig/network.
Oct  9 05:00:24 np0005478303 systemd[1]: Finished Generate network units from Kernel command line.
Oct  9 05:00:24 np0005478303 systemd[1]: Finished Remount Root and Kernel File Systems.
Oct  9 05:00:24 np0005478303 systemd[1]: Finished Apply Kernel Variables.
Oct  9 05:00:24 np0005478303 systemd[1]: Mounting FUSE Control File System...
Oct  9 05:00:24 np0005478303 systemd[1]: First Boot Wizard was skipped because of an unmet condition check (ConditionFirstBoot=yes).
Oct  9 05:00:24 np0005478303 systemd[1]: Starting Rebuild Hardware Database...
Oct  9 05:00:24 np0005478303 systemd[1]: Starting Flush Journal to Persistent Storage...
Oct  9 05:00:24 np0005478303 systemd[1]: Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Oct  9 05:00:24 np0005478303 kernel: ACPI: bus type drm_connector registered
Oct  9 05:00:24 np0005478303 systemd[1]: Starting Load/Save OS Random Seed...
Oct  9 05:00:24 np0005478303 systemd[1]: Starting Create System Users...
Oct  9 05:00:24 np0005478303 systemd[1]: modprobe@drm.service: Deactivated successfully.
Oct  9 05:00:24 np0005478303 systemd[1]: Finished Load Kernel Module drm.
Oct  9 05:00:24 np0005478303 systemd[1]: Mounted FUSE Control File System.
Oct  9 05:00:24 np0005478303 systemd-journald[647]: Runtime Journal (/run/log/journal/42833e1b511a402df82cb9cb2fc36491) is 8.0M, max 153.6M, 145.6M free.
Oct  9 05:00:24 np0005478303 systemd-journald[647]: Received client request to flush runtime journal.
Oct  9 05:00:24 np0005478303 systemd[1]: Finished Flush Journal to Persistent Storage.
Oct  9 05:00:24 np0005478303 systemd[1]: Finished Load/Save OS Random Seed.
Oct  9 05:00:24 np0005478303 systemd[1]: First Boot Complete was skipped because of an unmet condition check (ConditionFirstBoot=yes).
Oct  9 05:00:24 np0005478303 systemd[1]: Finished Create System Users.
Oct  9 05:00:24 np0005478303 systemd[1]: Starting Create Static Device Nodes in /dev...
Oct  9 05:00:24 np0005478303 systemd[1]: Finished Coldplug All udev Devices.
Oct  9 05:00:24 np0005478303 systemd[1]: Finished Create Static Device Nodes in /dev.
Oct  9 05:00:24 np0005478303 systemd[1]: Reached target Preparation for Local File Systems.
Oct  9 05:00:24 np0005478303 systemd[1]: Reached target Local File Systems.
Oct  9 05:00:24 np0005478303 systemd[1]: Starting Rebuild Dynamic Linker Cache...
Oct  9 05:00:24 np0005478303 systemd[1]: Mark the need to relabel after reboot was skipped because of an unmet condition check (ConditionSecurity=!selinux).
Oct  9 05:00:24 np0005478303 systemd[1]: Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Oct  9 05:00:24 np0005478303 systemd[1]: Update Boot Loader Random Seed was skipped because no trigger condition checks were met.
Oct  9 05:00:24 np0005478303 systemd[1]: Starting Automatic Boot Loader Update...
Oct  9 05:00:24 np0005478303 systemd[1]: Commit a transient machine-id on disk was skipped because of an unmet condition check (ConditionPathIsMountPoint=/etc/machine-id).
Oct  9 05:00:24 np0005478303 systemd[1]: Starting Create Volatile Files and Directories...
Oct  9 05:00:24 np0005478303 bootctl[665]: Couldn't find EFI system partition, skipping.
Oct  9 05:00:24 np0005478303 systemd[1]: Finished Automatic Boot Loader Update.
Oct  9 05:00:24 np0005478303 systemd[1]: Finished Create Volatile Files and Directories.
Oct  9 05:00:24 np0005478303 systemd[1]: Starting Security Auditing Service...
Oct  9 05:00:24 np0005478303 systemd[1]: Starting RPC Bind...
Oct  9 05:00:24 np0005478303 systemd[1]: Starting Rebuild Journal Catalog...
Oct  9 05:00:24 np0005478303 auditd[672]: audit dispatcher initialized with q_depth=2000 and 1 active plugins
Oct  9 05:00:24 np0005478303 auditd[672]: Init complete, auditd 3.1.5 listening for events (startup state enable)
Oct  9 05:00:24 np0005478303 systemd[1]: Started RPC Bind.
Oct  9 05:00:24 np0005478303 systemd[1]: Finished Rebuild Journal Catalog.
Oct  9 05:00:24 np0005478303 augenrules[677]: /sbin/augenrules: No change
Oct  9 05:00:24 np0005478303 augenrules[692]: No rules
Oct  9 05:00:24 np0005478303 augenrules[692]: enabled 1
Oct  9 05:00:24 np0005478303 augenrules[692]: failure 1
Oct  9 05:00:24 np0005478303 augenrules[692]: pid 672
Oct  9 05:00:24 np0005478303 augenrules[692]: rate_limit 0
Oct  9 05:00:24 np0005478303 augenrules[692]: backlog_limit 8192
Oct  9 05:00:24 np0005478303 augenrules[692]: lost 0
Oct  9 05:00:24 np0005478303 augenrules[692]: backlog 2
Oct  9 05:00:24 np0005478303 augenrules[692]: backlog_wait_time 60000
Oct  9 05:00:24 np0005478303 augenrules[692]: backlog_wait_time_actual 0
Oct  9 05:00:24 np0005478303 augenrules[692]: enabled 1
Oct  9 05:00:24 np0005478303 augenrules[692]: failure 1
Oct  9 05:00:24 np0005478303 augenrules[692]: pid 672
Oct  9 05:00:24 np0005478303 augenrules[692]: rate_limit 0
Oct  9 05:00:24 np0005478303 augenrules[692]: backlog_limit 8192
Oct  9 05:00:24 np0005478303 augenrules[692]: lost 0
Oct  9 05:00:24 np0005478303 augenrules[692]: backlog 1
Oct  9 05:00:24 np0005478303 augenrules[692]: backlog_wait_time 60000
Oct  9 05:00:24 np0005478303 augenrules[692]: backlog_wait_time_actual 0
Oct  9 05:00:24 np0005478303 augenrules[692]: enabled 1
Oct  9 05:00:24 np0005478303 augenrules[692]: failure 1
Oct  9 05:00:24 np0005478303 augenrules[692]: pid 672
Oct  9 05:00:24 np0005478303 augenrules[692]: rate_limit 0
Oct  9 05:00:24 np0005478303 augenrules[692]: backlog_limit 8192
Oct  9 05:00:24 np0005478303 augenrules[692]: lost 0
Oct  9 05:00:24 np0005478303 augenrules[692]: backlog 0
Oct  9 05:00:24 np0005478303 augenrules[692]: backlog_wait_time 60000
Oct  9 05:00:24 np0005478303 augenrules[692]: backlog_wait_time_actual 0
Oct  9 05:00:24 np0005478303 systemd[1]: Started Security Auditing Service.
Oct  9 05:00:24 np0005478303 systemd[1]: Starting Record System Boot/Shutdown in UTMP...
Oct  9 05:00:24 np0005478303 systemd[1]: Finished Record System Boot/Shutdown in UTMP.
Oct  9 05:00:24 np0005478303 systemd[1]: Finished Rebuild Dynamic Linker Cache.
Oct  9 05:00:24 np0005478303 systemd[1]: Finished Rebuild Hardware Database.
Oct  9 05:00:24 np0005478303 systemd[1]: Starting Rule-based Manager for Device Events and Files...
Oct  9 05:00:24 np0005478303 systemd[1]: Starting Update is Completed...
Oct  9 05:00:24 np0005478303 systemd[1]: Finished Update is Completed.
Oct  9 05:00:24 np0005478303 systemd-udevd[700]: Using default interface naming scheme 'rhel-9.0'.
Oct  9 05:00:24 np0005478303 systemd[1]: Started Rule-based Manager for Device Events and Files.
Oct  9 05:00:24 np0005478303 systemd[1]: Reached target System Initialization.
Oct  9 05:00:24 np0005478303 systemd[1]: Started dnf makecache --timer.
Oct  9 05:00:24 np0005478303 systemd[1]: Started Daily rotation of log files.
Oct  9 05:00:24 np0005478303 systemd[1]: Started Daily Cleanup of Temporary Directories.
Oct  9 05:00:24 np0005478303 systemd[1]: Reached target Timer Units.
Oct  9 05:00:24 np0005478303 systemd[1]: Listening on D-Bus System Message Bus Socket.
Oct  9 05:00:24 np0005478303 systemd[1]: Listening on SSSD Kerberos Cache Manager responder socket.
Oct  9 05:00:24 np0005478303 systemd[1]: Reached target Socket Units.
Oct  9 05:00:24 np0005478303 systemd[1]: Starting D-Bus System Message Bus...
Oct  9 05:00:24 np0005478303 systemd[1]: TPM2 PCR Barrier (Initialization) was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Oct  9 05:00:24 np0005478303 systemd[1]: Condition check resulted in /dev/ttyS0 being skipped.
Oct  9 05:00:24 np0005478303 systemd[1]: Starting Load Kernel Module configfs...
Oct  9 05:00:24 np0005478303 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Oct  9 05:00:24 np0005478303 systemd[1]: Finished Load Kernel Module configfs.
Oct  9 05:00:24 np0005478303 systemd-udevd[711]: Network interface NamePolicy= disabled on kernel command line.
Oct  9 05:00:24 np0005478303 systemd[1]: Started D-Bus System Message Bus.
Oct  9 05:00:24 np0005478303 systemd[1]: Reached target Basic System.
Oct  9 05:00:24 np0005478303 dbus-broker-lau[724]: Ready
Oct  9 05:00:24 np0005478303 systemd[1]: Starting NTP client/server...
Oct  9 05:00:24 np0005478303 systemd[1]: Starting Cloud-init: Local Stage (pre-network)...
Oct  9 05:00:24 np0005478303 systemd[1]: Starting Restore /run/initramfs on shutdown...
Oct  9 05:00:24 np0005478303 systemd[1]: Starting IPv4 firewall with iptables...
Oct  9 05:00:24 np0005478303 systemd[1]: Started irqbalance daemon.
Oct  9 05:00:24 np0005478303 systemd[1]: Load CPU microcode update was skipped because of an unmet condition check (ConditionPathExists=/sys/devices/system/cpu/microcode/reload).
Oct  9 05:00:24 np0005478303 systemd[1]: OpenSSH ecdsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Oct  9 05:00:24 np0005478303 systemd[1]: OpenSSH ed25519 Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Oct  9 05:00:24 np0005478303 systemd[1]: OpenSSH rsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Oct  9 05:00:24 np0005478303 systemd[1]: Reached target sshd-keygen.target.
Oct  9 05:00:24 np0005478303 systemd[1]: System Security Services Daemon was skipped because no trigger condition checks were met.
Oct  9 05:00:24 np0005478303 systemd[1]: Reached target User and Group Name Lookups.
Oct  9 05:00:24 np0005478303 systemd[1]: Starting User Login Management...
Oct  9 05:00:24 np0005478303 systemd[1]: Finished Restore /run/initramfs on shutdown.
Oct  9 05:00:24 np0005478303 chronyd[753]: chronyd version 4.6.1 starting (+CMDMON +NTP +REFCLOCK +RTC +PRIVDROP +SCFILTER +SIGND +ASYNCDNS +NTS +SECHASH +IPV6 +DEBUG)
Oct  9 05:00:24 np0005478303 chronyd[753]: Loaded 0 symmetric keys
Oct  9 05:00:24 np0005478303 chronyd[753]: Using right/UTC timezone to obtain leap second data
Oct  9 05:00:24 np0005478303 chronyd[753]: Loaded seccomp filter (level 2)
Oct  9 05:00:24 np0005478303 systemd[1]: Started NTP client/server.
Oct  9 05:00:24 np0005478303 systemd-logind[745]: Watching system buttons on /dev/input/event0 (Power Button)
Oct  9 05:00:24 np0005478303 systemd-logind[745]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard)
Oct  9 05:00:24 np0005478303 systemd-logind[745]: New seat seat0.
Oct  9 05:00:24 np0005478303 systemd[1]: Started User Login Management.
Oct  9 05:00:24 np0005478303 kernel: lpc_ich 0000:00:1f.0: I/O space for GPIO uninitialized
Oct  9 05:00:24 np0005478303 kernel: Warning: Deprecated Driver is detected: nft_compat will not be maintained in a future major release and may be disabled
Oct  9 05:00:24 np0005478303 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
Oct  9 05:00:24 np0005478303 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
Oct  9 05:00:24 np0005478303 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Oct  9 05:00:24 np0005478303 kernel: Warning: Deprecated Driver is detected: nft_compat_module_init will not be maintained in a future major release and may be disabled
Oct  9 05:00:24 np0005478303 kernel: input: PC Speaker as /devices/platform/pcspkr/input/input6
Oct  9 05:00:25 np0005478303 iptables.init[738]: iptables: Applying firewall rules: [  OK  ]
Oct  9 05:00:25 np0005478303 systemd[1]: Finished IPv4 firewall with iptables.
Oct  9 05:00:25 np0005478303 kernel: iTCO_vendor_support: vendor-support=0
Oct  9 05:00:25 np0005478303 kernel: iTCO_wdt iTCO_wdt.1.auto: Found a ICH9 TCO device (Version=2, TCOBASE=0x0660)
Oct  9 05:00:25 np0005478303 kernel: iTCO_wdt iTCO_wdt.1.auto: initialized. heartbeat=30 sec (nowayout=0)
Oct  9 05:00:25 np0005478303 kernel: [drm] pci: virtio-vga detected at 0000:00:01.0
Oct  9 05:00:25 np0005478303 kernel: virtio-pci 0000:00:01.0: vgaarb: deactivate vga console
Oct  9 05:00:25 np0005478303 kernel: Console: switching to colour dummy device 80x25
Oct  9 05:00:25 np0005478303 kernel: [drm] features: -virgl +edid -resource_blob -host_visible
Oct  9 05:00:25 np0005478303 kernel: [drm] features: -context_init
Oct  9 05:00:25 np0005478303 kernel: [drm] number of scanouts: 1
Oct  9 05:00:25 np0005478303 kernel: [drm] number of cap sets: 0
Oct  9 05:00:25 np0005478303 kernel: [drm] Initialized virtio_gpu 0.1.0 for 0000:00:01.0 on minor 0
Oct  9 05:00:25 np0005478303 kernel: fbcon: virtio_gpudrmfb (fb0) is primary device
Oct  9 05:00:25 np0005478303 kernel: Console: switching to colour frame buffer device 160x50
Oct  9 05:00:25 np0005478303 kernel: virtio-pci 0000:00:01.0: [drm] fb0: virtio_gpudrmfb frame buffer device
Oct  9 05:00:25 np0005478303 kernel: kvm_amd: TSC scaling supported
Oct  9 05:00:25 np0005478303 kernel: kvm_amd: Nested Virtualization enabled
Oct  9 05:00:25 np0005478303 kernel: kvm_amd: Nested Paging enabled
Oct  9 05:00:25 np0005478303 kernel: kvm_amd: LBR virtualization supported
Oct  9 05:00:25 np0005478303 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported
Oct  9 05:00:25 np0005478303 kernel: kvm_amd: Virtual GIF supported
Oct  9 05:00:25 np0005478303 cloud-init[792]: Cloud-init v. 24.4-7.el9 running 'init-local' at Thu, 09 Oct 2025 09:00:25 +0000. Up 4.94 seconds.
Oct  9 05:00:25 np0005478303 systemd[1]: run-cloud\x2dinit-tmp-tmpos6kurhj.mount: Deactivated successfully.
Oct  9 05:00:25 np0005478303 systemd[1]: Starting Hostname Service...
Oct  9 05:00:25 np0005478303 systemd[1]: Started Hostname Service.
Oct  9 05:00:25 np0005478303 systemd-hostnamed[806]: Hostname set to <np0005478303> (static)
Oct  9 05:00:25 np0005478303 systemd[1]: Finished Cloud-init: Local Stage (pre-network).
Oct  9 05:00:25 np0005478303 systemd[1]: Reached target Preparation for Network.
Oct  9 05:00:25 np0005478303 systemd[1]: Starting Network Manager...
Oct  9 05:00:25 np0005478303 NetworkManager[810]: <info>  [1760000425.7321] NetworkManager (version 1.54.1-1.el9) is starting... (boot:e0b7fcdd-1586-415d-8058-c87bd65cc6fe)
Oct  9 05:00:25 np0005478303 NetworkManager[810]: <info>  [1760000425.7325] Read config: /etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf, /etc/NetworkManager/conf.d/99-cloud-init.conf
Oct  9 05:00:25 np0005478303 NetworkManager[810]: <info>  [1760000425.7438] manager[0x56480fa44080]: monitoring kernel firmware directory '/lib/firmware'.
Oct  9 05:00:25 np0005478303 NetworkManager[810]: <info>  [1760000425.7474] hostname: hostname: using hostnamed
Oct  9 05:00:25 np0005478303 NetworkManager[810]: <info>  [1760000425.7474] hostname: static hostname changed from (none) to "np0005478303"
Oct  9 05:00:25 np0005478303 NetworkManager[810]: <info>  [1760000425.7478] dns-mgr: init: dns=none,systemd-resolved rc-manager=unmanaged
Oct  9 05:00:25 np0005478303 NetworkManager[810]: <info>  [1760000425.7573] manager[0x56480fa44080]: rfkill: Wi-Fi hardware radio set enabled
Oct  9 05:00:25 np0005478303 NetworkManager[810]: <info>  [1760000425.7573] manager[0x56480fa44080]: rfkill: WWAN hardware radio set enabled
Oct  9 05:00:25 np0005478303 systemd[1]: Listening on Load/Save RF Kill Switch Status /dev/rfkill Watch.
Oct  9 05:00:25 np0005478303 NetworkManager[810]: <info>  [1760000425.7617] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-device-plugin-team.so)
Oct  9 05:00:25 np0005478303 NetworkManager[810]: <info>  [1760000425.7617] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Oct  9 05:00:25 np0005478303 NetworkManager[810]: <info>  [1760000425.7617] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Oct  9 05:00:25 np0005478303 NetworkManager[810]: <info>  [1760000425.7618] manager: Networking is enabled by state file
Oct  9 05:00:25 np0005478303 NetworkManager[810]: <info>  [1760000425.7619] settings: Loaded settings plugin: keyfile (internal)
Oct  9 05:00:25 np0005478303 NetworkManager[810]: <info>  [1760000425.7639] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-settings-plugin-ifcfg-rh.so")
Oct  9 05:00:25 np0005478303 NetworkManager[810]: <info>  [1760000425.7657] Warning: the ifcfg-rh plugin is deprecated, please migrate connections to the keyfile format using "nmcli connection migrate"
Oct  9 05:00:25 np0005478303 NetworkManager[810]: <info>  [1760000425.7675] dhcp: init: Using DHCP client 'internal'
Oct  9 05:00:25 np0005478303 NetworkManager[810]: <info>  [1760000425.7677] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1)
Oct  9 05:00:25 np0005478303 NetworkManager[810]: <info>  [1760000425.7689] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  9 05:00:25 np0005478303 NetworkManager[810]: <info>  [1760000425.7698] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Oct  9 05:00:25 np0005478303 NetworkManager[810]: <info>  [1760000425.7703] device (lo): Activation: starting connection 'lo' (536fd1ae-144f-4da0-bdc6-b373fcef3967)
Oct  9 05:00:25 np0005478303 NetworkManager[810]: <info>  [1760000425.7712] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
Oct  9 05:00:25 np0005478303 NetworkManager[810]: <info>  [1760000425.7715] device (eth0): state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct  9 05:00:25 np0005478303 systemd[1]: Starting Network Manager Script Dispatcher Service...
Oct  9 05:00:25 np0005478303 NetworkManager[810]: <info>  [1760000425.7737] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Oct  9 05:00:25 np0005478303 NetworkManager[810]: <info>  [1760000425.7740] device (lo): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Oct  9 05:00:25 np0005478303 NetworkManager[810]: <info>  [1760000425.7742] device (lo): state change: prepare -> config (reason 'none', managed-type: 'external')
Oct  9 05:00:25 np0005478303 systemd[1]: Started Network Manager.
Oct  9 05:00:25 np0005478303 NetworkManager[810]: <info>  [1760000425.7743] device (lo): state change: config -> ip-config (reason 'none', managed-type: 'external')
Oct  9 05:00:25 np0005478303 NetworkManager[810]: <info>  [1760000425.7744] device (eth0): carrier: link connected
Oct  9 05:00:25 np0005478303 NetworkManager[810]: <info>  [1760000425.7746] device (lo): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Oct  9 05:00:25 np0005478303 NetworkManager[810]: <info>  [1760000425.7751] device (eth0): state change: unavailable -> disconnected (reason 'carrier-changed', managed-type: 'full')
Oct  9 05:00:25 np0005478303 systemd[1]: Reached target Network.
Oct  9 05:00:25 np0005478303 NetworkManager[810]: <info>  [1760000425.7757] policy: auto-activating connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Oct  9 05:00:25 np0005478303 NetworkManager[810]: <info>  [1760000425.7761] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Oct  9 05:00:25 np0005478303 NetworkManager[810]: <info>  [1760000425.7761] device (eth0): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct  9 05:00:25 np0005478303 NetworkManager[810]: <info>  [1760000425.7763] manager: NetworkManager state is now CONNECTING
Oct  9 05:00:25 np0005478303 NetworkManager[810]: <info>  [1760000425.7763] device (eth0): state change: prepare -> config (reason 'none', managed-type: 'full')
Oct  9 05:00:25 np0005478303 NetworkManager[810]: <info>  [1760000425.7768] device (eth0): state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct  9 05:00:25 np0005478303 NetworkManager[810]: <info>  [1760000425.7772] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Oct  9 05:00:25 np0005478303 NetworkManager[810]: <info>  [1760000425.7775] policy: set 'System eth0' (eth0) as default for IPv6 routing and DNS
Oct  9 05:00:25 np0005478303 systemd[1]: Starting Network Manager Wait Online...
Oct  9 05:00:25 np0005478303 systemd[1]: Starting GSSAPI Proxy Daemon...
Oct  9 05:00:25 np0005478303 NetworkManager[810]: <info>  [1760000425.7829] dhcp4 (eth0): state changed new lease, address=192.168.26.45
Oct  9 05:00:25 np0005478303 NetworkManager[810]: <info>  [1760000425.7835] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS
Oct  9 05:00:25 np0005478303 systemd[1]: Started Network Manager Script Dispatcher Service.
Oct  9 05:00:25 np0005478303 NetworkManager[810]: <info>  [1760000425.7887] device (lo): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Oct  9 05:00:25 np0005478303 NetworkManager[810]: <info>  [1760000425.7891] device (lo): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Oct  9 05:00:25 np0005478303 NetworkManager[810]: <info>  [1760000425.7898] device (lo): Activation: successful, device activated.
Oct  9 05:00:25 np0005478303 systemd[1]: Started GSSAPI Proxy Daemon.
Oct  9 05:00:25 np0005478303 systemd[1]: RPC security service for NFS client and server was skipped because of an unmet condition check (ConditionPathExists=/etc/krb5.keytab).
Oct  9 05:00:25 np0005478303 systemd[1]: Reached target NFS client services.
Oct  9 05:00:25 np0005478303 systemd[1]: Reached target Preparation for Remote File Systems.
Oct  9 05:00:25 np0005478303 systemd[1]: Reached target Remote File Systems.
Oct  9 05:00:25 np0005478303 systemd[1]: TPM2 PCR Barrier (User) was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Oct  9 05:00:27 np0005478303 NetworkManager[810]: <info>  [1760000427.4822] dhcp6 (eth0): activation: beginning transaction (timeout in 45 seconds)
Oct  9 05:00:28 np0005478303 NetworkManager[810]: <info>  [1760000428.5007] dhcp6 (eth0): state changed new lease, address=2001:db8::24
Oct  9 05:00:30 np0005478303 NetworkManager[810]: <info>  [1760000430.1707] device (eth0): state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct  9 05:00:30 np0005478303 NetworkManager[810]: <info>  [1760000430.1740] device (eth0): state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct  9 05:00:30 np0005478303 NetworkManager[810]: <info>  [1760000430.1741] device (eth0): state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct  9 05:00:30 np0005478303 NetworkManager[810]: <info>  [1760000430.1744] manager: NetworkManager state is now CONNECTED_SITE
Oct  9 05:00:30 np0005478303 NetworkManager[810]: <info>  [1760000430.1747] device (eth0): Activation: successful, device activated.
Oct  9 05:00:30 np0005478303 NetworkManager[810]: <info>  [1760000430.1751] manager: NetworkManager state is now CONNECTED_GLOBAL
Oct  9 05:00:30 np0005478303 NetworkManager[810]: <info>  [1760000430.1753] manager: startup complete
Oct  9 05:00:30 np0005478303 systemd[1]: Finished Network Manager Wait Online.
Oct  9 05:00:30 np0005478303 systemd[1]: Starting Cloud-init: Network Stage...
Oct  9 05:00:30 np0005478303 cloud-init[876]: Cloud-init v. 24.4-7.el9 running 'init' at Thu, 09 Oct 2025 09:00:30 +0000. Up 10.04 seconds.
Oct  9 05:00:30 np0005478303 cloud-init[876]: ci-info: +++++++++++++++++++++++++++++++++++++++Net device info+++++++++++++++++++++++++++++++++++++++
Oct  9 05:00:30 np0005478303 cloud-init[876]: ci-info: +--------+------+------------------------------+---------------+--------+-------------------+
Oct  9 05:00:30 np0005478303 cloud-init[876]: ci-info: | Device |  Up  |           Address            |      Mask     | Scope  |     Hw-Address    |
Oct  9 05:00:30 np0005478303 cloud-init[876]: ci-info: +--------+------+------------------------------+---------------+--------+-------------------+
Oct  9 05:00:30 np0005478303 cloud-init[876]: ci-info: |  eth0  | True |        192.168.26.45         | 255.255.255.0 | global | fa:16:3e:c5:2c:a9 |
Oct  9 05:00:30 np0005478303 cloud-init[876]: ci-info: |  eth0  | True |       2001:db8::24/128       |       .       | global | fa:16:3e:c5:2c:a9 |
Oct  9 05:00:30 np0005478303 cloud-init[876]: ci-info: |  eth0  | True | fe80::f816:3eff:fec5:2ca9/64 |       .       |  link  | fa:16:3e:c5:2c:a9 |
Oct  9 05:00:30 np0005478303 cloud-init[876]: ci-info: |   lo   | True |          127.0.0.1           |   255.0.0.0   |  host  |         .         |
Oct  9 05:00:30 np0005478303 cloud-init[876]: ci-info: |   lo   | True |           ::1/128            |       .       |  host  |         .         |
Oct  9 05:00:30 np0005478303 cloud-init[876]: ci-info: +--------+------+------------------------------+---------------+--------+-------------------+
Oct  9 05:00:30 np0005478303 cloud-init[876]: ci-info: ++++++++++++++++++++++++++++++++Route IPv4 info+++++++++++++++++++++++++++++++++
Oct  9 05:00:30 np0005478303 cloud-init[876]: ci-info: +-------+-----------------+--------------+-----------------+-----------+-------+
Oct  9 05:00:30 np0005478303 cloud-init[876]: ci-info: | Route |   Destination   |   Gateway    |     Genmask     | Interface | Flags |
Oct  9 05:00:30 np0005478303 cloud-init[876]: ci-info: +-------+-----------------+--------------+-----------------+-----------+-------+
Oct  9 05:00:30 np0005478303 cloud-init[876]: ci-info: |   0   |     0.0.0.0     | 192.168.26.1 |     0.0.0.0     |    eth0   |   UG  |
Oct  9 05:00:30 np0005478303 cloud-init[876]: ci-info: |   1   | 169.254.169.254 | 192.168.26.2 | 255.255.255.255 |    eth0   |  UGH  |
Oct  9 05:00:30 np0005478303 cloud-init[876]: ci-info: |   2   |   192.168.26.0  |   0.0.0.0    |  255.255.255.0  |    eth0   |   U   |
Oct  9 05:00:30 np0005478303 cloud-init[876]: ci-info: +-------+-----------------+--------------+-----------------+-----------+-------+
Oct  9 05:00:30 np0005478303 cloud-init[876]: ci-info: +++++++++++++++++++++Route IPv6 info++++++++++++++++++++++
Oct  9 05:00:30 np0005478303 cloud-init[876]: ci-info: +-------+--------------+-------------+-----------+-------+
Oct  9 05:00:30 np0005478303 cloud-init[876]: ci-info: | Route | Destination  |   Gateway   | Interface | Flags |
Oct  9 05:00:30 np0005478303 cloud-init[876]: ci-info: +-------+--------------+-------------+-----------+-------+
Oct  9 05:00:30 np0005478303 cloud-init[876]: ci-info: |   1   | 2001:db8::1  |      ::     |    eth0   |   U   |
Oct  9 05:00:30 np0005478303 cloud-init[876]: ci-info: |   2   | 2001:db8::24 |      ::     |    eth0   |   U   |
Oct  9 05:00:30 np0005478303 cloud-init[876]: ci-info: |   3   |  fe80::/64   |      ::     |    eth0   |   U   |
Oct  9 05:00:30 np0005478303 cloud-init[876]: ci-info: |   4   |     ::/0     | 2001:db8::1 |    eth0   |   UG  |
Oct  9 05:00:30 np0005478303 cloud-init[876]: ci-info: |   6   |    local     |      ::     |    eth0   |   U   |
Oct  9 05:00:30 np0005478303 cloud-init[876]: ci-info: |   7   |    local     |      ::     |    eth0   |   U   |
Oct  9 05:00:30 np0005478303 cloud-init[876]: ci-info: |   8   |  multicast   |      ::     |    eth0   |   U   |
Oct  9 05:00:30 np0005478303 cloud-init[876]: ci-info: +-------+--------------+-------------+-----------+-------+
Oct  9 05:00:30 np0005478303 chronyd[753]: Selected source 66.59.198.94 (2.centos.pool.ntp.org)
Oct  9 05:00:30 np0005478303 chronyd[753]: System clock TAI offset set to 37 seconds
Oct  9 05:00:31 np0005478303 cloud-init[876]: Generating public/private rsa key pair.
Oct  9 05:00:31 np0005478303 cloud-init[876]: Your identification has been saved in /etc/ssh/ssh_host_rsa_key
Oct  9 05:00:31 np0005478303 cloud-init[876]: Your public key has been saved in /etc/ssh/ssh_host_rsa_key.pub
Oct  9 05:00:31 np0005478303 cloud-init[876]: The key fingerprint is:
Oct  9 05:00:31 np0005478303 cloud-init[876]: SHA256:qIvzSJ4z2OwNxxKALmXLulghD6qVzn25BjhyT2XIb+E root@np0005478303
Oct  9 05:00:31 np0005478303 cloud-init[876]: The key's randomart image is:
Oct  9 05:00:31 np0005478303 cloud-init[876]: +---[RSA 3072]----+
Oct  9 05:00:31 np0005478303 cloud-init[876]: |                 |
Oct  9 05:00:31 np0005478303 cloud-init[876]: |.                |
Oct  9 05:00:31 np0005478303 cloud-init[876]: |o o. .           |
Oct  9 05:00:31 np0005478303 cloud-init[876]: |.= .o +.         |
Oct  9 05:00:31 np0005478303 cloud-init[876]: |+.=. =..S        |
Oct  9 05:00:31 np0005478303 cloud-init[876]: |+=+=o.E          |
Oct  9 05:00:31 np0005478303 cloud-init[876]: |o*O++o .         |
Oct  9 05:00:31 np0005478303 cloud-init[876]: |+O*X..+          |
Oct  9 05:00:31 np0005478303 cloud-init[876]: |+.O==o..         |
Oct  9 05:00:31 np0005478303 cloud-init[876]: +----[SHA256]-----+
Oct  9 05:00:31 np0005478303 cloud-init[876]: Generating public/private ecdsa key pair.
Oct  9 05:00:31 np0005478303 cloud-init[876]: Your identification has been saved in /etc/ssh/ssh_host_ecdsa_key
Oct  9 05:00:31 np0005478303 cloud-init[876]: Your public key has been saved in /etc/ssh/ssh_host_ecdsa_key.pub
Oct  9 05:00:31 np0005478303 cloud-init[876]: The key fingerprint is:
Oct  9 05:00:31 np0005478303 cloud-init[876]: SHA256:W+3tUDh7kARI2Zgbg0SJC3J8dCbXsgJuZvjeZDo/BgM root@np0005478303
Oct  9 05:00:31 np0005478303 cloud-init[876]: The key's randomart image is:
Oct  9 05:00:31 np0005478303 cloud-init[876]: +---[ECDSA 256]---+
Oct  9 05:00:31 np0005478303 cloud-init[876]: | . .o+*=.*.      |
Oct  9 05:00:31 np0005478303 cloud-init[876]: |. = o=+ O ..     |
Oct  9 05:00:31 np0005478303 cloud-init[876]: | = + . o +  .    |
Oct  9 05:00:31 np0005478303 cloud-init[876]: |E = o . .  o o   |
Oct  9 05:00:31 np0005478303 cloud-init[876]: | *   .  S . * .  |
Oct  9 05:00:31 np0005478303 cloud-init[876]: |  + o    o . *   |
Oct  9 05:00:31 np0005478303 cloud-init[876]: | . B    .   + o  |
Oct  9 05:00:31 np0005478303 cloud-init[876]: |  + +        +   |
Oct  9 05:00:31 np0005478303 cloud-init[876]: |   +..        .  |
Oct  9 05:00:31 np0005478303 cloud-init[876]: +----[SHA256]-----+
Oct  9 05:00:31 np0005478303 cloud-init[876]: Generating public/private ed25519 key pair.
Oct  9 05:00:31 np0005478303 cloud-init[876]: Your identification has been saved in /etc/ssh/ssh_host_ed25519_key
Oct  9 05:00:31 np0005478303 cloud-init[876]: Your public key has been saved in /etc/ssh/ssh_host_ed25519_key.pub
Oct  9 05:00:31 np0005478303 cloud-init[876]: The key fingerprint is:
Oct  9 05:00:31 np0005478303 cloud-init[876]: SHA256:l6/u5LDRZoTeiIpfWp/9LL9I1qX2VuqUsxWGIUe77zM root@np0005478303
Oct  9 05:00:31 np0005478303 cloud-init[876]: The key's randomart image is:
Oct  9 05:00:31 np0005478303 cloud-init[876]: +--[ED25519 256]--+
Oct  9 05:00:31 np0005478303 cloud-init[876]: |             .   |
Oct  9 05:00:31 np0005478303 cloud-init[876]: |            . .  |
Oct  9 05:00:31 np0005478303 cloud-init[876]: |           . +   |
Oct  9 05:00:31 np0005478303 cloud-init[876]: |         . .o +  |
Oct  9 05:00:31 np0005478303 cloud-init[876]: |        S +  o.o |
Oct  9 05:00:31 np0005478303 cloud-init[876]: |       o * o oo.o|
Oct  9 05:00:31 np0005478303 cloud-init[876]: |      + = O = ++.|
Oct  9 05:00:31 np0005478303 cloud-init[876]: |   . = . &.= o+E |
Oct  9 05:00:31 np0005478303 cloud-init[876]: |  ..+   +oB+=++.o|
Oct  9 05:00:31 np0005478303 cloud-init[876]: +----[SHA256]-----+
Oct  9 05:00:31 np0005478303 systemd[1]: Finished Cloud-init: Network Stage.
Oct  9 05:00:31 np0005478303 systemd[1]: Reached target Cloud-config availability.
Oct  9 05:00:31 np0005478303 systemd[1]: Reached target Network is Online.
Oct  9 05:00:31 np0005478303 systemd[1]: Starting Cloud-init: Config Stage...
Oct  9 05:00:31 np0005478303 systemd[1]: Starting Notify NFS peers of a restart...
Oct  9 05:00:31 np0005478303 systemd[1]: Starting System Logging Service...
Oct  9 05:00:31 np0005478303 sm-notify[958]: Version 2.5.4 starting
Oct  9 05:00:31 np0005478303 systemd[1]: Starting OpenSSH server daemon...
Oct  9 05:00:31 np0005478303 systemd[1]: Starting Permit User Sessions...
Oct  9 05:00:31 np0005478303 systemd[1]: Started OpenSSH server daemon.
Oct  9 05:00:31 np0005478303 systemd[1]: Started Notify NFS peers of a restart.
Oct  9 05:00:31 np0005478303 systemd[1]: Finished Permit User Sessions.
Oct  9 05:00:31 np0005478303 systemd[1]: Started Command Scheduler.
Oct  9 05:00:31 np0005478303 systemd[1]: Started Getty on tty1.
Oct  9 05:00:31 np0005478303 systemd[1]: Started Serial Getty on ttyS0.
Oct  9 05:00:31 np0005478303 systemd[1]: Reached target Login Prompts.
Oct  9 05:00:31 np0005478303 rsyslogd[959]: [origin software="rsyslogd" swVersion="8.2506.0-2.el9" x-pid="959" x-info="https://www.rsyslog.com"] start
Oct  9 05:00:31 np0005478303 rsyslogd[959]: imjournal: No statefile exists, /var/lib/rsyslog/imjournal.state will be created (ignore if this is first run): No such file or directory [v8.2506.0-2.el9 try https://www.rsyslog.com/e/2040 ]
Oct  9 05:00:31 np0005478303 systemd[1]: Started System Logging Service.
Oct  9 05:00:31 np0005478303 systemd[1]: Reached target Multi-User System.
Oct  9 05:00:31 np0005478303 systemd[1]: Starting Record Runlevel Change in UTMP...
Oct  9 05:00:31 np0005478303 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully.
Oct  9 05:00:31 np0005478303 systemd[1]: Finished Record Runlevel Change in UTMP.
Oct  9 05:00:31 np0005478303 rsyslogd[959]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct  9 05:00:31 np0005478303 cloud-init[972]: Cloud-init v. 24.4-7.el9 running 'modules:config' at Thu, 09 Oct 2025 09:00:31 +0000. Up 11.25 seconds.
Oct  9 05:00:31 np0005478303 systemd[1]: Finished Cloud-init: Config Stage.
Oct  9 05:00:31 np0005478303 systemd[1]: Starting Cloud-init: Final Stage...
Oct  9 05:00:31 np0005478303 cloud-init[976]: Cloud-init v. 24.4-7.el9 running 'modules:final' at Thu, 09 Oct 2025 09:00:31 +0000. Up 11.57 seconds.
Oct  9 05:00:32 np0005478303 cloud-init[978]: #############################################################
Oct  9 05:00:32 np0005478303 cloud-init[979]: -----BEGIN SSH HOST KEY FINGERPRINTS-----
Oct  9 05:00:32 np0005478303 cloud-init[982]: 256 SHA256:W+3tUDh7kARI2Zgbg0SJC3J8dCbXsgJuZvjeZDo/BgM root@np0005478303 (ECDSA)
Oct  9 05:00:32 np0005478303 cloud-init[985]: 256 SHA256:l6/u5LDRZoTeiIpfWp/9LL9I1qX2VuqUsxWGIUe77zM root@np0005478303 (ED25519)
Oct  9 05:00:32 np0005478303 cloud-init[987]: 3072 SHA256:qIvzSJ4z2OwNxxKALmXLulghD6qVzn25BjhyT2XIb+E root@np0005478303 (RSA)
Oct  9 05:00:32 np0005478303 cloud-init[989]: -----END SSH HOST KEY FINGERPRINTS-----
Oct  9 05:00:32 np0005478303 cloud-init[990]: #############################################################
Oct  9 05:00:32 np0005478303 cloud-init[976]: Cloud-init v. 24.4-7.el9 finished at Thu, 09 Oct 2025 09:00:32 +0000. Datasource DataSourceConfigDrive [net,ver=2][source=/dev/sr0].  Up 11.71 seconds
Oct  9 05:00:32 np0005478303 systemd[1]: Finished Cloud-init: Final Stage.
Oct  9 05:00:32 np0005478303 systemd[1]: Reached target Cloud-init target.
Oct  9 05:00:32 np0005478303 systemd[1]: Startup finished in 1.408s (kernel) + 2.004s (initrd) + 8.355s (userspace) = 11.768s.
Oct  9 05:00:35 np0005478303 irqbalance[740]: Cannot change IRQ 45 affinity: Operation not permitted
Oct  9 05:00:35 np0005478303 irqbalance[740]: IRQ 45 affinity is now unmanaged
Oct  9 05:00:35 np0005478303 irqbalance[740]: Cannot change IRQ 44 affinity: Operation not permitted
Oct  9 05:00:35 np0005478303 irqbalance[740]: IRQ 44 affinity is now unmanaged
Oct  9 05:00:35 np0005478303 irqbalance[740]: Cannot change IRQ 42 affinity: Operation not permitted
Oct  9 05:00:35 np0005478303 irqbalance[740]: IRQ 42 affinity is now unmanaged
Oct  9 05:00:40 np0005478303 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Oct  9 05:00:55 np0005478303 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Oct  9 05:01:02 np0005478303 systemd-logind[745]: New session 1 of user zuul.
Oct  9 05:01:02 np0005478303 systemd[1]: Created slice User Slice of UID 1000.
Oct  9 05:01:02 np0005478303 systemd[1]: Starting User Runtime Directory /run/user/1000...
Oct  9 05:01:02 np0005478303 systemd[1]: Finished User Runtime Directory /run/user/1000.
Oct  9 05:01:02 np0005478303 systemd[1]: Starting User Manager for UID 1000...
Oct  9 05:01:02 np0005478303 systemd[1030]: Queued start job for default target Main User Target.
Oct  9 05:01:02 np0005478303 systemd[1030]: Created slice User Application Slice.
Oct  9 05:01:02 np0005478303 systemd[1030]: Started Mark boot as successful after the user session has run 2 minutes.
Oct  9 05:01:02 np0005478303 systemd[1030]: Started Daily Cleanup of User's Temporary Directories.
Oct  9 05:01:02 np0005478303 systemd[1030]: Reached target Paths.
Oct  9 05:01:02 np0005478303 systemd[1030]: Reached target Timers.
Oct  9 05:01:02 np0005478303 systemd[1030]: Starting D-Bus User Message Bus Socket...
Oct  9 05:01:02 np0005478303 systemd[1030]: Starting Create User's Volatile Files and Directories...
Oct  9 05:01:02 np0005478303 systemd[1030]: Listening on D-Bus User Message Bus Socket.
Oct  9 05:01:02 np0005478303 systemd[1030]: Reached target Sockets.
Oct  9 05:01:02 np0005478303 systemd[1030]: Finished Create User's Volatile Files and Directories.
Oct  9 05:01:02 np0005478303 systemd[1030]: Reached target Basic System.
Oct  9 05:01:02 np0005478303 systemd[1030]: Reached target Main User Target.
Oct  9 05:01:02 np0005478303 systemd[1030]: Startup finished in 85ms.
Oct  9 05:01:02 np0005478303 systemd[1]: Started User Manager for UID 1000.
Oct  9 05:01:02 np0005478303 systemd[1]: Started Session 1 of User zuul.
Oct  9 05:01:02 np0005478303 python3[1112]: ansible-setup Invoked with gather_subset=['!all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  9 05:01:05 np0005478303 python3[1140]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  9 05:01:10 np0005478303 python3[1194]: ansible-setup Invoked with gather_subset=['network'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  9 05:01:11 np0005478303 python3[1234]: ansible-zuul_console Invoked with path=/tmp/console-{log_uuid}.log port=19885 state=present
Oct  9 05:01:13 np0005478303 python3[1260]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDJHvXKF+OC4TiCL/aa/o6rq9+SFP7bwIAGJR40fwDShswdP6EsCB3q74rxa7HZk7nAlq9GsqcvEMBnmYvXZUScuzDatbNHHj3L31gOIlnhwqJ+iI2XdTfBbmIf8ccHDrx1xB3Hr6l9Q5eqR06BX9lfG4zf0ZMnKgwxfT7bXERv1O989RrexR2EoG/yjbB1iGKYDIvULj9yB/Lzd91Yva830/7KuOe3mZkeUMPkp7g4dMGF7POukU3bb+UgETc+cweFS+cE2oeZeFxj6d6jKBDkpWNKLJcng32oQUvkUbS53tMgPVCo75ZmBtWas4DZeuhJOIo5dD1eFlOVaBAP+38K/N68/C4UkR/HKomLSssPXAmV6MLWoDu9thuzfr8bgmyZT4hnBveyALdASAffBpfuv8R/2Z6K/F7FIDgew4RyZcKyQjOvsxPqfI+6+Jq4hxxOiGGLQmKsHF+T/crR7fIS8NKaqRy/QwezRy5WD56EvUh4/y9u3fKQK8uVbRdYHb0= zuul-build-sshkey manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct  9 05:01:13 np0005478303 python3[1284]: ansible-file Invoked with state=directory path=/home/zuul/.ssh mode=448 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 05:01:13 np0005478303 python3[1383]: ansible-ansible.legacy.stat Invoked with path=/home/zuul/.ssh/id_rsa follow=False get_checksum=False checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct  9 05:01:14 np0005478303 python3[1454]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1760000473.6910553-252-159126466472830/source dest=/home/zuul/.ssh/id_rsa mode=384 force=False _original_basename=a671a8077fb34b76835f3572668f1b22_id_rsa follow=False checksum=c7f5caef86df45fcb47abb858beda9b774bf09c9 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 05:01:14 np0005478303 python3[1577]: ansible-ansible.legacy.stat Invoked with path=/home/zuul/.ssh/id_rsa.pub follow=False get_checksum=False checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct  9 05:01:14 np0005478303 python3[1648]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1760000474.3002846-307-91265238338978/source dest=/home/zuul/.ssh/id_rsa.pub mode=420 force=False _original_basename=a671a8077fb34b76835f3572668f1b22_id_rsa.pub follow=False checksum=81cf534faaee7eab1d192c4cf78a7f0119953204 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 05:01:15 np0005478303 python3[1696]: ansible-ping Invoked with data=pong
Oct  9 05:01:16 np0005478303 python3[1720]: ansible-setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  9 05:01:18 np0005478303 python3[1774]: ansible-zuul_debug_info Invoked with ipv4_route_required=False ipv6_route_required=False image_manifest_files=['/etc/dib-builddate.txt', '/etc/image-hostname.txt'] image_manifest=None traceroute_host=None
Oct  9 05:01:19 np0005478303 python3[1806]: ansible-file Invoked with path=/home/zuul/zuul-output/logs state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 05:01:19 np0005478303 python3[1830]: ansible-file Invoked with path=/home/zuul/zuul-output/artifacts state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 05:01:19 np0005478303 python3[1854]: ansible-file Invoked with path=/home/zuul/zuul-output/docs state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 05:01:19 np0005478303 python3[1878]: ansible-file Invoked with path=/home/zuul/zuul-output/logs state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 05:01:19 np0005478303 python3[1902]: ansible-file Invoked with path=/home/zuul/zuul-output/artifacts state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 05:01:20 np0005478303 python3[1926]: ansible-file Invoked with path=/home/zuul/zuul-output/docs state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 05:01:21 np0005478303 python3[1952]: ansible-file Invoked with path=/etc/ci state=directory owner=root group=root mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 05:01:21 np0005478303 python3[2030]: ansible-ansible.legacy.stat Invoked with path=/etc/ci/mirror_info.sh follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct  9 05:01:22 np0005478303 python3[2103]: ansible-ansible.legacy.copy Invoked with dest=/etc/ci/mirror_info.sh owner=root group=root mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1760000481.5985785-32-206867854264396/source follow=False _original_basename=mirror_info.sh.j2 checksum=3f92644b791816833989d215b9a84c589a7b8ebd backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 05:01:22 np0005478303 python3[2151]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEA4Z/c9osaGGtU6X8fgELwfj/yayRurfcKA0HMFfdpPxev2dbwljysMuzoVp4OZmW1gvGtyYPSNRvnzgsaabPNKNo2ym5NToCP6UM+KSe93aln4BcM/24mXChYAbXJQ5Bqq/pIzsGs/pKetQN+vwvMxLOwTvpcsCJBXaa981RKML6xj9l/UZ7IIq1HSEKMvPLxZMWdu0Ut8DkCd5F4nOw9Wgml2uYpDCj5LLCrQQ9ChdOMz8hz6SighhNlRpPkvPaet3OXxr/ytFMu7j7vv06CaEnuMMiY2aTWN1Imin9eHAylIqFHta/3gFfQSWt9jXM7owkBLKL7ATzhaAn+fjNupw== arxcruz@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct  9 05:01:22 np0005478303 python3[2175]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDS4Fn6k4deCnIlOtLWqZJyksbepjQt04j8Ed8CGx9EKkj0fKiAxiI4TadXQYPuNHMixZy4Nevjb6aDhL5Z906TfvNHKUrjrG7G26a0k8vdc61NEQ7FmcGMWRLwwc6ReDO7lFpzYKBMk4YqfWgBuGU/K6WLKiVW2cVvwIuGIaYrE1OiiX0iVUUk7KApXlDJMXn7qjSYynfO4mF629NIp8FJal38+Kv+HA+0QkE5Y2xXnzD4Lar5+keymiCHRntPppXHeLIRzbt0gxC7v3L72hpQ3BTBEzwHpeS8KY+SX1y5lRMN45thCHfJqGmARJREDjBvWG8JXOPmVIKQtZmVcD5b mandreou@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct  9 05:01:23 np0005478303 python3[2199]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC9MiLfy30deHA7xPOAlew5qUq3UP2gmRMYJi8PtkjFB20/DKeWwWNnkZPqP9AayruRoo51SIiVg870gbZE2jYl+Ncx/FYDe56JeC3ySZsXoAVkC9bP7gkOGqOmJjirvAgPMI7bogVz8i+66Q4Ar7OKTp3762G4IuWPPEg4ce4Y7lx9qWocZapHYq4cYKMxrOZ7SEbFSATBbe2bPZAPKTw8do/Eny+Hq/LkHFhIeyra6cqTFQYShr+zPln0Cr+ro/pDX3bB+1ubFgTpjpkkkQsLhDfR6cCdCWM2lgnS3BTtYj5Ct9/JRPR5YOphqZz+uB+OEu2IL68hmU9vNTth1KeX rlandy@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct  9 05:01:23 np0005478303 python3[2223]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIFCbgz8gdERiJlk2IKOtkjQxEXejrio6ZYMJAVJYpOIp raukadah@gmail.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct  9 05:01:23 np0005478303 python3[2247]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIBqb3Q/9uDf4LmihQ7xeJ9gA/STIQUFPSfyyV0m8AoQi bshewale@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct  9 05:01:23 np0005478303 python3[2271]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC0I8QqQx0Az2ysJt2JuffucLijhBqnsXKEIx5GyHwxVULROa8VtNFXUDH6ZKZavhiMcmfHB2+TBTda+lDP4FldYj06dGmzCY+IYGa+uDRdxHNGYjvCfLFcmLlzRK6fNbTcui+KlUFUdKe0fb9CRoGKyhlJD5GRkM1Dv+Yb6Bj+RNnmm1fVGYxzmrD2utvffYEb0SZGWxq2R9gefx1q/3wCGjeqvufEV+AskPhVGc5T7t9eyZ4qmslkLh1/nMuaIBFcr9AUACRajsvk6mXrAN1g3HlBf2gQlhi1UEyfbqIQvzzFtsbLDlSum/KmKjy818GzvWjERfQ0VkGzCd9bSLVL dviroel@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct  9 05:01:23 np0005478303 python3[2295]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDLOQd4ZLtkZXQGY6UwAr/06ppWQK4fDO3HaqxPk98csyOCBXsliSKK39Bso828+5srIXiW7aI6aC9P5mwi4mUZlGPfJlQbfrcGvY+b/SocuvaGK+1RrHLoJCT52LBhwgrzlXio2jeksZeein8iaTrhsPrOAs7KggIL/rB9hEiB3NaOPWhhoCP4vlW6MEMExGcqB/1FVxXFBPnLkEyW0Lk7ycVflZl2ocRxbfjZi0+tI1Wlinp8PvSQSc/WVrAcDgKjc/mB4ODPOyYy3G8FHgfMsrXSDEyjBKgLKMsdCrAUcqJQWjkqXleXSYOV4q3pzL+9umK+q/e3P/bIoSFQzmJKTU1eDfuvPXmow9F5H54fii/Da7ezlMJ+wPGHJrRAkmzvMbALy7xwswLhZMkOGNtRcPqaKYRmIBKpw3o6bCTtcNUHOtOQnzwY8JzrM2eBWJBXAANYw+9/ho80JIiwhg29CFNpVBuHbql2YxJQNrnl90guN65rYNpDxdIluweyUf8= anbanerj@kaermorhen manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct  9 05:01:24 np0005478303 python3[2319]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC3VwV8Im9kRm49lt3tM36hj4Zv27FxGo4C1Q/0jqhzFmHY7RHbmeRr8ObhwWoHjXSozKWg8FL5ER0z3hTwL0W6lez3sL7hUaCmSuZmG5Hnl3x4vTSxDI9JZ/Y65rtYiiWQo2fC5xJhU/4+0e5e/pseCm8cKRSu+SaxhO+sd6FDojA2x1BzOzKiQRDy/1zWGp/cZkxcEuB1wHI5LMzN03c67vmbu+fhZRAUO4dQkvcnj2LrhQtpa+ytvnSjr8icMDosf1OsbSffwZFyHB/hfWGAfe0eIeSA2XPraxiPknXxiPKx2MJsaUTYbsZcm3EjFdHBBMumw5rBI74zLrMRvCO9GwBEmGT4rFng1nP+yw5DB8sn2zqpOsPg1LYRwCPOUveC13P6pgsZZPh812e8v5EKnETct+5XI3dVpdw6CnNiLwAyVAF15DJvBGT/u1k0Myg/bQn+Gv9k2MSj6LvQmf6WbZu2Wgjm30z3FyCneBqTL7mLF19YXzeC0ufHz5pnO1E= dasm@fedora manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct  9 05:01:24 np0005478303 python3[2343]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIHUnwjB20UKmsSed9X73eGNV5AOEFccQ3NYrRW776pEk cjeanner manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct  9 05:01:24 np0005478303 python3[2367]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDercCMGn8rW1C4P67tHgtflPdTeXlpyUJYH+6XDd2lR jgilaber@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct  9 05:01:24 np0005478303 python3[2391]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIAMI6kkg9Wg0sG7jIJmyZemEBwUn1yzNpQQd3gnulOmZ adrianfuscoarnejo@gmail.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct  9 05:01:24 np0005478303 python3[2415]: ansible-authorized_key Invoked with user=zuul state=present key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBPijwpQu/3jhhhBZInXNOLEH57DrknPc3PLbsRvYyJIFzwYjX+WD4a7+nGnMYS42MuZk6TJcVqgnqofVx4isoD4= ramishra@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct  9 05:01:25 np0005478303 python3[2439]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIGpU/BepK3qX0NRf5Np+dOBDqzQEefhNrw2DCZaH3uWW rebtoor@monolith manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct  9 05:01:25 np0005478303 python3[2463]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDK0iKdi8jQTpQrDdLVH/AAgLVYyTXF7AQ1gjc/5uT3t ykarel@yatinkarel manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct  9 05:01:25 np0005478303 python3[2487]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIF/V/cLotA6LZeO32VL45Hd78skuA2lJA425Sm2LlQeZ fmount@horcrux manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct  9 05:01:25 np0005478303 python3[2511]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDa7QCjuDMVmRPo1rREbGwzYeBCYVN+Ou/3WKXZEC6Sr manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct  9 05:01:25 np0005478303 python3[2535]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQCfNtF7NvKl915TGsGGoseUb06Hj8L/S4toWf0hExeY+F00woL6NvBlJD0nDct+P5a22I4EhvoQCRQ8reaPCm1lybR3uiRIJsj+8zkVvLwby9LXzfZorlNG9ofjd00FEmB09uW/YvTl6Q9XwwwX6tInzIOv3TMqTHHGOL74ibbj8J/FJR0cFEyj0z4WQRvtkh32xAHl83gbuINryMt0sqRI+clj2381NKL55DRLQrVw0gsfqqxiHAnXg21qWmc4J+b9e9kiuAFQjcjwTVkwJCcg3xbPwC/qokYRby/Y5S40UUd7/jEARGXT7RZgpzTuDd1oZiCVrnrqJNPaMNdVv5MLeFdf1B7iIe5aa/fGouX7AO4SdKhZUdnJmCFAGvjC6S3JMZ2wAcUl+OHnssfmdj7XL50cLo27vjuzMtLAgSqi6N99m92WCF2s8J9aVzszX7Xz9OKZCeGsiVJp3/NdABKzSEAyM9xBD/5Vho894Sav+otpySHe3p6RUTgbB5Zu8VyZRZ/UtB3ueXxyo764yrc6qWIDqrehm84Xm9g+/jpIBzGPl07NUNJpdt/6Sgf9RIKXw/7XypO5yZfUcuFNGTxLfqjTNrtgLZNcjfav6sSdVXVcMPL//XNuRdKmVFaO76eV/oGMQGr1fGcCD+N+CpI7+Q+fCNB6VFWG4nZFuI/Iuw== averdagu@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct  9 05:01:26 np0005478303 python3[2559]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDq8l27xI+QlQVdS4djp9ogSoyrNE2+Ox6vKPdhSNL1J3PE5w+WCSvMz9A5gnNuH810zwbekEApbxTze/gLQJwBHA52CChfURpXrFaxY7ePXRElwKAL3mJfzBWY/c5jnNL9TCVmFJTGZkFZP3Nh+BMgZvL6xBkt3WKm6Uq18qzd9XeKcZusrA+O+uLv1fVeQnadY9RIqOCyeFYCzLWrUfTyE8x/XG0hAWIM7qpnF2cALQS2h9n4hW5ybiUN790H08wf9hFwEf5nxY9Z9dVkPFQiTSGKNBzmnCXU9skxS/xhpFjJ5duGSZdtAHe9O+nGZm9c67hxgtf8e5PDuqAdXEv2cf6e3VBAt+Bz8EKI3yosTj0oZHfwr42Yzb1l/SKy14Rggsrc9KAQlrGXan6+u2jcQqqx7l+SWmnpFiWTV9u5cWj2IgOhApOitmRBPYqk9rE2usfO0hLn/Pj/R/Nau4803e1/EikdLE7Ps95s9mX5jRDjAoUa2JwFF5RsVFyL910= ashigupt@ashigupt.remote.csb manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct  9 05:01:26 np0005478303 python3[2583]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIOKLl0NYKwoZ/JY5KeZU8VwRAggeOxqQJeoqp3dsAaY9 manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct  9 05:01:26 np0005478303 python3[2607]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIASASQOH2BcOyLKuuDOdWZlPi2orcjcA8q4400T73DLH evallesp@fedora manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct  9 05:01:26 np0005478303 python3[2631]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAILeBWlamUph+jRKV2qrx1PGU7vWuGIt5+z9k96I8WehW amsinha@amsinha-mac manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct  9 05:01:26 np0005478303 python3[2655]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIANvVgvJBlK3gb1yz5uef/JqIGq4HLEmY2dYA8e37swb morenod@redhat-laptop manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct  9 05:01:27 np0005478303 python3[2679]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQDZdI7t1cxYx65heVI24HTV4F7oQLW1zyfxHreL2TIJKxjyrUUKIFEUmTutcBlJRLNT2Eoix6x1sOw9YrchloCLcn//SGfTElr9mSc5jbjb7QXEU+zJMhtxyEJ1Po3CUGnj7ckiIXw7wcawZtrEOAQ9pH3ExYCJcEMiyNjRQZCxT3tPK+S4B95EWh5Fsrz9CkwpjNRPPH7LigCeQTM3Wc7r97utAslBUUvYceDSLA7rMgkitJE38b7rZBeYzsGQ8YYUBjTCtehqQXxCRjizbHWaaZkBU+N3zkKB6n/iCNGIO690NK7A/qb6msTijiz1PeuM8ThOsi9qXnbX5v0PoTpcFSojV7NHAQ71f0XXuS43FhZctT+Dcx44dT8Fb5vJu2cJGrk+qF8ZgJYNpRS7gPg0EG2EqjK7JMf9ULdjSu0r+KlqIAyLvtzT4eOnQipoKlb/WG5D/0ohKv7OMQ352ggfkBFIQsRXyyTCT98Ft9juqPuahi3CAQmP4H9dyE+7+Kz437PEtsxLmfm6naNmWi7Ee1DqWPwS8rEajsm4sNM4wW9gdBboJQtc0uZw0DfLj1I9r3Mc8Ol0jYtz0yNQDSzVLrGCaJlC311trU70tZ+ZkAVV6Mn8lOhSbj1cK0lvSr6ZK4dgqGl3I1eTZJJhbLNdg7UOVaiRx9543+C/p/As7w== brjackma@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct  9 05:01:27 np0005478303 python3[2703]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIKwedoZ0TWPJX/z/4TAbO/kKcDZOQVgRH0hAqrL5UCI1 vcastell@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct  9 05:01:27 np0005478303 python3[2727]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIEmv8sE8GCk6ZTPIqF0FQrttBdL3mq7rCm/IJy0xDFh7 michburk@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct  9 05:01:27 np0005478303 python3[2751]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAICy6GpGEtwevXEEn4mmLR5lmSLe23dGgAvzkB9DMNbkf rsafrono@rsafrono manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct  9 05:01:30 np0005478303 python3[2777]: ansible-community.general.timezone Invoked with name=UTC hwclock=None
Oct  9 05:01:30 np0005478303 systemd[1]: Starting Time & Date Service...
Oct  9 05:01:30 np0005478303 systemd[1]: Started Time & Date Service.
Oct  9 05:01:30 np0005478303 systemd-timedated[2779]: Changed time zone to 'UTC' (UTC).
Oct  9 05:01:30 np0005478303 python3[2808]: ansible-file Invoked with path=/etc/nodepool state=directory mode=511 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 05:01:30 np0005478303 python3[2884]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/sub_nodes follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct  9 05:01:31 np0005478303 python3[2955]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/sub_nodes src=/home/zuul/.ansible/tmp/ansible-tmp-1760000490.6554585-253-80198276351615/source _original_basename=tmpgrzmcju2 follow=False checksum=da39a3ee5e6b4b0d3255bfef95601890afd80709 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 05:01:31 np0005478303 python3[3055]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/sub_nodes_private follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct  9 05:01:31 np0005478303 python3[3126]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/sub_nodes_private src=/home/zuul/.ansible/tmp/ansible-tmp-1760000491.2372131-302-137551980605681/source _original_basename=tmp31x50rya follow=False checksum=da39a3ee5e6b4b0d3255bfef95601890afd80709 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 05:01:32 np0005478303 python3[3228]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/node_private follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct  9 05:01:32 np0005478303 python3[3301]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/node_private src=/home/zuul/.ansible/tmp/ansible-tmp-1760000492.1120067-382-149210164415392/source _original_basename=tmpd_yazw3x follow=False checksum=a89700e4c48a4c62ffc6b2e1dd207b0e445fb30f backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 05:01:32 np0005478303 python3[3349]: ansible-ansible.legacy.command Invoked with _raw_params=cp .ssh/id_rsa /etc/nodepool/id_rsa zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  9 05:01:33 np0005478303 python3[3375]: ansible-ansible.legacy.command Invoked with _raw_params=cp .ssh/id_rsa.pub /etc/nodepool/id_rsa.pub zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  9 05:01:33 np0005478303 python3[3455]: ansible-ansible.legacy.stat Invoked with path=/etc/sudoers.d/zuul-sudo-grep follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct  9 05:01:33 np0005478303 python3[3528]: ansible-ansible.legacy.copy Invoked with dest=/etc/sudoers.d/zuul-sudo-grep mode=288 src=/home/zuul/.ansible/tmp/ansible-tmp-1760000493.2728539-452-165680704209217/source _original_basename=tmpmjfke0tc follow=False checksum=bdca1a77493d00fb51567671791f4aa30f66c2f0 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 05:01:34 np0005478303 python3[3579]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/visudo -c zuul_log_id=fa163e08-49e2-22a3-075b-00000000001f-1-compute1 zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  9 05:01:34 np0005478303 python3[3607]: ansible-ansible.legacy.command Invoked with executable=/bin/bash _raw_params=env#012 _uses_shell=True zuul_log_id=fa163e08-49e2-22a3-075b-000000000020-1-compute1 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None creates=None removes=None stdin=None
Oct  9 05:01:35 np0005478303 irqbalance[740]: Cannot change IRQ 43 affinity: Operation not permitted
Oct  9 05:01:35 np0005478303 irqbalance[740]: IRQ 43 affinity is now unmanaged
Oct  9 05:01:36 np0005478303 python3[3635]: ansible-file Invoked with path=/home/zuul/workspace state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 05:01:51 np0005478303 python3[3661]: ansible-ansible.builtin.file Invoked with path=/etc/ci/env state=directory mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 05:02:00 np0005478303 systemd[1]: systemd-timedated.service: Deactivated successfully.
Oct  9 05:02:37 np0005478303 kernel: pci 0000:07:00.0: [1af4:1041] type 00 class 0x020000 PCIe Endpoint
Oct  9 05:02:37 np0005478303 kernel: pci 0000:07:00.0: BAR 1 [mem 0x00000000-0x00000fff]
Oct  9 05:02:37 np0005478303 kernel: pci 0000:07:00.0: BAR 4 [mem 0x00000000-0x00003fff 64bit pref]
Oct  9 05:02:37 np0005478303 kernel: pci 0000:07:00.0: ROM [mem 0x00000000-0x0003ffff pref]
Oct  9 05:02:37 np0005478303 kernel: pci 0000:07:00.0: ROM [mem 0xfe000000-0xfe03ffff pref]: assigned
Oct  9 05:02:37 np0005478303 kernel: pci 0000:07:00.0: BAR 4 [mem 0xfb600000-0xfb603fff 64bit pref]: assigned
Oct  9 05:02:37 np0005478303 kernel: pci 0000:07:00.0: BAR 1 [mem 0xfe040000-0xfe040fff]: assigned
Oct  9 05:02:37 np0005478303 kernel: virtio-pci 0000:07:00.0: enabling device (0000 -> 0002)
Oct  9 05:02:37 np0005478303 NetworkManager[810]: <info>  [1760000557.7015] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3)
Oct  9 05:02:37 np0005478303 systemd-udevd[3664]: Network interface NamePolicy= disabled on kernel command line.
Oct  9 05:02:37 np0005478303 NetworkManager[810]: <info>  [1760000557.7248] device (eth1): state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct  9 05:02:37 np0005478303 NetworkManager[810]: <info>  [1760000557.7264] settings: (eth1): created default wired connection 'Wired connection 1'
Oct  9 05:02:37 np0005478303 NetworkManager[810]: <info>  [1760000557.7266] device (eth1): carrier: link connected
Oct  9 05:02:37 np0005478303 NetworkManager[810]: <info>  [1760000557.7267] device (eth1): state change: unavailable -> disconnected (reason 'carrier-changed', managed-type: 'full')
Oct  9 05:02:37 np0005478303 NetworkManager[810]: <info>  [1760000557.7271] policy: auto-activating connection 'Wired connection 1' (168464f5-1301-3221-87ae-be62d8e7a219)
Oct  9 05:02:37 np0005478303 NetworkManager[810]: <info>  [1760000557.7274] device (eth1): Activation: starting connection 'Wired connection 1' (168464f5-1301-3221-87ae-be62d8e7a219)
Oct  9 05:02:37 np0005478303 NetworkManager[810]: <info>  [1760000557.7274] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct  9 05:02:37 np0005478303 NetworkManager[810]: <info>  [1760000557.7275] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'full')
Oct  9 05:02:37 np0005478303 NetworkManager[810]: <info>  [1760000557.7278] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct  9 05:02:37 np0005478303 NetworkManager[810]: <info>  [1760000557.7281] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds)
Oct  9 05:02:38 np0005478303 python3[3691]: ansible-ansible.legacy.command Invoked with _raw_params=ip -j link zuul_log_id=fa163e08-49e2-3fb7-b15f-00000000018f-0-controller zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  9 05:02:47 np0005478303 python3[3771]: ansible-ansible.legacy.stat Invoked with path=/etc/NetworkManager/system-connections/ci-private-network.nmconnection follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct  9 05:02:48 np0005478303 python3[3844]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1760000567.7743733-161-23048714921307/source dest=/etc/NetworkManager/system-connections/ci-private-network.nmconnection mode=0600 owner=root group=root follow=False _original_basename=bootstrap-ci-network-nm-connection.nmconnection.j2 checksum=712b3da493efdef94cbfd49f78c965ed6b1186cc backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 05:02:48 np0005478303 python3[3894]: ansible-ansible.builtin.systemd Invoked with name=NetworkManager state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct  9 05:02:48 np0005478303 systemd[1]: NetworkManager-wait-online.service: Deactivated successfully.
Oct  9 05:02:48 np0005478303 systemd[1]: Stopped Network Manager Wait Online.
Oct  9 05:02:48 np0005478303 systemd[1]: Stopping Network Manager Wait Online...
Oct  9 05:02:48 np0005478303 NetworkManager[810]: <info>  [1760000568.5776] caught SIGTERM, shutting down normally.
Oct  9 05:02:48 np0005478303 systemd[1]: Stopping Network Manager...
Oct  9 05:02:48 np0005478303 NetworkManager[810]: <info>  [1760000568.5781] dhcp4 (eth0): canceled DHCP transaction
Oct  9 05:02:48 np0005478303 NetworkManager[810]: <info>  [1760000568.5781] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Oct  9 05:02:48 np0005478303 NetworkManager[810]: <info>  [1760000568.5781] dhcp4 (eth0): state changed no lease
Oct  9 05:02:48 np0005478303 NetworkManager[810]: <info>  [1760000568.5782] dhcp6 (eth0): canceled DHCP transaction
Oct  9 05:02:48 np0005478303 NetworkManager[810]: <info>  [1760000568.5782] dhcp6 (eth0): activation: beginning transaction (timeout in 45 seconds)
Oct  9 05:02:48 np0005478303 NetworkManager[810]: <info>  [1760000568.5782] dhcp6 (eth0): state changed no lease
Oct  9 05:02:48 np0005478303 NetworkManager[810]: <info>  [1760000568.5783] manager: NetworkManager state is now CONNECTING
Oct  9 05:02:48 np0005478303 NetworkManager[810]: <info>  [1760000568.5937] dhcp4 (eth1): canceled DHCP transaction
Oct  9 05:02:48 np0005478303 NetworkManager[810]: <info>  [1760000568.5937] dhcp4 (eth1): state changed no lease
Oct  9 05:02:48 np0005478303 NetworkManager[810]: <info>  [1760000568.5959] exiting (success)
Oct  9 05:02:48 np0005478303 systemd[1]: Starting Network Manager Script Dispatcher Service...
Oct  9 05:02:48 np0005478303 systemd[1]: Started Network Manager Script Dispatcher Service.
Oct  9 05:02:48 np0005478303 systemd[1]: NetworkManager.service: Deactivated successfully.
Oct  9 05:02:48 np0005478303 systemd[1]: Stopped Network Manager.
Oct  9 05:02:48 np0005478303 systemd[1]: Starting Network Manager...
Oct  9 05:02:48 np0005478303 NetworkManager[3905]: <info>  [1760000568.6283] NetworkManager (version 1.54.1-1.el9) is starting... (after a restart, boot:e0b7fcdd-1586-415d-8058-c87bd65cc6fe)
Oct  9 05:02:48 np0005478303 NetworkManager[3905]: <info>  [1760000568.6284] Read config: /etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf, /etc/NetworkManager/conf.d/99-cloud-init.conf
Oct  9 05:02:48 np0005478303 NetworkManager[3905]: <info>  [1760000568.6322] manager[0x5579ab060090]: monitoring kernel firmware directory '/lib/firmware'.
Oct  9 05:02:48 np0005478303 systemd[1]: Starting Hostname Service...
Oct  9 05:02:48 np0005478303 systemd[1]: Started Hostname Service.
Oct  9 05:02:48 np0005478303 NetworkManager[3905]: <info>  [1760000568.6876] hostname: hostname: using hostnamed
Oct  9 05:02:48 np0005478303 NetworkManager[3905]: <info>  [1760000568.6876] hostname: static hostname changed from (none) to "np0005478303"
Oct  9 05:02:48 np0005478303 NetworkManager[3905]: <info>  [1760000568.6879] dns-mgr: init: dns=none,systemd-resolved rc-manager=unmanaged
Oct  9 05:02:48 np0005478303 NetworkManager[3905]: <info>  [1760000568.6882] manager[0x5579ab060090]: rfkill: Wi-Fi hardware radio set enabled
Oct  9 05:02:48 np0005478303 NetworkManager[3905]: <info>  [1760000568.6882] manager[0x5579ab060090]: rfkill: WWAN hardware radio set enabled
Oct  9 05:02:48 np0005478303 NetworkManager[3905]: <info>  [1760000568.6900] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-device-plugin-team.so)
Oct  9 05:02:48 np0005478303 NetworkManager[3905]: <info>  [1760000568.6900] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Oct  9 05:02:48 np0005478303 NetworkManager[3905]: <info>  [1760000568.6901] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Oct  9 05:02:48 np0005478303 NetworkManager[3905]: <info>  [1760000568.6901] manager: Networking is enabled by state file
Oct  9 05:02:48 np0005478303 NetworkManager[3905]: <info>  [1760000568.6903] settings: Loaded settings plugin: keyfile (internal)
Oct  9 05:02:48 np0005478303 NetworkManager[3905]: <info>  [1760000568.6906] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-settings-plugin-ifcfg-rh.so")
Oct  9 05:02:48 np0005478303 NetworkManager[3905]: <info>  [1760000568.6924] Warning: the ifcfg-rh plugin is deprecated, please migrate connections to the keyfile format using "nmcli connection migrate"
Oct  9 05:02:48 np0005478303 NetworkManager[3905]: <info>  [1760000568.6930] dhcp: init: Using DHCP client 'internal'
Oct  9 05:02:48 np0005478303 NetworkManager[3905]: <info>  [1760000568.6932] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1)
Oct  9 05:02:48 np0005478303 NetworkManager[3905]: <info>  [1760000568.6936] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  9 05:02:48 np0005478303 NetworkManager[3905]: <info>  [1760000568.6940] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Oct  9 05:02:48 np0005478303 NetworkManager[3905]: <info>  [1760000568.6945] device (lo): Activation: starting connection 'lo' (536fd1ae-144f-4da0-bdc6-b373fcef3967)
Oct  9 05:02:48 np0005478303 NetworkManager[3905]: <info>  [1760000568.6950] device (eth0): carrier: link connected
Oct  9 05:02:48 np0005478303 NetworkManager[3905]: <info>  [1760000568.6953] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
Oct  9 05:02:48 np0005478303 NetworkManager[3905]: <info>  [1760000568.6957] manager: (eth0): assume: will attempt to assume matching connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03) (indicated)
Oct  9 05:02:48 np0005478303 NetworkManager[3905]: <info>  [1760000568.6957] device (eth0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Oct  9 05:02:48 np0005478303 NetworkManager[3905]: <info>  [1760000568.6961] device (eth0): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Oct  9 05:02:48 np0005478303 NetworkManager[3905]: <info>  [1760000568.6966] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Oct  9 05:02:48 np0005478303 NetworkManager[3905]: <info>  [1760000568.6970] device (eth1): carrier: link connected
Oct  9 05:02:48 np0005478303 NetworkManager[3905]: <info>  [1760000568.6973] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3)
Oct  9 05:02:48 np0005478303 NetworkManager[3905]: <info>  [1760000568.6977] manager: (eth1): assume: will attempt to assume matching connection 'Wired connection 1' (168464f5-1301-3221-87ae-be62d8e7a219) (indicated)
Oct  9 05:02:48 np0005478303 NetworkManager[3905]: <info>  [1760000568.6977] device (eth1): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Oct  9 05:02:48 np0005478303 NetworkManager[3905]: <info>  [1760000568.6980] device (eth1): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Oct  9 05:02:48 np0005478303 NetworkManager[3905]: <info>  [1760000568.6985] device (eth1): Activation: starting connection 'Wired connection 1' (168464f5-1301-3221-87ae-be62d8e7a219)
Oct  9 05:02:48 np0005478303 NetworkManager[3905]: <info>  [1760000568.6989] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Oct  9 05:02:48 np0005478303 systemd[1]: Started Network Manager.
Oct  9 05:02:48 np0005478303 NetworkManager[3905]: <info>  [1760000568.6992] device (lo): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Oct  9 05:02:48 np0005478303 NetworkManager[3905]: <info>  [1760000568.6993] device (lo): state change: prepare -> config (reason 'none', managed-type: 'external')
Oct  9 05:02:48 np0005478303 NetworkManager[3905]: <info>  [1760000568.6996] device (lo): state change: config -> ip-config (reason 'none', managed-type: 'external')
Oct  9 05:02:48 np0005478303 NetworkManager[3905]: <info>  [1760000568.6998] device (eth0): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Oct  9 05:02:48 np0005478303 NetworkManager[3905]: <info>  [1760000568.6999] device (eth0): state change: prepare -> config (reason 'none', managed-type: 'assume')
Oct  9 05:02:48 np0005478303 NetworkManager[3905]: <info>  [1760000568.7000] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Oct  9 05:02:48 np0005478303 NetworkManager[3905]: <info>  [1760000568.7002] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'assume')
Oct  9 05:02:48 np0005478303 NetworkManager[3905]: <info>  [1760000568.7003] device (lo): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Oct  9 05:02:48 np0005478303 NetworkManager[3905]: <info>  [1760000568.7009] device (eth0): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Oct  9 05:02:48 np0005478303 NetworkManager[3905]: <info>  [1760000568.7011] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Oct  9 05:02:48 np0005478303 NetworkManager[3905]: <info>  [1760000568.7012] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Oct  9 05:02:48 np0005478303 NetworkManager[3905]: <info>  [1760000568.7014] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds)
Oct  9 05:02:48 np0005478303 NetworkManager[3905]: <info>  [1760000568.7020] policy: set 'System eth0' (eth0) as default for IPv6 routing and DNS
Oct  9 05:02:48 np0005478303 NetworkManager[3905]: <info>  [1760000568.7024] dhcp6 (eth0): activation: beginning transaction (timeout in 45 seconds)
Oct  9 05:02:48 np0005478303 NetworkManager[3905]: <info>  [1760000568.7041] device (lo): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Oct  9 05:02:48 np0005478303 NetworkManager[3905]: <info>  [1760000568.7042] device (lo): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Oct  9 05:02:48 np0005478303 NetworkManager[3905]: <info>  [1760000568.7045] device (lo): Activation: successful, device activated.
Oct  9 05:02:48 np0005478303 NetworkManager[3905]: <info>  [1760000568.7049] dhcp4 (eth0): state changed new lease, address=192.168.26.45
Oct  9 05:02:48 np0005478303 NetworkManager[3905]: <info>  [1760000568.7053] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS
Oct  9 05:02:48 np0005478303 systemd[1]: Starting Network Manager Wait Online...
Oct  9 05:02:48 np0005478303 python3[3966]: ansible-ansible.legacy.command Invoked with _raw_params=ip route zuul_log_id=fa163e08-49e2-3fb7-b15f-0000000000c8-0-controller zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  9 05:02:49 np0005478303 NetworkManager[3905]: <info>  [1760000569.8057] dhcp6 (eth0): state changed new lease, address=2001:db8::24
Oct  9 05:02:49 np0005478303 NetworkManager[3905]: <info>  [1760000569.8065] device (eth0): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Oct  9 05:02:49 np0005478303 NetworkManager[3905]: <info>  [1760000569.8086] device (eth0): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Oct  9 05:02:49 np0005478303 NetworkManager[3905]: <info>  [1760000569.8087] device (eth0): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Oct  9 05:02:49 np0005478303 NetworkManager[3905]: <info>  [1760000569.8090] manager: NetworkManager state is now CONNECTED_SITE
Oct  9 05:02:49 np0005478303 NetworkManager[3905]: <info>  [1760000569.8092] device (eth0): Activation: successful, device activated.
Oct  9 05:02:49 np0005478303 NetworkManager[3905]: <info>  [1760000569.8095] manager: NetworkManager state is now CONNECTED_GLOBAL
Oct  9 05:02:59 np0005478303 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Oct  9 05:03:18 np0005478303 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Oct  9 05:03:20 np0005478303 systemd[1030]: Starting Mark boot as successful...
Oct  9 05:03:20 np0005478303 systemd[1030]: Finished Mark boot as successful.
Oct  9 05:03:34 np0005478303 NetworkManager[3905]: <info>  [1760000614.3655] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Oct  9 05:03:34 np0005478303 systemd[1]: Starting Network Manager Script Dispatcher Service...
Oct  9 05:03:34 np0005478303 systemd[1]: Started Network Manager Script Dispatcher Service.
Oct  9 05:03:34 np0005478303 NetworkManager[3905]: <info>  [1760000614.3866] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Oct  9 05:03:34 np0005478303 NetworkManager[3905]: <info>  [1760000614.3867] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Oct  9 05:03:34 np0005478303 NetworkManager[3905]: <info>  [1760000614.3871] device (eth1): Activation: successful, device activated.
Oct  9 05:03:34 np0005478303 NetworkManager[3905]: <info>  [1760000614.3874] manager: startup complete
Oct  9 05:03:34 np0005478303 NetworkManager[3905]: <info>  [1760000614.3875] device (eth1): state change: activated -> failed (reason 'ip-config-unavailable', managed-type: 'full')
Oct  9 05:03:34 np0005478303 NetworkManager[3905]: <warn>  [1760000614.3877] device (eth1): Activation: failed for connection 'Wired connection 1'
Oct  9 05:03:34 np0005478303 NetworkManager[3905]: <info>  [1760000614.3887] device (eth1): state change: failed -> disconnected (reason 'none', managed-type: 'full')
Oct  9 05:03:34 np0005478303 systemd[1]: Finished Network Manager Wait Online.
Oct  9 05:03:34 np0005478303 NetworkManager[3905]: <info>  [1760000614.3956] dhcp4 (eth1): canceled DHCP transaction
Oct  9 05:03:34 np0005478303 NetworkManager[3905]: <info>  [1760000614.3956] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds)
Oct  9 05:03:34 np0005478303 NetworkManager[3905]: <info>  [1760000614.3956] dhcp4 (eth1): state changed no lease
Oct  9 05:03:34 np0005478303 NetworkManager[3905]: <info>  [1760000614.3964] policy: auto-activating connection 'ci-private-network' (66d16662-8a58-5f35-9b69-4caa739b599b)
Oct  9 05:03:34 np0005478303 NetworkManager[3905]: <info>  [1760000614.3967] device (eth1): Activation: starting connection 'ci-private-network' (66d16662-8a58-5f35-9b69-4caa739b599b)
Oct  9 05:03:34 np0005478303 NetworkManager[3905]: <info>  [1760000614.3967] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct  9 05:03:34 np0005478303 NetworkManager[3905]: <info>  [1760000614.3969] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'full')
Oct  9 05:03:34 np0005478303 NetworkManager[3905]: <info>  [1760000614.3972] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct  9 05:03:34 np0005478303 NetworkManager[3905]: <info>  [1760000614.3977] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct  9 05:03:34 np0005478303 NetworkManager[3905]: <info>  [1760000614.3995] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct  9 05:03:34 np0005478303 NetworkManager[3905]: <info>  [1760000614.3997] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct  9 05:03:34 np0005478303 NetworkManager[3905]: <info>  [1760000614.4000] device (eth1): Activation: successful, device activated.
Oct  9 05:03:44 np0005478303 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Oct  9 05:03:49 np0005478303 systemd-logind[745]: Session 1 logged out. Waiting for processes to exit.
Oct  9 05:03:55 np0005478303 systemd-logind[745]: New session 3 of user zuul.
Oct  9 05:03:55 np0005478303 systemd[1]: Started Session 3 of User zuul.
Oct  9 05:03:55 np0005478303 python3[4095]: ansible-ansible.legacy.stat Invoked with path=/etc/ci/env/networking-info.yml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct  9 05:03:55 np0005478303 python3[4168]: ansible-ansible.legacy.copy Invoked with dest=/etc/ci/env/networking-info.yml owner=root group=root mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1760000635.1838768-379-74234479220539/source _original_basename=tmpvebl56l2 follow=False checksum=26ebf755fae5a80bfc5f098245c8908b029e5df9 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 05:03:57 np0005478303 systemd[1]: session-3.scope: Deactivated successfully.
Oct  9 05:03:57 np0005478303 systemd-logind[745]: Session 3 logged out. Waiting for processes to exit.
Oct  9 05:03:57 np0005478303 systemd-logind[745]: Removed session 3.
Oct  9 05:06:20 np0005478303 systemd[1030]: Created slice User Background Tasks Slice.
Oct  9 05:06:20 np0005478303 systemd[1030]: Starting Cleanup of User's Temporary Files and Directories...
Oct  9 05:06:20 np0005478303 systemd[1030]: Finished Cleanup of User's Temporary Files and Directories.
Oct  9 05:08:54 np0005478303 systemd-logind[745]: New session 4 of user zuul.
Oct  9 05:08:54 np0005478303 systemd[1]: Started Session 4 of User zuul.
Oct  9 05:08:54 np0005478303 python3[4227]: ansible-ansible.legacy.command Invoked with _raw_params=lsblk -nd -o MAJ:MIN /dev/vda#012 _uses_shell=True zuul_log_id=fa163e08-49e2-2dac-3627-000000001cfc-1-compute1 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  9 05:08:54 np0005478303 python3[4256]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/init.scope state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 05:08:55 np0005478303 python3[4282]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/machine.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 05:08:55 np0005478303 python3[4308]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/system.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 05:08:55 np0005478303 python3[4334]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/user.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 05:08:56 np0005478303 python3[4360]: ansible-ansible.builtin.lineinfile Invoked with path=/etc/systemd/system.conf regexp=^#DefaultIOAccounting=no line=DefaultIOAccounting=yes state=present backrefs=False create=False backup=False firstmatch=False unsafe_writes=False search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 05:08:56 np0005478303 python3[4360]: ansible-ansible.builtin.lineinfile [WARNING] Module remote_tmp /root/.ansible/tmp did not exist and was created with a mode of 0700, this may cause issues when running as another user. To avoid this, create the remote_tmp dir with the correct permissions manually
Oct  9 05:08:56 np0005478303 python3[4386]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Oct  9 05:08:56 np0005478303 systemd[1]: Reloading.
Oct  9 05:08:56 np0005478303 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  9 05:08:58 np0005478303 python3[4442]: ansible-ansible.builtin.wait_for Invoked with path=/sys/fs/cgroup/system.slice/io.max state=present timeout=30 host=127.0.0.1 connect_timeout=5 delay=0 active_connection_states=['ESTABLISHED', 'FIN_WAIT1', 'FIN_WAIT2', 'SYN_RECV', 'SYN_SENT', 'TIME_WAIT'] sleep=1 port=None search_regex=None exclude_hosts=None msg=None
Oct  9 05:08:58 np0005478303 python3[4468]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/init.scope/io.max#012 _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  9 05:08:58 np0005478303 python3[4496]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/machine.slice/io.max#012 _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  9 05:08:58 np0005478303 python3[4524]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/system.slice/io.max#012 _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  9 05:08:59 np0005478303 python3[4552]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/user.slice/io.max#012 _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  9 05:08:59 np0005478303 python3[4579]: ansible-ansible.legacy.command Invoked with _raw_params=echo "init";    cat /sys/fs/cgroup/init.scope/io.max; echo "machine"; cat /sys/fs/cgroup/machine.slice/io.max; echo "system";  cat /sys/fs/cgroup/system.slice/io.max; echo "user";    cat /sys/fs/cgroup/user.slice/io.max;#012 _uses_shell=True zuul_log_id=fa163e08-49e2-2dac-3627-000000001d02-1-compute1 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  9 05:09:00 np0005478303 python3[4609]: ansible-ansible.builtin.stat Invoked with path=/sys/fs/cgroup/kubepods.slice/io.max follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  9 05:09:02 np0005478303 systemd[1]: session-4.scope: Deactivated successfully.
Oct  9 05:09:02 np0005478303 systemd[1]: session-4.scope: Consumed 2.412s CPU time.
Oct  9 05:09:02 np0005478303 systemd-logind[745]: Session 4 logged out. Waiting for processes to exit.
Oct  9 05:09:02 np0005478303 systemd-logind[745]: Removed session 4.
Oct  9 05:09:04 np0005478303 systemd-logind[745]: New session 5 of user zuul.
Oct  9 05:09:04 np0005478303 systemd[1]: Started Session 5 of User zuul.
Oct  9 05:09:04 np0005478303 python3[4643]: ansible-ansible.legacy.dnf Invoked with name=['podman', 'buildah'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Oct  9 05:09:46 np0005478303 kernel: SELinux:  Converting 365 SID table entries...
Oct  9 05:09:46 np0005478303 kernel: SELinux:  policy capability network_peer_controls=1
Oct  9 05:09:46 np0005478303 kernel: SELinux:  policy capability open_perms=1
Oct  9 05:09:46 np0005478303 kernel: SELinux:  policy capability extended_socket_class=1
Oct  9 05:09:46 np0005478303 kernel: SELinux:  policy capability always_check_network=0
Oct  9 05:09:46 np0005478303 kernel: SELinux:  policy capability cgroup_seclabel=1
Oct  9 05:09:46 np0005478303 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Oct  9 05:09:46 np0005478303 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Oct  9 05:09:53 np0005478303 kernel: SELinux:  Converting 365 SID table entries...
Oct  9 05:09:53 np0005478303 kernel: SELinux:  policy capability network_peer_controls=1
Oct  9 05:09:53 np0005478303 kernel: SELinux:  policy capability open_perms=1
Oct  9 05:09:53 np0005478303 kernel: SELinux:  policy capability extended_socket_class=1
Oct  9 05:09:53 np0005478303 kernel: SELinux:  policy capability always_check_network=0
Oct  9 05:09:53 np0005478303 kernel: SELinux:  policy capability cgroup_seclabel=1
Oct  9 05:09:53 np0005478303 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Oct  9 05:09:53 np0005478303 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Oct  9 05:09:59 np0005478303 kernel: SELinux:  Converting 365 SID table entries...
Oct  9 05:09:59 np0005478303 kernel: SELinux:  policy capability network_peer_controls=1
Oct  9 05:09:59 np0005478303 kernel: SELinux:  policy capability open_perms=1
Oct  9 05:09:59 np0005478303 kernel: SELinux:  policy capability extended_socket_class=1
Oct  9 05:09:59 np0005478303 kernel: SELinux:  policy capability always_check_network=0
Oct  9 05:09:59 np0005478303 kernel: SELinux:  policy capability cgroup_seclabel=1
Oct  9 05:09:59 np0005478303 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Oct  9 05:09:59 np0005478303 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Oct  9 05:10:00 np0005478303 setsebool[4732]: The virt_use_nfs policy boolean was changed to 1 by root
Oct  9 05:10:00 np0005478303 setsebool[4732]: The virt_sandbox_use_all_caps policy boolean was changed to 1 by root
Oct  9 05:10:09 np0005478303 kernel: SELinux:  Converting 368 SID table entries...
Oct  9 05:10:09 np0005478303 kernel: SELinux:  policy capability network_peer_controls=1
Oct  9 05:10:09 np0005478303 kernel: SELinux:  policy capability open_perms=1
Oct  9 05:10:09 np0005478303 kernel: SELinux:  policy capability extended_socket_class=1
Oct  9 05:10:09 np0005478303 kernel: SELinux:  policy capability always_check_network=0
Oct  9 05:10:09 np0005478303 kernel: SELinux:  policy capability cgroup_seclabel=1
Oct  9 05:10:09 np0005478303 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Oct  9 05:10:09 np0005478303 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Oct  9 05:10:21 np0005478303 dbus-broker-launch[734]: avc:  op=load_policy lsm=selinux seqno=6 res=1
Oct  9 05:10:21 np0005478303 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Oct  9 05:10:21 np0005478303 systemd[1]: Starting man-db-cache-update.service...
Oct  9 05:10:21 np0005478303 systemd[1]: Reloading.
Oct  9 05:10:21 np0005478303 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  9 05:10:21 np0005478303 systemd[1]: Queuing reload/restart jobs for marked units…
Oct  9 05:10:22 np0005478303 systemd[1]: Starting PackageKit Daemon...
Oct  9 05:10:22 np0005478303 systemd[1]: Starting Authorization Manager...
Oct  9 05:10:22 np0005478303 polkitd[6643]: Started polkitd version 0.117
Oct  9 05:10:22 np0005478303 systemd[1]: Started Authorization Manager.
Oct  9 05:10:22 np0005478303 systemd[1]: Started PackageKit Daemon.
Oct  9 05:10:26 np0005478303 python3[11520]: ansible-ansible.legacy.command Invoked with _raw_params=echo "openstack-k8s-operators+cirobot"#012 _uses_shell=True zuul_log_id=fa163e08-49e2-9746-57c4-00000000000c-1-compute1 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  9 05:10:27 np0005478303 kernel: evm: overlay not supported
Oct  9 05:10:27 np0005478303 systemd[1030]: Starting D-Bus User Message Bus...
Oct  9 05:10:27 np0005478303 dbus-broker-launch[12275]: Policy to allow eavesdropping in /usr/share/dbus-1/session.conf +31: Eavesdropping is deprecated and ignored
Oct  9 05:10:27 np0005478303 dbus-broker-launch[12275]: Policy to allow eavesdropping in /usr/share/dbus-1/session.conf +33: Eavesdropping is deprecated and ignored
Oct  9 05:10:27 np0005478303 systemd[1030]: Started D-Bus User Message Bus.
Oct  9 05:10:27 np0005478303 dbus-broker-lau[12275]: Ready
Oct  9 05:10:27 np0005478303 systemd[1030]: selinux: avc:  op=load_policy lsm=selinux seqno=6 res=1
Oct  9 05:10:27 np0005478303 systemd[1030]: Created slice Slice /user.
Oct  9 05:10:27 np0005478303 systemd[1030]: podman-12203.scope: unit configures an IP firewall, but not running as root.
Oct  9 05:10:27 np0005478303 systemd[1030]: (This warning is only shown for the first unit using IP firewalling.)
Oct  9 05:10:27 np0005478303 systemd[1030]: Started podman-12203.scope.
Oct  9 05:10:27 np0005478303 systemd[1030]: Started podman-pause-18edd792.scope.
Oct  9 05:10:28 np0005478303 systemd-logind[745]: Session 5 logged out. Waiting for processes to exit.
Oct  9 05:10:28 np0005478303 systemd[1]: session-5.scope: Deactivated successfully.
Oct  9 05:10:28 np0005478303 systemd[1]: session-5.scope: Consumed 51.650s CPU time.
Oct  9 05:10:28 np0005478303 systemd-logind[745]: Removed session 5.
Oct  9 05:10:45 np0005478303 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Oct  9 05:10:45 np0005478303 systemd[1]: Finished man-db-cache-update.service.
Oct  9 05:10:45 np0005478303 systemd[1]: man-db-cache-update.service: Consumed 28.757s CPU time.
Oct  9 05:10:45 np0005478303 systemd[1]: run-r7097313dc28d4061bb4d488972192cb4.service: Deactivated successfully.
Oct  9 05:10:50 np0005478303 systemd-logind[745]: New session 6 of user zuul.
Oct  9 05:10:50 np0005478303 systemd[1]: Started Session 6 of User zuul.
Oct  9 05:10:50 np0005478303 python3[26211]: ansible-ansible.posix.authorized_key Invoked with user=zuul key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBFxh/nv6sQLW1yzvGqXNfnJZOZRxYC8qJcgS1V4mG6Ez91eTuQ+QeRIx7PiC27aRMgFhv+XrMbKb0XUoGYd1TGk= zuul@np0005478301#012 manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct  9 05:10:50 np0005478303 python3[26237]: ansible-ansible.posix.authorized_key Invoked with user=root key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBFxh/nv6sQLW1yzvGqXNfnJZOZRxYC8qJcgS1V4mG6Ez91eTuQ+QeRIx7PiC27aRMgFhv+XrMbKb0XUoGYd1TGk= zuul@np0005478301#012 manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct  9 05:10:51 np0005478303 python3[26263]: ansible-ansible.builtin.user Invoked with name=cloud-admin shell=/bin/bash state=present non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on np0005478303 update_password=always uid=None group=None groups=None comment=None home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None
Oct  9 05:10:51 np0005478303 python3[26297]: ansible-ansible.posix.authorized_key Invoked with user=cloud-admin key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBFxh/nv6sQLW1yzvGqXNfnJZOZRxYC8qJcgS1V4mG6Ez91eTuQ+QeRIx7PiC27aRMgFhv+XrMbKb0XUoGYd1TGk= zuul@np0005478301#012 manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct  9 05:10:52 np0005478303 python3[26375]: ansible-ansible.legacy.stat Invoked with path=/etc/sudoers.d/cloud-admin follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct  9 05:10:52 np0005478303 python3[26448]: ansible-ansible.legacy.copy Invoked with dest=/etc/sudoers.d/cloud-admin mode=0640 src=/home/zuul/.ansible/tmp/ansible-tmp-1760001052.0879738-153-3905306788632/source _original_basename=tmpghtcmkjl follow=False checksum=e7614e5ad3ab06eaae55b8efaa2ed81b63ea5634 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 05:10:53 np0005478303 python3[26498]: ansible-ansible.builtin.hostname Invoked with name=compute-1 use=systemd
Oct  9 05:10:53 np0005478303 systemd[1]: Starting Hostname Service...
Oct  9 05:10:53 np0005478303 systemd[1]: Started Hostname Service.
Oct  9 05:10:54 np0005478303 systemd-hostnamed[26502]: Changed pretty hostname to 'compute-1'
Oct  9 05:10:54 np0005478303 systemd-hostnamed[26502]: Hostname set to <compute-1> (static)
Oct  9 05:10:54 np0005478303 NetworkManager[3905]: <info>  [1760001054.4088] hostname: static hostname changed from "np0005478303" to "compute-1"
Oct  9 05:10:54 np0005478303 systemd[1]: Starting Network Manager Script Dispatcher Service...
Oct  9 05:10:54 np0005478303 systemd[1]: Started Network Manager Script Dispatcher Service.
Oct  9 05:10:54 np0005478303 systemd-logind[745]: Session 6 logged out. Waiting for processes to exit.
Oct  9 05:10:54 np0005478303 systemd[1]: session-6.scope: Deactivated successfully.
Oct  9 05:10:54 np0005478303 systemd[1]: session-6.scope: Consumed 1.812s CPU time.
Oct  9 05:10:54 np0005478303 systemd-logind[745]: Removed session 6.
Oct  9 05:11:04 np0005478303 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Oct  9 05:11:24 np0005478303 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Oct  9 05:14:04 np0005478303 systemd-logind[745]: New session 7 of user zuul.
Oct  9 05:14:04 np0005478303 systemd[1]: Started Session 7 of User zuul.
Oct  9 05:14:05 np0005478303 python3[26597]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  9 05:14:06 np0005478303 python3[26709]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/delorean.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct  9 05:14:07 np0005478303 python3[26782]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1760001246.4956563-30888-6859995170572/source mode=0755 _original_basename=delorean.repo follow=False checksum=e6ffbe2bc1ecfd38ca5198d3750b43ac3a0e1ed6 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 05:14:07 np0005478303 python3[26808]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/delorean-antelope-testing.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct  9 05:14:07 np0005478303 python3[26881]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1760001246.4956563-30888-6859995170572/source mode=0755 _original_basename=delorean-antelope-testing.repo follow=False checksum=717d1fa230cffa8c08764d71bd0b4a50d3a90cae backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 05:14:07 np0005478303 python3[26907]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-highavailability.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct  9 05:14:07 np0005478303 python3[26980]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1760001246.4956563-30888-6859995170572/source mode=0755 _original_basename=repo-setup-centos-highavailability.repo follow=False checksum=8163d09913b97597f86e38eb45c3003e91da783e backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 05:14:08 np0005478303 python3[27006]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-powertools.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct  9 05:14:08 np0005478303 python3[27079]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1760001246.4956563-30888-6859995170572/source mode=0755 _original_basename=repo-setup-centos-powertools.repo follow=False checksum=d108d0750ad5b288ccc41bc6534ea307cc51e987 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 05:14:08 np0005478303 python3[27105]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-appstream.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct  9 05:14:08 np0005478303 python3[27178]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1760001246.4956563-30888-6859995170572/source mode=0755 _original_basename=repo-setup-centos-appstream.repo follow=False checksum=20c3917c672c059a872cf09a437f61890d2f89fc backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 05:14:08 np0005478303 python3[27204]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-baseos.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct  9 05:14:09 np0005478303 python3[27277]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1760001246.4956563-30888-6859995170572/source mode=0755 _original_basename=repo-setup-centos-baseos.repo follow=False checksum=4d14f168e8a0e6930d905faffbcdf4fedd6664d0 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 05:14:09 np0005478303 python3[27303]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/delorean.repo.md5 follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct  9 05:14:09 np0005478303 python3[27376]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1760001246.4956563-30888-6859995170572/source mode=0755 _original_basename=delorean.repo.md5 follow=False checksum=75ca8f9fe9a538824fd094f239c30e8ce8652e8a backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 05:14:18 np0005478303 python3[27424]: ansible-ansible.legacy.command Invoked with _raw_params=hostname _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  9 05:15:20 np0005478303 systemd[1]: Starting Cleanup of Temporary Directories...
Oct  9 05:15:20 np0005478303 systemd[1]: systemd-tmpfiles-clean.service: Deactivated successfully.
Oct  9 05:15:20 np0005478303 systemd[1]: Finished Cleanup of Temporary Directories.
Oct  9 05:15:20 np0005478303 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dclean.service.mount: Deactivated successfully.
Oct  9 05:15:27 np0005478303 systemd[1]: packagekit.service: Deactivated successfully.
Oct  9 05:19:18 np0005478303 systemd[1]: session-7.scope: Deactivated successfully.
Oct  9 05:19:18 np0005478303 systemd[1]: session-7.scope: Consumed 3.402s CPU time.
Oct  9 05:19:18 np0005478303 systemd-logind[745]: Session 7 logged out. Waiting for processes to exit.
Oct  9 05:19:18 np0005478303 systemd-logind[745]: Removed session 7.
Oct  9 05:24:26 np0005478303 systemd-logind[745]: New session 8 of user zuul.
Oct  9 05:24:26 np0005478303 systemd[1]: Started Session 8 of User zuul.
Oct  9 05:24:26 np0005478303 python3.9[27586]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  9 05:24:27 np0005478303 python3.9[27767]: ansible-ansible.legacy.command Invoked with _raw_params=set -euxo pipefail#012pushd /var/tmp#012curl -sL https://github.com/openstack-k8s-operators/repo-setup/archive/refs/heads/main.tar.gz | tar -xz#012pushd repo-setup-main#012python3 -m venv ./venv#012PBR_VERSION=0.0.0 ./venv/bin/pip install ./#012./venv/bin/repo-setup current-podified -b antelope#012popd#012rm -rf repo-setup-main#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  9 05:24:36 np0005478303 systemd[1]: session-8.scope: Deactivated successfully.
Oct  9 05:24:36 np0005478303 systemd[1]: session-8.scope: Consumed 6.190s CPU time.
Oct  9 05:24:36 np0005478303 systemd-logind[745]: Session 8 logged out. Waiting for processes to exit.
Oct  9 05:24:36 np0005478303 systemd-logind[745]: Removed session 8.
Oct  9 05:24:51 np0005478303 systemd-logind[745]: New session 9 of user zuul.
Oct  9 05:24:51 np0005478303 systemd[1]: Started Session 9 of User zuul.
Oct  9 05:24:51 np0005478303 python3.9[27977]: ansible-ansible.legacy.ping Invoked with data=pong
Oct  9 05:24:52 np0005478303 python3.9[28151]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  9 05:24:53 np0005478303 python3.9[28303]: ansible-ansible.legacy.command Invoked with _raw_params=PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin which growvols#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  9 05:24:53 np0005478303 python3.9[28456]: ansible-ansible.builtin.stat Invoked with path=/etc/ansible/facts.d/bootc.fact follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  9 05:24:54 np0005478303 python3.9[28608]: ansible-ansible.builtin.file Invoked with mode=755 path=/etc/ansible/facts.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 05:24:55 np0005478303 python3.9[28760]: ansible-ansible.legacy.stat Invoked with path=/etc/ansible/facts.d/bootc.fact follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  9 05:24:55 np0005478303 python3.9[28883]: ansible-ansible.legacy.copy Invoked with dest=/etc/ansible/facts.d/bootc.fact mode=755 src=/home/zuul/.ansible/tmp/ansible-tmp-1760001894.8278515-178-38734312824051/.source.fact _original_basename=bootc.fact follow=False checksum=eb4122ce7fc50a38407beb511c4ff8c178005b12 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 05:24:56 np0005478303 python3.9[29035]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  9 05:24:56 np0005478303 python3.9[29191]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/log/journal setype=var_log_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  9 05:24:57 np0005478303 python3.9[29341]: ansible-ansible.builtin.service_facts Invoked
Oct  9 05:24:59 np0005478303 python3.9[29596]: ansible-ansible.builtin.lineinfile Invoked with line=cloud-init=disabled path=/proc/cmdline state=present backrefs=False create=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 05:25:00 np0005478303 python3.9[29746]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  9 05:25:01 np0005478303 python3.9[29900]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  9 05:25:02 np0005478303 python3.9[30058]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct  9 05:25:02 np0005478303 python3.9[30142]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct  9 05:26:14 np0005478303 dbus-broker-launch[724]: Noticed file-system modification, trigger reload.
Oct  9 05:26:14 np0005478303 dbus-broker-launch[724]: Noticed file-system modification, trigger reload.
Oct  9 05:26:14 np0005478303 dbus-broker-launch[724]: Noticed file-system modification, trigger reload.
Oct  9 05:26:14 np0005478303 dbus-broker-launch[12275]: Noticed file-system modification, trigger reload.
Oct  9 05:26:14 np0005478303 dbus-broker-launch[12275]: Policy to allow eavesdropping in /usr/share/dbus-1/session.conf +31: Eavesdropping is deprecated and ignored
Oct  9 05:26:14 np0005478303 dbus-broker-launch[12275]: Policy to allow eavesdropping in /usr/share/dbus-1/session.conf +33: Eavesdropping is deprecated and ignored
Oct  9 05:26:14 np0005478303 systemd[1]: Reexecuting.
Oct  9 05:26:14 np0005478303 systemd: systemd 252-57.el9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN -IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK +XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Oct  9 05:26:14 np0005478303 systemd: Detected virtualization kvm.
Oct  9 05:26:14 np0005478303 systemd: Detected architecture x86-64.
Oct  9 05:26:14 np0005478303 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  9 05:26:14 np0005478303 systemd[1]: Reloading.
Oct  9 05:26:14 np0005478303 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  9 05:26:15 np0005478303 systemd[1]: Listening on Device-mapper event daemon FIFOs.
Oct  9 05:26:15 np0005478303 systemd[1]: Reloading.
Oct  9 05:26:15 np0005478303 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  9 05:26:15 np0005478303 systemd[1]: Starting Monitoring of LVM2 mirrors, snapshots etc. using dmeventd or progress polling...
Oct  9 05:26:15 np0005478303 systemd[1]: Finished Monitoring of LVM2 mirrors, snapshots etc. using dmeventd or progress polling.
Oct  9 05:26:16 np0005478303 systemd[1]: Reloading.
Oct  9 05:26:16 np0005478303 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  9 05:26:16 np0005478303 systemd[1]: Starting dnf makecache...
Oct  9 05:26:16 np0005478303 systemd[1]: Listening on LVM2 poll daemon socket.
Oct  9 05:26:16 np0005478303 dbus-broker-launch[724]: Noticed file-system modification, trigger reload.
Oct  9 05:26:16 np0005478303 dbus-broker-launch[724]: Noticed file-system modification, trigger reload.
Oct  9 05:26:16 np0005478303 dnf[30515]: Failed determining last makecache time.
Oct  9 05:26:16 np0005478303 dnf[30515]: delorean-openstack-barbican-42b4c41831408a8e323  19 kB/s | 3.0 kB     00:00
Oct  9 05:26:16 np0005478303 dnf[30515]: delorean-python-glean-10df0bd91b9bc5c9fd9cc02d7  19 kB/s | 3.0 kB     00:00
Oct  9 05:26:16 np0005478303 dnf[30515]: delorean-openstack-cinder-1c00d6490d88e436f26ef  19 kB/s | 3.0 kB     00:00
Oct  9 05:26:16 np0005478303 dnf[30515]: delorean-python-stevedore-c4acc5639fd2329372142  20 kB/s | 3.0 kB     00:00
Oct  9 05:26:17 np0005478303 dnf[30515]: delorean-python-cloudkitty-tests-tempest-3961dc  19 kB/s | 3.0 kB     00:00
Oct  9 05:26:17 np0005478303 dnf[30515]: delorean-diskimage-builder-43381184423c185801b5  18 kB/s | 3.0 kB     00:00
Oct  9 05:26:17 np0005478303 dnf[30515]: delorean-openstack-nova-6f8decf0b4f1aa2e96292b6  19 kB/s | 3.0 kB     00:00
Oct  9 05:26:17 np0005478303 dnf[30515]: delorean-python-designate-tests-tempest-347fdbc  19 kB/s | 3.0 kB     00:00
Oct  9 05:26:17 np0005478303 dnf[30515]: delorean-openstack-glance-1fd12c29b339f30fe823e  20 kB/s | 3.0 kB     00:00
Oct  9 05:26:17 np0005478303 dnf[30515]: delorean-openstack-keystone-e4b40af0ae3698fbbbb  19 kB/s | 3.0 kB     00:00
Oct  9 05:26:18 np0005478303 dnf[30515]: delorean-openstack-manila-3c01b7181572c95dac462  20 kB/s | 3.0 kB     00:00
Oct  9 05:26:18 np0005478303 dnf[30515]: delorean-python-vmware-nsxlib-458234972d1428ac9  19 kB/s | 3.0 kB     00:00
Oct  9 05:26:18 np0005478303 dnf[30515]: delorean-openstack-octavia-ba397f07a7331190208c  18 kB/s | 3.0 kB     00:00
Oct  9 05:26:18 np0005478303 dnf[30515]: delorean-openstack-watcher-c014f81a8647287f6dcc  19 kB/s | 3.0 kB     00:00
Oct  9 05:26:18 np0005478303 dnf[30515]: delorean-edpm-image-builder-55ba53cf215b14ed95b  19 kB/s | 3.0 kB     00:00
Oct  9 05:26:18 np0005478303 dnf[30515]: delorean-puppet-ceph-b0c245ccde541a63fde0564366  19 kB/s | 3.0 kB     00:00
Oct  9 05:26:19 np0005478303 dnf[30515]: delorean-openstack-swift-dc98a8463506ac520c469a  19 kB/s | 3.0 kB     00:00
Oct  9 05:26:19 np0005478303 dnf[30515]: delorean-python-tempestconf-8515371b7cceebd4282  19 kB/s | 3.0 kB     00:00
Oct  9 05:26:19 np0005478303 dnf[30515]: delorean-openstack-heat-ui-013accbfd179753bc3f0  19 kB/s | 3.0 kB     00:00
Oct  9 05:26:20 np0005478303 dnf[30515]: CentOS Stream 9 - BaseOS                        4.4 kB/s | 6.1 kB     00:01
Oct  9 05:26:21 np0005478303 dnf[30515]: CentOS Stream 9 - AppStream                      18 kB/s | 6.5 kB     00:00
Oct  9 05:26:21 np0005478303 dnf[30515]: CentOS Stream 9 - CRB                            14 kB/s | 6.0 kB     00:00
Oct  9 05:26:22 np0005478303 dnf[30515]: CentOS Stream 9 - Extras packages                22 kB/s | 8.0 kB     00:00
Oct  9 05:26:22 np0005478303 dnf[30515]: dlrn-antelope-testing                            21 kB/s | 3.0 kB     00:00
Oct  9 05:26:22 np0005478303 dnf[30515]: dlrn-antelope-build-deps                         20 kB/s | 3.0 kB     00:00
Oct  9 05:26:22 np0005478303 dnf[30515]: centos9-rabbitmq                                7.1 kB/s | 3.0 kB     00:00
Oct  9 05:26:23 np0005478303 dnf[30515]: centos9-storage                                 7.1 kB/s | 3.0 kB     00:00
Oct  9 05:26:23 np0005478303 dnf[30515]: centos9-opstools                                7.1 kB/s | 3.0 kB     00:00
Oct  9 05:26:24 np0005478303 dnf[30515]: NFV SIG OpenvSwitch                             7.1 kB/s | 3.0 kB     00:00
Oct  9 05:26:24 np0005478303 dnf[30515]: repo-setup-centos-appstream                      10 kB/s | 4.4 kB     00:00
Oct  9 05:26:24 np0005478303 dnf[30515]: repo-setup-centos-baseos                        9.1 kB/s | 3.9 kB     00:00
Oct  9 05:26:26 np0005478303 dnf[30515]: repo-setup-centos-highavailability              2.0 kB/s | 3.9 kB     00:01
Oct  9 05:26:27 np0005478303 dnf[30515]: repo-setup-centos-powertools                     10 kB/s | 4.3 kB     00:00
Oct  9 05:26:27 np0005478303 dnf[30515]: Extra Packages for Enterprise Linux 9 - x86_64   77 kB/s |  30 kB     00:00
Oct  9 05:26:28 np0005478303 dnf[30515]: Metadata cache created.
Oct  9 05:26:28 np0005478303 systemd[1]: dnf-makecache.service: Deactivated successfully.
Oct  9 05:26:28 np0005478303 systemd[1]: Finished dnf makecache.
Oct  9 05:26:28 np0005478303 systemd[1]: dnf-makecache.service: Consumed 1.252s CPU time.
Oct  9 05:27:00 np0005478303 kernel: SELinux:  Converting 2715 SID table entries...
Oct  9 05:27:00 np0005478303 kernel: SELinux:  policy capability network_peer_controls=1
Oct  9 05:27:00 np0005478303 kernel: SELinux:  policy capability open_perms=1
Oct  9 05:27:00 np0005478303 kernel: SELinux:  policy capability extended_socket_class=1
Oct  9 05:27:00 np0005478303 kernel: SELinux:  policy capability always_check_network=0
Oct  9 05:27:00 np0005478303 kernel: SELinux:  policy capability cgroup_seclabel=1
Oct  9 05:27:00 np0005478303 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Oct  9 05:27:00 np0005478303 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Oct  9 05:27:00 np0005478303 dbus-broker-launch[734]: avc:  op=load_policy lsm=selinux seqno=8 res=1
Oct  9 05:27:01 np0005478303 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Oct  9 05:27:01 np0005478303 systemd[1]: Starting man-db-cache-update.service...
Oct  9 05:27:01 np0005478303 systemd[1]: Reloading.
Oct  9 05:27:01 np0005478303 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  9 05:27:01 np0005478303 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Oct  9 05:27:01 np0005478303 systemd[1]: Queuing reload/restart jobs for marked units…
Oct  9 05:27:01 np0005478303 systemd-journald[647]: Journal stopped
Oct  9 05:27:01 np0005478303 systemd-journald: Received SIGTERM from PID 1 (systemd).
Oct  9 05:27:01 np0005478303 systemd: Stopping Journal Service...
Oct  9 05:27:01 np0005478303 systemd: Stopping Rule-based Manager for Device Events and Files...
Oct  9 05:27:01 np0005478303 systemd: systemd-journald.service: Deactivated successfully.
Oct  9 05:27:01 np0005478303 systemd: Stopped Journal Service.
Oct  9 05:27:01 np0005478303 systemd: Starting Journal Service...
Oct  9 05:27:01 np0005478303 systemd: systemd-udevd.service: Deactivated successfully.
Oct  9 05:27:01 np0005478303 systemd: Stopped Rule-based Manager for Device Events and Files.
Oct  9 05:27:01 np0005478303 systemd: systemd-udevd.service: Consumed 1.286s CPU time.
Oct  9 05:27:01 np0005478303 systemd: Starting Rule-based Manager for Device Events and Files...
Oct  9 05:27:01 np0005478303 systemd-journald[30864]: Journal started
Oct  9 05:27:01 np0005478303 systemd-journald[30864]: Runtime Journal (/run/log/journal/42833e1b511a402df82cb9cb2fc36491) is 8.0M, max 153.6M, 145.6M free.
Oct  9 05:27:01 np0005478303 systemd: Started Journal Service.
Oct  9 05:27:01 np0005478303 systemd-udevd[30873]: Using default interface naming scheme 'rhel-9.0'.
Oct  9 05:27:01 np0005478303 systemd[1]: Started Rule-based Manager for Device Events and Files.
Oct  9 05:27:01 np0005478303 systemd[1]: Reloading.
Oct  9 05:27:01 np0005478303 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  9 05:27:01 np0005478303 systemd[1]: Queuing reload/restart jobs for marked units…
Oct  9 05:27:02 np0005478303 systemd[1]: Starting PackageKit Daemon...
Oct  9 05:27:02 np0005478303 systemd[1]: Started PackageKit Daemon.
Oct  9 05:27:06 np0005478303 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Oct  9 05:27:06 np0005478303 systemd[1]: Finished man-db-cache-update.service.
Oct  9 05:27:06 np0005478303 systemd[1]: man-db-cache-update.service: Consumed 6.721s CPU time.
Oct  9 05:27:06 np0005478303 systemd[1]: run-r6decdf0a97a64a4597f2bb6cf08dc35d.service: Deactivated successfully.
Oct  9 05:27:06 np0005478303 systemd[1]: run-rfe776a234b7b4ab3acbba9e994b37941.service: Deactivated successfully.
Oct  9 05:27:08 np0005478303 python3.9[38678]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  9 05:27:09 np0005478303 python3.9[38959]: ansible-ansible.posix.selinux Invoked with policy=targeted state=enforcing configfile=/etc/selinux/config update_kernel_param=False
Oct  9 05:27:10 np0005478303 python3.9[39111]: ansible-ansible.legacy.command Invoked with cmd=dd if=/dev/zero of=/swap count=1024 bs=1M creates=/swap _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None removes=None stdin=None
Oct  9 05:27:12 np0005478303 python3.9[39264]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/swap recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False state=None _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 05:27:12 np0005478303 python3.9[39416]: ansible-ansible.posix.mount Invoked with dump=0 fstype=swap name=none opts=sw passno=0 src=/swap state=present path=none boot=True opts_no_log=False backup=False fstab=None
Oct  9 05:27:14 np0005478303 python3.9[39568]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/ca-trust/source/anchors setype=cert_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  9 05:27:14 np0005478303 python3.9[39720]: ansible-ansible.legacy.stat Invoked with path=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  9 05:27:14 np0005478303 python3.9[39843]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760002034.1764884-640-241484044841267/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=18663dce7579212939db4e772c3b048f7d3aa6f0 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 05:27:16 np0005478303 python3.9[39995]: ansible-ansible.builtin.getent Invoked with database=passwd key=qemu fail_key=True service=None split=None
Oct  9 05:27:19 np0005478303 python3.9[40148]: ansible-ansible.builtin.group Invoked with gid=107 name=qemu state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Oct  9 05:27:19 np0005478303 python3.9[40306]: ansible-ansible.builtin.user Invoked with comment=qemu user group=qemu groups=[''] name=qemu shell=/sbin/nologin state=present uid=107 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-1 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Oct  9 05:27:19 np0005478303 rsyslogd[959]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct  9 05:27:20 np0005478303 python3.9[40467]: ansible-ansible.builtin.getent Invoked with database=passwd key=hugetlbfs fail_key=True service=None split=None
Oct  9 05:27:20 np0005478303 python3.9[40620]: ansible-ansible.builtin.group Invoked with gid=42477 name=hugetlbfs state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Oct  9 05:27:21 np0005478303 python3.9[40778]: ansible-ansible.builtin.file Invoked with group=qemu mode=0755 owner=qemu path=/var/lib/vhost_sockets setype=virt_cache_t seuser=system_u state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None serole=None selevel=None attributes=None
Oct  9 05:27:22 np0005478303 python3.9[40930]: ansible-ansible.legacy.dnf Invoked with name=['dracut-config-generic'] state=absent allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct  9 05:27:23 np0005478303 python3.9[41083]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/modules-load.d setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  9 05:27:24 np0005478303 python3.9[41235]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  9 05:27:24 np0005478303 python3.9[41358]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/99-edpm.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1760002043.8369384-925-251941070414488/.source.conf follow=False _original_basename=edpm-modprobe.conf.j2 checksum=8021efe01721d8fa8cab46b95c00ec1be6dbb9d0 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Oct  9 05:27:25 np0005478303 python3.9[41510]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct  9 05:27:25 np0005478303 systemd[1]: Starting Load Kernel Modules...
Oct  9 05:27:25 np0005478303 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Oct  9 05:27:25 np0005478303 systemd-modules-load[41514]: Inserted module 'br_netfilter'
Oct  9 05:27:25 np0005478303 kernel: Bridge firewalling registered
Oct  9 05:27:25 np0005478303 systemd[1]: Finished Load Kernel Modules.
Oct  9 05:27:25 np0005478303 python3.9[41669]: ansible-ansible.legacy.stat Invoked with path=/etc/sysctl.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  9 05:27:26 np0005478303 python3.9[41792]: ansible-ansible.legacy.copy Invoked with dest=/etc/sysctl.d/99-edpm.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1760002045.5577226-994-191472630805017/.source.conf follow=False _original_basename=edpm-sysctl.conf.j2 checksum=2a366439721b855adcfe4d7f152babb68596a007 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Oct  9 05:27:26 np0005478303 python3.9[41944]: ansible-ansible.legacy.dnf Invoked with name=['tuned', 'tuned-profiles-cpu-partitioning'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct  9 05:27:31 np0005478303 dbus-broker-launch[724]: Noticed file-system modification, trigger reload.
Oct  9 05:27:31 np0005478303 dbus-broker-launch[724]: Noticed file-system modification, trigger reload.
Oct  9 05:27:31 np0005478303 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Oct  9 05:27:31 np0005478303 systemd[1]: Starting man-db-cache-update.service...
Oct  9 05:27:31 np0005478303 systemd[1]: Reloading.
Oct  9 05:27:31 np0005478303 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  9 05:27:31 np0005478303 systemd[1]: Queuing reload/restart jobs for marked units…
Oct  9 05:27:33 np0005478303 python3.9[45107]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/active_profile follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  9 05:27:34 np0005478303 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Oct  9 05:27:34 np0005478303 systemd[1]: Finished man-db-cache-update.service.
Oct  9 05:27:34 np0005478303 systemd[1]: man-db-cache-update.service: Consumed 2.896s CPU time.
Oct  9 05:27:34 np0005478303 systemd[1]: run-r3b2d1b6f1b23403fad2308ef51bbfe80.service: Deactivated successfully.
Oct  9 05:27:34 np0005478303 python3.9[45656]: ansible-ansible.builtin.slurp Invoked with src=/etc/tuned/active_profile
Oct  9 05:27:35 np0005478303 python3.9[45806]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/throughput-performance-variables.conf follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  9 05:27:35 np0005478303 python3.9[45958]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/tuned-adm profile throughput-performance _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  9 05:27:35 np0005478303 systemd[1]: Starting Dynamic System Tuning Daemon...
Oct  9 05:27:36 np0005478303 systemd[1]: Started Dynamic System Tuning Daemon.
Oct  9 05:27:36 np0005478303 python3.9[46331]: ansible-ansible.builtin.systemd Invoked with enabled=True name=tuned state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  9 05:27:36 np0005478303 systemd[1]: Stopping Dynamic System Tuning Daemon...
Oct  9 05:27:36 np0005478303 systemd[1]: tuned.service: Deactivated successfully.
Oct  9 05:27:36 np0005478303 systemd[1]: Stopped Dynamic System Tuning Daemon.
Oct  9 05:27:36 np0005478303 systemd[1]: Starting Dynamic System Tuning Daemon...
Oct  9 05:27:36 np0005478303 systemd[1]: Started Dynamic System Tuning Daemon.
Oct  9 05:27:37 np0005478303 python3.9[46492]: ansible-ansible.builtin.slurp Invoked with src=/proc/cmdline
Oct  9 05:27:39 np0005478303 python3.9[46644]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksm.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  9 05:27:40 np0005478303 systemd[1]: Reloading.
Oct  9 05:27:40 np0005478303 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  9 05:27:40 np0005478303 python3.9[46832]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksmtuned.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  9 05:27:40 np0005478303 systemd[1]: Reloading.
Oct  9 05:27:40 np0005478303 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  9 05:27:41 np0005478303 python3.9[47021]: ansible-ansible.legacy.command Invoked with _raw_params=mkswap "/swap" _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  9 05:27:41 np0005478303 python3.9[47174]: ansible-ansible.legacy.command Invoked with _raw_params=swapon "/swap" _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  9 05:27:41 np0005478303 kernel: Adding 1048572k swap on /swap.  Priority:-2 extents:1 across:1048572k 
Oct  9 05:27:42 np0005478303 python3.9[47327]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/bin/update-ca-trust _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  9 05:27:44 np0005478303 python3.9[47489]: ansible-ansible.legacy.command Invoked with _raw_params=echo 2 >/sys/kernel/mm/ksm/run _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  9 05:27:44 np0005478303 python3.9[47642]: ansible-ansible.builtin.systemd Invoked with name=systemd-sysctl.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct  9 05:27:44 np0005478303 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Oct  9 05:27:44 np0005478303 systemd[1]: Stopped Apply Kernel Variables.
Oct  9 05:27:44 np0005478303 systemd[1]: Stopping Apply Kernel Variables...
Oct  9 05:27:44 np0005478303 systemd[1]: Starting Apply Kernel Variables...
Oct  9 05:27:44 np0005478303 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Oct  9 05:27:44 np0005478303 systemd[1]: Finished Apply Kernel Variables.
Oct  9 05:27:45 np0005478303 systemd[1]: session-9.scope: Deactivated successfully.
Oct  9 05:27:45 np0005478303 systemd[1]: session-9.scope: Consumed 1min 40.045s CPU time.
Oct  9 05:27:45 np0005478303 systemd-logind[745]: Session 9 logged out. Waiting for processes to exit.
Oct  9 05:27:45 np0005478303 systemd-logind[745]: Removed session 9.
Oct  9 05:27:50 np0005478303 systemd-logind[745]: New session 10 of user zuul.
Oct  9 05:27:50 np0005478303 systemd[1]: Started Session 10 of User zuul.
Oct  9 05:27:51 np0005478303 python3.9[47825]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  9 05:27:52 np0005478303 python3.9[47981]: ansible-ansible.builtin.getent Invoked with database=passwd key=openvswitch fail_key=True service=None split=None
Oct  9 05:27:52 np0005478303 python3.9[48134]: ansible-ansible.builtin.group Invoked with gid=42476 name=openvswitch state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Oct  9 05:27:53 np0005478303 python3.9[48292]: ansible-ansible.builtin.user Invoked with comment=openvswitch user group=openvswitch groups=['hugetlbfs'] name=openvswitch shell=/sbin/nologin state=present uid=42476 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-1 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Oct  9 05:27:54 np0005478303 python3.9[48452]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct  9 05:27:54 np0005478303 python3.9[48536]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['openvswitch'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Oct  9 05:28:04 np0005478303 python3.9[48700]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct  9 05:28:13 np0005478303 kernel: SELinux:  Converting 2726 SID table entries...
Oct  9 05:28:13 np0005478303 kernel: SELinux:  policy capability network_peer_controls=1
Oct  9 05:28:13 np0005478303 kernel: SELinux:  policy capability open_perms=1
Oct  9 05:28:13 np0005478303 kernel: SELinux:  policy capability extended_socket_class=1
Oct  9 05:28:13 np0005478303 kernel: SELinux:  policy capability always_check_network=0
Oct  9 05:28:13 np0005478303 kernel: SELinux:  policy capability cgroup_seclabel=1
Oct  9 05:28:13 np0005478303 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Oct  9 05:28:13 np0005478303 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Oct  9 05:28:13 np0005478303 dbus-broker-launch[734]: avc:  op=load_policy lsm=selinux seqno=9 res=1
Oct  9 05:28:13 np0005478303 systemd[1]: Started daily update of the root trust anchor for DNSSEC.
Oct  9 05:28:13 np0005478303 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Oct  9 05:28:13 np0005478303 systemd[1]: Starting man-db-cache-update.service...
Oct  9 05:28:14 np0005478303 systemd[1]: Reloading.
Oct  9 05:28:14 np0005478303 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  9 05:28:14 np0005478303 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  9 05:28:14 np0005478303 systemd[1]: Queuing reload/restart jobs for marked units…
Oct  9 05:28:14 np0005478303 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Oct  9 05:28:14 np0005478303 systemd[1]: Finished man-db-cache-update.service.
Oct  9 05:28:14 np0005478303 systemd[1]: run-r26646dba9ce845109a35cda60ee063cc.service: Deactivated successfully.
Oct  9 05:28:15 np0005478303 python3.9[49801]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Oct  9 05:28:15 np0005478303 systemd[1]: Reloading.
Oct  9 05:28:15 np0005478303 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  9 05:28:15 np0005478303 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  9 05:28:15 np0005478303 systemd[1]: Starting Open vSwitch Database Unit...
Oct  9 05:28:15 np0005478303 chown[49843]: /usr/bin/chown: cannot access '/run/openvswitch': No such file or directory
Oct  9 05:28:15 np0005478303 ovs-ctl[49848]: /etc/openvswitch/conf.db does not exist ... (warning).
Oct  9 05:28:15 np0005478303 ovs-ctl[49848]: Creating empty database /etc/openvswitch/conf.db [  OK  ]
Oct  9 05:28:15 np0005478303 ovs-ctl[49848]: Starting ovsdb-server [  OK  ]
Oct  9 05:28:15 np0005478303 ovs-vsctl[49897]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait -- init -- set Open_vSwitch . db-version=8.5.1
Oct  9 05:28:15 np0005478303 ovs-vsctl[49916]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait set Open_vSwitch . ovs-version=3.3.5-115.el9s "external-ids:system-id=\"1479fb1d-afaa-427a-bdce-40294d3573d2\"" "external-ids:rundir=\"/var/run/openvswitch\"" "system-type=\"centos\"" "system-version=\"9\""
Oct  9 05:28:15 np0005478303 ovs-ctl[49848]: Configuring Open vSwitch system IDs [  OK  ]
Oct  9 05:28:15 np0005478303 ovs-ctl[49848]: Enabling remote OVSDB managers [  OK  ]
Oct  9 05:28:15 np0005478303 systemd[1]: Started Open vSwitch Database Unit.
Oct  9 05:28:15 np0005478303 systemd[1]: Starting Open vSwitch Delete Transient Ports...
Oct  9 05:28:15 np0005478303 systemd[1]: Finished Open vSwitch Delete Transient Ports.
Oct  9 05:28:15 np0005478303 ovs-vsctl[49932]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait add Open_vSwitch . external-ids hostname=compute-1
Oct  9 05:28:15 np0005478303 systemd[1]: Starting Open vSwitch Forwarding Unit...
Oct  9 05:28:15 np0005478303 kernel: openvswitch: Open vSwitch switching datapath
Oct  9 05:28:15 np0005478303 ovs-ctl[49966]: Inserting openvswitch module [  OK  ]
Oct  9 05:28:15 np0005478303 ovs-ctl[49935]: Starting ovs-vswitchd [  OK  ]
Oct  9 05:28:15 np0005478303 ovs-ctl[49935]: Enabling remote OVSDB managers [  OK  ]
Oct  9 05:28:15 np0005478303 systemd[1]: Started Open vSwitch Forwarding Unit.
Oct  9 05:28:15 np0005478303 ovs-vsctl[49984]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait add Open_vSwitch . external-ids hostname=compute-1
Oct  9 05:28:15 np0005478303 systemd[1]: Starting Open vSwitch...
Oct  9 05:28:15 np0005478303 systemd[1]: Finished Open vSwitch.
Oct  9 05:28:16 np0005478303 python3.9[50135]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  9 05:28:17 np0005478303 python3.9[50287]: ansible-community.general.sefcontext Invoked with selevel=s0 setype=container_file_t state=present target=/var/lib/edpm-config(/.*)? ignore_selinux_state=False ftype=a reload=True substitute=None seuser=None
Oct  9 05:28:18 np0005478303 kernel: SELinux:  Converting 2740 SID table entries...
Oct  9 05:28:18 np0005478303 kernel: SELinux:  policy capability network_peer_controls=1
Oct  9 05:28:18 np0005478303 kernel: SELinux:  policy capability open_perms=1
Oct  9 05:28:18 np0005478303 kernel: SELinux:  policy capability extended_socket_class=1
Oct  9 05:28:18 np0005478303 kernel: SELinux:  policy capability always_check_network=0
Oct  9 05:28:18 np0005478303 kernel: SELinux:  policy capability cgroup_seclabel=1
Oct  9 05:28:18 np0005478303 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Oct  9 05:28:18 np0005478303 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Oct  9 05:28:18 np0005478303 python3.9[50442]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  9 05:28:19 np0005478303 dbus-broker-launch[734]: avc:  op=load_policy lsm=selinux seqno=10 res=1
Oct  9 05:28:19 np0005478303 python3.9[50600]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct  9 05:28:20 np0005478303 python3.9[50753]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  9 05:28:22 np0005478303 python3.9[51040]: ansible-ansible.builtin.file Invoked with mode=0750 path=/var/lib/edpm-config selevel=s0 setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Oct  9 05:28:22 np0005478303 python3.9[51190]: ansible-ansible.builtin.stat Invoked with path=/etc/cloud/cloud.cfg.d follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  9 05:28:23 np0005478303 python3.9[51344]: ansible-ansible.legacy.dnf Invoked with name=['NetworkManager-ovs'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct  9 05:28:25 np0005478303 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Oct  9 05:28:25 np0005478303 systemd[1]: Starting man-db-cache-update.service...
Oct  9 05:28:26 np0005478303 systemd[1]: Reloading.
Oct  9 05:28:26 np0005478303 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  9 05:28:26 np0005478303 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  9 05:28:26 np0005478303 systemd[1]: Queuing reload/restart jobs for marked units…
Oct  9 05:28:26 np0005478303 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Oct  9 05:28:26 np0005478303 systemd[1]: Finished man-db-cache-update.service.
Oct  9 05:28:26 np0005478303 systemd[1]: run-r6305f301c21340f28b63d032e80f9857.service: Deactivated successfully.
Oct  9 05:28:26 np0005478303 python3.9[51660]: ansible-ansible.builtin.systemd Invoked with name=NetworkManager state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct  9 05:28:27 np0005478303 systemd[1]: NetworkManager-wait-online.service: Deactivated successfully.
Oct  9 05:28:27 np0005478303 systemd[1]: Stopped Network Manager Wait Online.
Oct  9 05:28:27 np0005478303 systemd[1]: Stopping Network Manager Wait Online...
Oct  9 05:28:27 np0005478303 systemd[1]: Stopping Network Manager...
Oct  9 05:28:27 np0005478303 NetworkManager[3905]: <info>  [1760002107.0097] caught SIGTERM, shutting down normally.
Oct  9 05:28:27 np0005478303 NetworkManager[3905]: <info>  [1760002107.0106] dhcp4 (eth0): canceled DHCP transaction
Oct  9 05:28:27 np0005478303 NetworkManager[3905]: <info>  [1760002107.0106] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Oct  9 05:28:27 np0005478303 NetworkManager[3905]: <info>  [1760002107.0106] dhcp4 (eth0): state changed no lease
Oct  9 05:28:27 np0005478303 NetworkManager[3905]: <info>  [1760002107.0107] dhcp6 (eth0): canceled DHCP transaction
Oct  9 05:28:27 np0005478303 NetworkManager[3905]: <info>  [1760002107.0107] dhcp6 (eth0): activation: beginning transaction (timeout in 45 seconds)
Oct  9 05:28:27 np0005478303 NetworkManager[3905]: <info>  [1760002107.0107] dhcp6 (eth0): state changed no lease
Oct  9 05:28:27 np0005478303 NetworkManager[3905]: <info>  [1760002107.0109] manager: NetworkManager state is now CONNECTED_SITE
Oct  9 05:28:27 np0005478303 NetworkManager[3905]: <info>  [1760002107.0141] exiting (success)
Oct  9 05:28:27 np0005478303 systemd[1]: Starting Network Manager Script Dispatcher Service...
Oct  9 05:28:27 np0005478303 systemd[1]: Started Network Manager Script Dispatcher Service.
Oct  9 05:28:27 np0005478303 systemd[1]: NetworkManager.service: Deactivated successfully.
Oct  9 05:28:27 np0005478303 systemd[1]: Stopped Network Manager.
Oct  9 05:28:27 np0005478303 systemd[1]: Starting Network Manager...
Oct  9 05:28:27 np0005478303 NetworkManager[51670]: <info>  [1760002107.0569] NetworkManager (version 1.54.1-1.el9) is starting... (after a restart, boot:e0b7fcdd-1586-415d-8058-c87bd65cc6fe)
Oct  9 05:28:27 np0005478303 NetworkManager[51670]: <info>  [1760002107.0570] Read config: /etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf, /etc/NetworkManager/conf.d/99-cloud-init.conf
Oct  9 05:28:27 np0005478303 NetworkManager[51670]: <info>  [1760002107.0610] manager[0x5617b6044010]: monitoring kernel firmware directory '/lib/firmware'.
Oct  9 05:28:27 np0005478303 systemd[1]: Starting Hostname Service...
Oct  9 05:28:27 np0005478303 systemd[1]: Started Hostname Service.
Oct  9 05:28:27 np0005478303 NetworkManager[51670]: <info>  [1760002107.1159] hostname: hostname: using hostnamed
Oct  9 05:28:27 np0005478303 NetworkManager[51670]: <info>  [1760002107.1159] hostname: static hostname changed from (none) to "compute-1"
Oct  9 05:28:27 np0005478303 NetworkManager[51670]: <info>  [1760002107.1161] dns-mgr: init: dns=none,systemd-resolved rc-manager=unmanaged
Oct  9 05:28:27 np0005478303 NetworkManager[51670]: <info>  [1760002107.1164] manager[0x5617b6044010]: rfkill: Wi-Fi hardware radio set enabled
Oct  9 05:28:27 np0005478303 NetworkManager[51670]: <info>  [1760002107.1164] manager[0x5617b6044010]: rfkill: WWAN hardware radio set enabled
Oct  9 05:28:27 np0005478303 NetworkManager[51670]: <info>  [1760002107.1179] Loaded device plugin: NMOvsFactory (/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-device-plugin-ovs.so)
Oct  9 05:28:27 np0005478303 NetworkManager[51670]: <info>  [1760002107.1185] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-device-plugin-team.so)
Oct  9 05:28:27 np0005478303 NetworkManager[51670]: <info>  [1760002107.1185] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Oct  9 05:28:27 np0005478303 NetworkManager[51670]: <info>  [1760002107.1186] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Oct  9 05:28:27 np0005478303 NetworkManager[51670]: <info>  [1760002107.1186] manager: Networking is enabled by state file
Oct  9 05:28:27 np0005478303 NetworkManager[51670]: <info>  [1760002107.1188] settings: Loaded settings plugin: keyfile (internal)
Oct  9 05:28:27 np0005478303 NetworkManager[51670]: <info>  [1760002107.1190] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-settings-plugin-ifcfg-rh.so")
Oct  9 05:28:27 np0005478303 NetworkManager[51670]: <info>  [1760002107.1206] Warning: the ifcfg-rh plugin is deprecated, please migrate connections to the keyfile format using "nmcli connection migrate"
Oct  9 05:28:27 np0005478303 NetworkManager[51670]: <info>  [1760002107.1214] dhcp: init: Using DHCP client 'internal'
Oct  9 05:28:27 np0005478303 NetworkManager[51670]: <info>  [1760002107.1216] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1)
Oct  9 05:28:27 np0005478303 NetworkManager[51670]: <info>  [1760002107.1219] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  9 05:28:27 np0005478303 NetworkManager[51670]: <info>  [1760002107.1224] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Oct  9 05:28:27 np0005478303 NetworkManager[51670]: <info>  [1760002107.1229] device (lo): Activation: starting connection 'lo' (536fd1ae-144f-4da0-bdc6-b373fcef3967)
Oct  9 05:28:27 np0005478303 NetworkManager[51670]: <info>  [1760002107.1233] device (eth0): carrier: link connected
Oct  9 05:28:27 np0005478303 NetworkManager[51670]: <info>  [1760002107.1236] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
Oct  9 05:28:27 np0005478303 NetworkManager[51670]: <info>  [1760002107.1239] manager: (eth0): assume: will attempt to assume matching connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03) (indicated)
Oct  9 05:28:27 np0005478303 NetworkManager[51670]: <info>  [1760002107.1239] device (eth0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Oct  9 05:28:27 np0005478303 NetworkManager[51670]: <info>  [1760002107.1243] device (eth0): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Oct  9 05:28:27 np0005478303 NetworkManager[51670]: <info>  [1760002107.1247] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Oct  9 05:28:27 np0005478303 NetworkManager[51670]: <info>  [1760002107.1250] device (eth1): carrier: link connected
Oct  9 05:28:27 np0005478303 NetworkManager[51670]: <info>  [1760002107.1253] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3)
Oct  9 05:28:27 np0005478303 NetworkManager[51670]: <info>  [1760002107.1256] manager: (eth1): assume: will attempt to assume matching connection 'ci-private-network' (66d16662-8a58-5f35-9b69-4caa739b599b) (indicated)
Oct  9 05:28:27 np0005478303 NetworkManager[51670]: <info>  [1760002107.1256] device (eth1): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Oct  9 05:28:27 np0005478303 NetworkManager[51670]: <info>  [1760002107.1259] device (eth1): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Oct  9 05:28:27 np0005478303 NetworkManager[51670]: <info>  [1760002107.1263] device (eth1): Activation: starting connection 'ci-private-network' (66d16662-8a58-5f35-9b69-4caa739b599b)
Oct  9 05:28:27 np0005478303 systemd[1]: Started Network Manager.
Oct  9 05:28:27 np0005478303 NetworkManager[51670]: <info>  [1760002107.1282] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Oct  9 05:28:27 np0005478303 NetworkManager[51670]: <info>  [1760002107.1295] device (lo): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Oct  9 05:28:27 np0005478303 NetworkManager[51670]: <info>  [1760002107.1297] device (lo): state change: prepare -> config (reason 'none', managed-type: 'external')
Oct  9 05:28:27 np0005478303 NetworkManager[51670]: <info>  [1760002107.1298] device (lo): state change: config -> ip-config (reason 'none', managed-type: 'external')
Oct  9 05:28:27 np0005478303 NetworkManager[51670]: <info>  [1760002107.1299] device (eth0): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Oct  9 05:28:27 np0005478303 NetworkManager[51670]: <info>  [1760002107.1300] device (eth0): state change: prepare -> config (reason 'none', managed-type: 'assume')
Oct  9 05:28:27 np0005478303 NetworkManager[51670]: <info>  [1760002107.1301] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Oct  9 05:28:27 np0005478303 NetworkManager[51670]: <info>  [1760002107.1307] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'assume')
Oct  9 05:28:27 np0005478303 NetworkManager[51670]: <info>  [1760002107.1308] device (lo): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Oct  9 05:28:27 np0005478303 NetworkManager[51670]: <info>  [1760002107.1313] device (eth0): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Oct  9 05:28:27 np0005478303 NetworkManager[51670]: <info>  [1760002107.1315] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Oct  9 05:28:27 np0005478303 NetworkManager[51670]: <info>  [1760002107.1317] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Oct  9 05:28:27 np0005478303 NetworkManager[51670]: <info>  [1760002107.1321] policy: set 'System eth0' (eth0) as default for IPv6 routing and DNS
Oct  9 05:28:27 np0005478303 NetworkManager[51670]: <info>  [1760002107.1324] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Oct  9 05:28:27 np0005478303 NetworkManager[51670]: <info>  [1760002107.1329] dhcp6 (eth0): activation: beginning transaction (timeout in 45 seconds)
Oct  9 05:28:27 np0005478303 systemd[1]: Starting Network Manager Wait Online...
Oct  9 05:28:27 np0005478303 NetworkManager[51670]: <info>  [1760002107.1341] dhcp4 (eth0): state changed new lease, address=192.168.26.45
Oct  9 05:28:27 np0005478303 NetworkManager[51670]: <info>  [1760002107.1345] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS
Oct  9 05:28:27 np0005478303 NetworkManager[51670]: <info>  [1760002107.1408] device (lo): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Oct  9 05:28:27 np0005478303 NetworkManager[51670]: <info>  [1760002107.1410] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Oct  9 05:28:27 np0005478303 NetworkManager[51670]: <info>  [1760002107.1411] device (lo): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Oct  9 05:28:27 np0005478303 NetworkManager[51670]: <info>  [1760002107.1415] device (lo): Activation: successful, device activated.
Oct  9 05:28:27 np0005478303 NetworkManager[51670]: <info>  [1760002107.1419] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Oct  9 05:28:27 np0005478303 NetworkManager[51670]: <info>  [1760002107.1423] manager: NetworkManager state is now CONNECTED_LOCAL
Oct  9 05:28:27 np0005478303 NetworkManager[51670]: <info>  [1760002107.1425] device (eth1): Activation: successful, device activated.
Oct  9 05:28:27 np0005478303 python3.9[51869]: ansible-ansible.legacy.dnf Invoked with name=['os-net-config'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct  9 05:28:28 np0005478303 NetworkManager[51670]: <info>  [1760002108.2443] dhcp6 (eth0): state changed new lease, address=2001:db8::24
Oct  9 05:28:28 np0005478303 NetworkManager[51670]: <info>  [1760002108.2453] device (eth0): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Oct  9 05:28:28 np0005478303 NetworkManager[51670]: <info>  [1760002108.2481] device (eth0): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Oct  9 05:28:28 np0005478303 NetworkManager[51670]: <info>  [1760002108.2482] device (eth0): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Oct  9 05:28:28 np0005478303 NetworkManager[51670]: <info>  [1760002108.2484] manager: NetworkManager state is now CONNECTED_SITE
Oct  9 05:28:28 np0005478303 NetworkManager[51670]: <info>  [1760002108.2486] device (eth0): Activation: successful, device activated.
Oct  9 05:28:28 np0005478303 NetworkManager[51670]: <info>  [1760002108.2489] manager: NetworkManager state is now CONNECTED_GLOBAL
Oct  9 05:28:28 np0005478303 NetworkManager[51670]: <info>  [1760002108.2491] manager: startup complete
Oct  9 05:28:28 np0005478303 systemd[1]: Finished Network Manager Wait Online.
Oct  9 05:28:35 np0005478303 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Oct  9 05:28:35 np0005478303 systemd[1]: Starting man-db-cache-update.service...
Oct  9 05:28:35 np0005478303 systemd[1]: Reloading.
Oct  9 05:28:35 np0005478303 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  9 05:28:35 np0005478303 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  9 05:28:35 np0005478303 systemd[1]: Queuing reload/restart jobs for marked units…
Oct  9 05:28:35 np0005478303 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Oct  9 05:28:35 np0005478303 systemd[1]: Finished man-db-cache-update.service.
Oct  9 05:28:35 np0005478303 systemd[1]: run-reaff25f39ddf443ab1489b7cd493c71a.service: Deactivated successfully.
Oct  9 05:28:36 np0005478303 python3.9[52352]: ansible-ansible.builtin.stat Invoked with path=/var/lib/edpm-config/os-net-config.returncode follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  9 05:28:37 np0005478303 python3.9[52504]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=no-auto-default path=/etc/NetworkManager/NetworkManager.conf section=main state=present value=* exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 05:28:37 np0005478303 python3.9[52658]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=dns path=/etc/NetworkManager/NetworkManager.conf section=main state=absent value=none exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 05:28:38 np0005478303 python3.9[52810]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=dns path=/etc/NetworkManager/conf.d/99-cloud-init.conf section=main state=absent value=none exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 05:28:38 np0005478303 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Oct  9 05:28:38 np0005478303 python3.9[52964]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=rc-manager path=/etc/NetworkManager/NetworkManager.conf section=main state=absent value=unmanaged exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 05:28:39 np0005478303 python3.9[53116]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=rc-manager path=/etc/NetworkManager/conf.d/99-cloud-init.conf section=main state=absent value=unmanaged exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 05:28:39 np0005478303 python3.9[53268]: ansible-ansible.legacy.stat Invoked with path=/etc/dhcp/dhclient-enter-hooks follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  9 05:28:40 np0005478303 python3.9[53391]: ansible-ansible.legacy.copy Invoked with dest=/etc/dhcp/dhclient-enter-hooks mode=0755 src=/home/zuul/.ansible/tmp/ansible-tmp-1760002119.2500415-648-22488250378680/.source _original_basename=.o9ikj2vc follow=False checksum=f6278a40de79a9841f6ed1fc584538225566990c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 05:28:40 np0005478303 python3.9[53543]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/os-net-config state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 05:28:41 np0005478303 python3.9[53695]: ansible-edpm_os_net_config_mappings Invoked with net_config_data_lookup={}
Oct  9 05:28:41 np0005478303 python3.9[53847]: ansible-ansible.builtin.file Invoked with path=/var/lib/edpm-config/scripts state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 05:28:43 np0005478303 python3.9[54274]: ansible-ansible.builtin.slurp Invoked with path=/etc/os-net-config/config.yaml src=/etc/os-net-config/config.yaml
Oct  9 05:28:44 np0005478303 ansible-async_wrapper.py[54449]: Invoked with j113311014090 300 /home/zuul/.ansible/tmp/ansible-tmp-1760002123.5835528-846-167239672596842/AnsiballZ_edpm_os_net_config.py _
Oct  9 05:28:44 np0005478303 ansible-async_wrapper.py[54452]: Starting module and watcher
Oct  9 05:28:44 np0005478303 ansible-async_wrapper.py[54452]: Start watching 54453 (300)
Oct  9 05:28:44 np0005478303 ansible-async_wrapper.py[54453]: Start module (54453)
Oct  9 05:28:44 np0005478303 ansible-async_wrapper.py[54449]: Return async_wrapper task started.
Oct  9 05:28:44 np0005478303 python3.9[54454]: ansible-edpm_os_net_config Invoked with cleanup=True config_file=/etc/os-net-config/config.yaml debug=True detailed_exit_codes=True safe_defaults=False use_nmstate=True
Oct  9 05:28:44 np0005478303 kernel: cfg80211: Loading compiled-in X.509 certificates for regulatory database
Oct  9 05:28:44 np0005478303 kernel: Loaded X.509 cert 'sforshee: 00b28ddf47aef9cea7'
Oct  9 05:28:44 np0005478303 kernel: Loaded X.509 cert 'wens: 61c038651aabdcf94bd0ac7ff06c7248db18c600'
Oct  9 05:28:44 np0005478303 kernel: platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
Oct  9 05:28:44 np0005478303 kernel: cfg80211: failed to load regulatory.db
Oct  9 05:28:45 np0005478303 NetworkManager[51670]: <info>  [1760002125.6640] audit: op="checkpoint-create" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=54455 uid=0 result="success"
Oct  9 05:28:45 np0005478303 NetworkManager[51670]: <info>  [1760002125.6659] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=54455 uid=0 result="success"
Oct  9 05:28:45 np0005478303 NetworkManager[51670]: <info>  [1760002125.7024] manager: (br-ex): new Open vSwitch Bridge device (/org/freedesktop/NetworkManager/Devices/4)
Oct  9 05:28:45 np0005478303 NetworkManager[51670]: <info>  [1760002125.7026] audit: op="connection-add" uuid="4c59ca8f-34eb-40ef-9b98-d07f30800afa" name="br-ex-br" pid=54455 uid=0 result="success"
Oct  9 05:28:45 np0005478303 NetworkManager[51670]: <info>  [1760002125.7037] manager: (br-ex): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/5)
Oct  9 05:28:45 np0005478303 NetworkManager[51670]: <info>  [1760002125.7038] audit: op="connection-add" uuid="375645b6-33fc-4c37-833a-d9e158ec94ba" name="br-ex-port" pid=54455 uid=0 result="success"
Oct  9 05:28:45 np0005478303 NetworkManager[51670]: <info>  [1760002125.7046] manager: (eth1): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/6)
Oct  9 05:28:45 np0005478303 NetworkManager[51670]: <info>  [1760002125.7048] audit: op="connection-add" uuid="14a02052-08d6-45e5-a948-6208b3559c65" name="eth1-port" pid=54455 uid=0 result="success"
Oct  9 05:28:45 np0005478303 NetworkManager[51670]: <info>  [1760002125.7057] manager: (vlan20): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/7)
Oct  9 05:28:45 np0005478303 NetworkManager[51670]: <info>  [1760002125.7058] audit: op="connection-add" uuid="b852a895-766c-43f9-a4ca-5df9c8f35de0" name="vlan20-port" pid=54455 uid=0 result="success"
Oct  9 05:28:45 np0005478303 NetworkManager[51670]: <info>  [1760002125.7066] manager: (vlan21): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/8)
Oct  9 05:28:45 np0005478303 NetworkManager[51670]: <info>  [1760002125.7067] audit: op="connection-add" uuid="b752045b-4ebf-405c-afa8-7f17ffa854e4" name="vlan21-port" pid=54455 uid=0 result="success"
Oct  9 05:28:45 np0005478303 NetworkManager[51670]: <info>  [1760002125.7075] manager: (vlan22): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/9)
Oct  9 05:28:45 np0005478303 NetworkManager[51670]: <info>  [1760002125.7076] audit: op="connection-add" uuid="bb0b9b67-5f18-4a32-a914-7d219a79010b" name="vlan22-port" pid=54455 uid=0 result="success"
Oct  9 05:28:45 np0005478303 NetworkManager[51670]: <info>  [1760002125.7084] manager: (vlan23): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/10)
Oct  9 05:28:45 np0005478303 NetworkManager[51670]: <info>  [1760002125.7085] audit: op="connection-add" uuid="9f7f603c-9def-45b8-9780-a4b31ecf01c3" name="vlan23-port" pid=54455 uid=0 result="success"
Oct  9 05:28:45 np0005478303 NetworkManager[51670]: <info>  [1760002125.7099] audit: op="connection-update" uuid="5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03" name="System eth0" args="ipv6.routes,ipv6.method,ipv6.may-fail,ipv6.dhcp-timeout,ipv6.addr-gen-mode,connection.autoconnect-priority,connection.timestamp,ipv4.dhcp-client-id,ipv4.dhcp-timeout,802-3-ethernet.mtu" pid=54455 uid=0 result="success"
Oct  9 05:28:45 np0005478303 NetworkManager[51670]: <info>  [1760002125.7112] manager: (br-ex): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/11)
Oct  9 05:28:45 np0005478303 NetworkManager[51670]: <info>  [1760002125.7113] audit: op="connection-add" uuid="01e3ccc4-9f50-4868-b5b2-19c55181f7c5" name="br-ex-if" pid=54455 uid=0 result="success"
Oct  9 05:28:45 np0005478303 NetworkManager[51670]: <info>  [1760002125.7133] audit: op="connection-update" uuid="66d16662-8a58-5f35-9b69-4caa739b599b" name="ci-private-network" args="ipv6.routes,ipv6.addresses,ipv6.dns,ipv6.method,ipv6.routing-rules,ipv6.addr-gen-mode,ovs-interface.type,connection.slave-type,connection.controller,connection.master,connection.port-type,connection.timestamp,ipv4.never-default,ipv4.addresses,ipv4.dns,ipv4.method,ipv4.routing-rules,ipv4.routes,ovs-external-ids.data" pid=54455 uid=0 result="success"
Oct  9 05:28:45 np0005478303 NetworkManager[51670]: <info>  [1760002125.7144] manager: (vlan20): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/12)
Oct  9 05:28:45 np0005478303 NetworkManager[51670]: <info>  [1760002125.7145] audit: op="connection-add" uuid="b23c7af3-65be-42e5-ab4c-395931000901" name="vlan20-if" pid=54455 uid=0 result="success"
Oct  9 05:28:45 np0005478303 NetworkManager[51670]: <info>  [1760002125.7156] manager: (vlan21): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/13)
Oct  9 05:28:45 np0005478303 NetworkManager[51670]: <info>  [1760002125.7157] audit: op="connection-add" uuid="d6b7cede-f2f7-4e62-ad08-de1c498575f7" name="vlan21-if" pid=54455 uid=0 result="success"
Oct  9 05:28:45 np0005478303 NetworkManager[51670]: <info>  [1760002125.7169] manager: (vlan22): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/14)
Oct  9 05:28:45 np0005478303 NetworkManager[51670]: <info>  [1760002125.7170] audit: op="connection-add" uuid="fd6ff51b-7be2-4aef-b30a-8784503157ec" name="vlan22-if" pid=54455 uid=0 result="success"
Oct  9 05:28:45 np0005478303 NetworkManager[51670]: <info>  [1760002125.7183] manager: (vlan23): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/15)
Oct  9 05:28:45 np0005478303 NetworkManager[51670]: <info>  [1760002125.7185] audit: op="connection-add" uuid="7dd63e55-1e2e-4430-8cd7-274623908d35" name="vlan23-if" pid=54455 uid=0 result="success"
Oct  9 05:28:45 np0005478303 NetworkManager[51670]: <info>  [1760002125.7193] audit: op="connection-delete" uuid="168464f5-1301-3221-87ae-be62d8e7a219" name="Wired connection 1" pid=54455 uid=0 result="success"
Oct  9 05:28:45 np0005478303 NetworkManager[51670]: <info>  [1760002125.7202] device (br-ex)[Open vSwitch Bridge]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct  9 05:28:45 np0005478303 NetworkManager[51670]: <info>  [1760002125.7209] device (br-ex)[Open vSwitch Bridge]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Oct  9 05:28:45 np0005478303 NetworkManager[51670]: <info>  [1760002125.7212] device (br-ex)[Open vSwitch Bridge]: Activation: starting connection 'br-ex-br' (4c59ca8f-34eb-40ef-9b98-d07f30800afa)
Oct  9 05:28:45 np0005478303 NetworkManager[51670]: <info>  [1760002125.7213] audit: op="connection-activate" uuid="4c59ca8f-34eb-40ef-9b98-d07f30800afa" name="br-ex-br" pid=54455 uid=0 result="success"
Oct  9 05:28:45 np0005478303 NetworkManager[51670]: <info>  [1760002125.7214] device (br-ex)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct  9 05:28:45 np0005478303 NetworkManager[51670]: <info>  [1760002125.7219] device (br-ex)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Oct  9 05:28:45 np0005478303 NetworkManager[51670]: <info>  [1760002125.7222] device (br-ex)[Open vSwitch Port]: Activation: starting connection 'br-ex-port' (375645b6-33fc-4c37-833a-d9e158ec94ba)
Oct  9 05:28:45 np0005478303 NetworkManager[51670]: <info>  [1760002125.7223] device (eth1)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct  9 05:28:45 np0005478303 NetworkManager[51670]: <info>  [1760002125.7228] device (eth1)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Oct  9 05:28:45 np0005478303 NetworkManager[51670]: <info>  [1760002125.7231] device (eth1)[Open vSwitch Port]: Activation: starting connection 'eth1-port' (14a02052-08d6-45e5-a948-6208b3559c65)
Oct  9 05:28:45 np0005478303 NetworkManager[51670]: <info>  [1760002125.7232] device (vlan20)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct  9 05:28:45 np0005478303 NetworkManager[51670]: <info>  [1760002125.7236] device (vlan20)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Oct  9 05:28:45 np0005478303 NetworkManager[51670]: <info>  [1760002125.7239] device (vlan20)[Open vSwitch Port]: Activation: starting connection 'vlan20-port' (b852a895-766c-43f9-a4ca-5df9c8f35de0)
Oct  9 05:28:45 np0005478303 NetworkManager[51670]: <info>  [1760002125.7241] device (vlan21)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct  9 05:28:45 np0005478303 NetworkManager[51670]: <info>  [1760002125.7245] device (vlan21)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Oct  9 05:28:45 np0005478303 NetworkManager[51670]: <info>  [1760002125.7249] device (vlan21)[Open vSwitch Port]: Activation: starting connection 'vlan21-port' (b752045b-4ebf-405c-afa8-7f17ffa854e4)
Oct  9 05:28:45 np0005478303 NetworkManager[51670]: <info>  [1760002125.7250] device (vlan22)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct  9 05:28:45 np0005478303 NetworkManager[51670]: <info>  [1760002125.7255] device (vlan22)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Oct  9 05:28:45 np0005478303 NetworkManager[51670]: <info>  [1760002125.7258] device (vlan22)[Open vSwitch Port]: Activation: starting connection 'vlan22-port' (bb0b9b67-5f18-4a32-a914-7d219a79010b)
Oct  9 05:28:45 np0005478303 NetworkManager[51670]: <info>  [1760002125.7259] device (vlan23)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct  9 05:28:45 np0005478303 NetworkManager[51670]: <info>  [1760002125.7263] device (vlan23)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Oct  9 05:28:45 np0005478303 NetworkManager[51670]: <info>  [1760002125.7266] device (vlan23)[Open vSwitch Port]: Activation: starting connection 'vlan23-port' (9f7f603c-9def-45b8-9780-a4b31ecf01c3)
Oct  9 05:28:45 np0005478303 NetworkManager[51670]: <info>  [1760002125.7267] device (br-ex)[Open vSwitch Bridge]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct  9 05:28:45 np0005478303 NetworkManager[51670]: <info>  [1760002125.7269] device (br-ex)[Open vSwitch Bridge]: state change: prepare -> config (reason 'none', managed-type: 'full')
Oct  9 05:28:45 np0005478303 NetworkManager[51670]: <info>  [1760002125.7271] device (br-ex)[Open vSwitch Bridge]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct  9 05:28:45 np0005478303 NetworkManager[51670]: <info>  [1760002125.7275] device (br-ex)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct  9 05:28:45 np0005478303 NetworkManager[51670]: <info>  [1760002125.7279] device (br-ex)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Oct  9 05:28:45 np0005478303 NetworkManager[51670]: <info>  [1760002125.7281] device (br-ex)[Open vSwitch Interface]: Activation: starting connection 'br-ex-if' (01e3ccc4-9f50-4868-b5b2-19c55181f7c5)
Oct  9 05:28:45 np0005478303 NetworkManager[51670]: <info>  [1760002125.7282] device (br-ex)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct  9 05:28:45 np0005478303 NetworkManager[51670]: <info>  [1760002125.7285] device (br-ex)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Oct  9 05:28:45 np0005478303 NetworkManager[51670]: <info>  [1760002125.7286] device (br-ex)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct  9 05:28:45 np0005478303 NetworkManager[51670]: <info>  [1760002125.7288] device (br-ex)[Open vSwitch Port]: Activation: connection 'br-ex-port' attached as port, continuing activation
Oct  9 05:28:45 np0005478303 NetworkManager[51670]: <info>  [1760002125.7289] device (eth1): state change: activated -> deactivating (reason 'new-activation', managed-type: 'full')
Oct  9 05:28:45 np0005478303 NetworkManager[51670]: <info>  [1760002125.7296] device (eth1): disconnecting for new activation request.
Oct  9 05:28:45 np0005478303 NetworkManager[51670]: <info>  [1760002125.7297] device (eth1)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct  9 05:28:45 np0005478303 NetworkManager[51670]: <info>  [1760002125.7299] device (eth1)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Oct  9 05:28:45 np0005478303 NetworkManager[51670]: <info>  [1760002125.7300] device (eth1)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct  9 05:28:45 np0005478303 NetworkManager[51670]: <info>  [1760002125.7301] device (eth1)[Open vSwitch Port]: Activation: connection 'eth1-port' attached as port, continuing activation
Oct  9 05:28:45 np0005478303 NetworkManager[51670]: <info>  [1760002125.7303] device (vlan20)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct  9 05:28:45 np0005478303 NetworkManager[51670]: <info>  [1760002125.7307] device (vlan20)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Oct  9 05:28:45 np0005478303 NetworkManager[51670]: <info>  [1760002125.7310] device (vlan20)[Open vSwitch Interface]: Activation: starting connection 'vlan20-if' (b23c7af3-65be-42e5-ab4c-395931000901)
Oct  9 05:28:45 np0005478303 NetworkManager[51670]: <info>  [1760002125.7311] device (vlan20)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct  9 05:28:45 np0005478303 NetworkManager[51670]: <info>  [1760002125.7313] device (vlan20)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Oct  9 05:28:45 np0005478303 NetworkManager[51670]: <info>  [1760002125.7314] device (vlan20)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct  9 05:28:45 np0005478303 NetworkManager[51670]: <info>  [1760002125.7315] device (vlan20)[Open vSwitch Port]: Activation: connection 'vlan20-port' attached as port, continuing activation
Oct  9 05:28:45 np0005478303 NetworkManager[51670]: <info>  [1760002125.7318] device (vlan21)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct  9 05:28:45 np0005478303 NetworkManager[51670]: <info>  [1760002125.7322] device (vlan21)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Oct  9 05:28:45 np0005478303 NetworkManager[51670]: <info>  [1760002125.7326] device (vlan21)[Open vSwitch Interface]: Activation: starting connection 'vlan21-if' (d6b7cede-f2f7-4e62-ad08-de1c498575f7)
Oct  9 05:28:45 np0005478303 NetworkManager[51670]: <info>  [1760002125.7327] device (vlan21)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct  9 05:28:45 np0005478303 NetworkManager[51670]: <info>  [1760002125.7329] device (vlan21)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Oct  9 05:28:45 np0005478303 NetworkManager[51670]: <info>  [1760002125.7331] device (vlan21)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct  9 05:28:45 np0005478303 NetworkManager[51670]: <info>  [1760002125.7332] device (vlan21)[Open vSwitch Port]: Activation: connection 'vlan21-port' attached as port, continuing activation
Oct  9 05:28:45 np0005478303 NetworkManager[51670]: <info>  [1760002125.7334] device (vlan22)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct  9 05:28:45 np0005478303 NetworkManager[51670]: <info>  [1760002125.7339] device (vlan22)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Oct  9 05:28:45 np0005478303 NetworkManager[51670]: <info>  [1760002125.7342] device (vlan22)[Open vSwitch Interface]: Activation: starting connection 'vlan22-if' (fd6ff51b-7be2-4aef-b30a-8784503157ec)
Oct  9 05:28:45 np0005478303 NetworkManager[51670]: <info>  [1760002125.7344] device (vlan22)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct  9 05:28:45 np0005478303 NetworkManager[51670]: <info>  [1760002125.7346] device (vlan22)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Oct  9 05:28:45 np0005478303 NetworkManager[51670]: <info>  [1760002125.7348] device (vlan22)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct  9 05:28:45 np0005478303 NetworkManager[51670]: <info>  [1760002125.7350] device (vlan22)[Open vSwitch Port]: Activation: connection 'vlan22-port' attached as port, continuing activation
Oct  9 05:28:45 np0005478303 NetworkManager[51670]: <info>  [1760002125.7352] device (vlan23)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct  9 05:28:45 np0005478303 NetworkManager[51670]: <info>  [1760002125.7356] device (vlan23)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Oct  9 05:28:45 np0005478303 NetworkManager[51670]: <info>  [1760002125.7359] device (vlan23)[Open vSwitch Interface]: Activation: starting connection 'vlan23-if' (7dd63e55-1e2e-4430-8cd7-274623908d35)
Oct  9 05:28:45 np0005478303 NetworkManager[51670]: <info>  [1760002125.7360] device (vlan23)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct  9 05:28:45 np0005478303 NetworkManager[51670]: <info>  [1760002125.7363] device (vlan23)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Oct  9 05:28:45 np0005478303 NetworkManager[51670]: <info>  [1760002125.7365] device (vlan23)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct  9 05:28:45 np0005478303 NetworkManager[51670]: <info>  [1760002125.7366] device (vlan23)[Open vSwitch Port]: Activation: connection 'vlan23-port' attached as port, continuing activation
Oct  9 05:28:45 np0005478303 NetworkManager[51670]: <info>  [1760002125.7368] device (br-ex)[Open vSwitch Bridge]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct  9 05:28:45 np0005478303 NetworkManager[51670]: <info>  [1760002125.7377] audit: op="device-reapply" interface="eth0" ifindex=2 args="ipv6.routes,ipv6.method,ipv6.may-fail,ipv6.addr-gen-mode,connection.autoconnect-priority,ipv4.dhcp-client-id,ipv4.dhcp-timeout,802-3-ethernet.mtu" pid=54455 uid=0 result="success"
Oct  9 05:28:45 np0005478303 NetworkManager[51670]: <info>  [1760002125.7379] device (br-ex)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct  9 05:28:45 np0005478303 NetworkManager[51670]: <info>  [1760002125.7382] device (br-ex)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Oct  9 05:28:45 np0005478303 NetworkManager[51670]: <info>  [1760002125.7384] device (br-ex)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct  9 05:28:45 np0005478303 kernel: ovs-system: entered promiscuous mode
Oct  9 05:28:45 np0005478303 NetworkManager[51670]: <info>  [1760002125.7399] device (br-ex)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct  9 05:28:45 np0005478303 NetworkManager[51670]: <info>  [1760002125.7402] device (eth1)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct  9 05:28:45 np0005478303 NetworkManager[51670]: <info>  [1760002125.7404] device (vlan20)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct  9 05:28:45 np0005478303 NetworkManager[51670]: <info>  [1760002125.7406] device (vlan20)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Oct  9 05:28:45 np0005478303 NetworkManager[51670]: <info>  [1760002125.7407] device (vlan20)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct  9 05:28:45 np0005478303 NetworkManager[51670]: <info>  [1760002125.7411] device (vlan20)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct  9 05:28:45 np0005478303 NetworkManager[51670]: <info>  [1760002125.7413] device (vlan21)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct  9 05:28:45 np0005478303 NetworkManager[51670]: <info>  [1760002125.7415] device (vlan21)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Oct  9 05:28:45 np0005478303 kernel: Timeout policy base is empty
Oct  9 05:28:45 np0005478303 NetworkManager[51670]: <info>  [1760002125.7416] device (vlan21)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct  9 05:28:45 np0005478303 NetworkManager[51670]: <info>  [1760002125.7418] device (vlan21)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct  9 05:28:45 np0005478303 NetworkManager[51670]: <info>  [1760002125.7420] device (vlan22)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct  9 05:28:45 np0005478303 NetworkManager[51670]: <info>  [1760002125.7422] device (vlan22)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Oct  9 05:28:45 np0005478303 NetworkManager[51670]: <info>  [1760002125.7423] device (vlan22)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct  9 05:28:45 np0005478303 NetworkManager[51670]: <info>  [1760002125.7426] device (vlan22)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct  9 05:28:45 np0005478303 systemd-udevd[54459]: Network interface NamePolicy= disabled on kernel command line.
Oct  9 05:28:45 np0005478303 NetworkManager[51670]: <info>  [1760002125.7428] device (vlan23)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct  9 05:28:45 np0005478303 NetworkManager[51670]: <info>  [1760002125.7431] device (vlan23)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Oct  9 05:28:45 np0005478303 NetworkManager[51670]: <info>  [1760002125.7432] device (vlan23)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct  9 05:28:45 np0005478303 NetworkManager[51670]: <info>  [1760002125.7435] device (vlan23)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct  9 05:28:45 np0005478303 NetworkManager[51670]: <info>  [1760002125.7437] dhcp4 (eth0): canceled DHCP transaction
Oct  9 05:28:45 np0005478303 NetworkManager[51670]: <info>  [1760002125.7437] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Oct  9 05:28:45 np0005478303 NetworkManager[51670]: <info>  [1760002125.7437] dhcp4 (eth0): state changed no lease
Oct  9 05:28:45 np0005478303 NetworkManager[51670]: <info>  [1760002125.7437] dhcp6 (eth0): canceled DHCP transaction
Oct  9 05:28:45 np0005478303 NetworkManager[51670]: <info>  [1760002125.7437] dhcp6 (eth0): activation: beginning transaction (timeout in 45 seconds)
Oct  9 05:28:45 np0005478303 NetworkManager[51670]: <info>  [1760002125.7437] dhcp6 (eth0): state changed no lease
Oct  9 05:28:45 np0005478303 NetworkManager[51670]: <info>  [1760002125.7441] dhcp4 (eth0): activation: beginning transaction (no timeout)
Oct  9 05:28:45 np0005478303 systemd[1]: Starting Network Manager Script Dispatcher Service...
Oct  9 05:28:45 np0005478303 NetworkManager[51670]: <info>  [1760002125.7520] device (br-ex)[Open vSwitch Interface]: Activation: connection 'br-ex-if' attached as port, continuing activation
Oct  9 05:28:45 np0005478303 NetworkManager[51670]: <info>  [1760002125.7523] audit: op="device-reapply" interface="eth1" ifindex=3 pid=54455 uid=0 result="fail" reason="Device is not activated"
Oct  9 05:28:45 np0005478303 NetworkManager[51670]: <info>  [1760002125.7530] device (vlan20)[Open vSwitch Interface]: Activation: connection 'vlan20-if' attached as port, continuing activation
Oct  9 05:28:45 np0005478303 systemd[1]: Started Network Manager Script Dispatcher Service.
Oct  9 05:28:45 np0005478303 NetworkManager[51670]: <info>  [1760002125.7551] device (vlan21)[Open vSwitch Interface]: Activation: connection 'vlan21-if' attached as port, continuing activation
Oct  9 05:28:45 np0005478303 NetworkManager[51670]: <info>  [1760002125.7553] dhcp4 (eth0): state changed new lease, address=192.168.26.45
Oct  9 05:28:45 np0005478303 NetworkManager[51670]: <info>  [1760002125.7590] device (vlan22)[Open vSwitch Interface]: Activation: connection 'vlan22-if' attached as port, continuing activation
Oct  9 05:28:45 np0005478303 NetworkManager[51670]: <info>  [1760002125.7596] device (vlan23)[Open vSwitch Interface]: Activation: connection 'vlan23-if' attached as port, continuing activation
Oct  9 05:28:45 np0005478303 NetworkManager[51670]: <info>  [1760002125.7598] device (eth1): state change: deactivating -> disconnected (reason 'new-activation', managed-type: 'full')
Oct  9 05:28:45 np0005478303 kernel: br-ex: entered promiscuous mode
Oct  9 05:28:45 np0005478303 NetworkManager[51670]: <info>  [1760002125.7660] device (eth1): Activation: starting connection 'ci-private-network' (66d16662-8a58-5f35-9b69-4caa739b599b)
Oct  9 05:28:45 np0005478303 NetworkManager[51670]: <info>  [1760002125.7662] device (br-ex)[Open vSwitch Bridge]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct  9 05:28:45 np0005478303 NetworkManager[51670]: <info>  [1760002125.7663] device (br-ex)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct  9 05:28:45 np0005478303 NetworkManager[51670]: <info>  [1760002125.7663] device (eth1)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct  9 05:28:45 np0005478303 NetworkManager[51670]: <info>  [1760002125.7664] device (vlan20)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct  9 05:28:45 np0005478303 NetworkManager[51670]: <info>  [1760002125.7665] device (vlan21)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct  9 05:28:45 np0005478303 NetworkManager[51670]: <info>  [1760002125.7665] device (vlan22)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct  9 05:28:45 np0005478303 NetworkManager[51670]: <info>  [1760002125.7666] device (vlan23)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct  9 05:28:45 np0005478303 NetworkManager[51670]: <info>  [1760002125.7667] device (eth1): state change: disconnected -> deactivating (reason 'new-activation', managed-type: 'full')
Oct  9 05:28:45 np0005478303 NetworkManager[51670]: <info>  [1760002125.7670] device (eth1): disconnecting for new activation request.
Oct  9 05:28:45 np0005478303 NetworkManager[51670]: <info>  [1760002125.7671] audit: op="connection-activate" uuid="66d16662-8a58-5f35-9b69-4caa739b599b" name="ci-private-network" pid=54455 uid=0 result="success"
Oct  9 05:28:45 np0005478303 NetworkManager[51670]: <info>  [1760002125.7673] device (br-ex)[Open vSwitch Bridge]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct  9 05:28:45 np0005478303 NetworkManager[51670]: <info>  [1760002125.7675] device (br-ex)[Open vSwitch Bridge]: Activation: successful, device activated.
Oct  9 05:28:45 np0005478303 NetworkManager[51670]: <info>  [1760002125.7678] device (br-ex)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct  9 05:28:45 np0005478303 NetworkManager[51670]: <info>  [1760002125.7680] device (br-ex)[Open vSwitch Port]: Activation: successful, device activated.
Oct  9 05:28:45 np0005478303 NetworkManager[51670]: <info>  [1760002125.7683] device (eth1)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct  9 05:28:45 np0005478303 NetworkManager[51670]: <info>  [1760002125.7685] device (eth1)[Open vSwitch Port]: Activation: successful, device activated.
Oct  9 05:28:45 np0005478303 NetworkManager[51670]: <info>  [1760002125.7687] device (vlan20)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct  9 05:28:45 np0005478303 NetworkManager[51670]: <info>  [1760002125.7689] device (vlan20)[Open vSwitch Port]: Activation: successful, device activated.
Oct  9 05:28:45 np0005478303 NetworkManager[51670]: <info>  [1760002125.7691] device (vlan21)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct  9 05:28:45 np0005478303 NetworkManager[51670]: <info>  [1760002125.7693] device (vlan21)[Open vSwitch Port]: Activation: successful, device activated.
Oct  9 05:28:45 np0005478303 kernel: vlan22: entered promiscuous mode
Oct  9 05:28:45 np0005478303 NetworkManager[51670]: <info>  [1760002125.7697] device (vlan22)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct  9 05:28:45 np0005478303 NetworkManager[51670]: <info>  [1760002125.7700] device (vlan22)[Open vSwitch Port]: Activation: successful, device activated.
Oct  9 05:28:45 np0005478303 NetworkManager[51670]: <info>  [1760002125.7708] device (vlan23)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct  9 05:28:45 np0005478303 NetworkManager[51670]: <info>  [1760002125.7712] device (vlan23)[Open vSwitch Port]: Activation: successful, device activated.
Oct  9 05:28:45 np0005478303 systemd-udevd[54461]: Network interface NamePolicy= disabled on kernel command line.
Oct  9 05:28:45 np0005478303 NetworkManager[51670]: <info>  [1760002125.7750] device (eth1): state change: deactivating -> disconnected (reason 'new-activation', managed-type: 'full')
Oct  9 05:28:45 np0005478303 NetworkManager[51670]: <info>  [1760002125.7754] device (eth1): Activation: starting connection 'ci-private-network' (66d16662-8a58-5f35-9b69-4caa739b599b)
Oct  9 05:28:45 np0005478303 NetworkManager[51670]: <info>  [1760002125.7756] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=54455 uid=0 result="success"
Oct  9 05:28:45 np0005478303 NetworkManager[51670]: <info>  [1760002125.7758] device (br-ex)[Open vSwitch Interface]: carrier: link connected
Oct  9 05:28:45 np0005478303 NetworkManager[51670]: <info>  [1760002125.7770] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct  9 05:28:45 np0005478303 NetworkManager[51670]: <info>  [1760002125.7771] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'full')
Oct  9 05:28:45 np0005478303 NetworkManager[51670]: <info>  [1760002125.7777] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct  9 05:28:45 np0005478303 NetworkManager[51670]: <info>  [1760002125.7785] device (br-ex)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct  9 05:28:45 np0005478303 kernel: vlan20: entered promiscuous mode
Oct  9 05:28:45 np0005478303 NetworkManager[51670]: <info>  [1760002125.7811] device (eth1): Activation: connection 'ci-private-network' attached as port, continuing activation
Oct  9 05:28:45 np0005478303 NetworkManager[51670]: <info>  [1760002125.7831] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct  9 05:28:45 np0005478303 NetworkManager[51670]: <info>  [1760002125.7837] device (vlan22)[Open vSwitch Interface]: carrier: link connected
Oct  9 05:28:45 np0005478303 NetworkManager[51670]: <info>  [1760002125.7845] device (vlan22)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct  9 05:28:45 np0005478303 NetworkManager[51670]: <info>  [1760002125.7870] device (br-ex)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct  9 05:28:45 np0005478303 NetworkManager[51670]: <info>  [1760002125.7871] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct  9 05:28:45 np0005478303 NetworkManager[51670]: <info>  [1760002125.7873] device (br-ex)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct  9 05:28:45 np0005478303 NetworkManager[51670]: <info>  [1760002125.7876] device (br-ex)[Open vSwitch Interface]: Activation: successful, device activated.
Oct  9 05:28:45 np0005478303 NetworkManager[51670]: <info>  [1760002125.7881] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct  9 05:28:45 np0005478303 NetworkManager[51670]: <info>  [1760002125.7884] device (eth1): Activation: successful, device activated.
Oct  9 05:28:45 np0005478303 NetworkManager[51670]: <info>  [1760002125.7899] device (vlan22)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct  9 05:28:45 np0005478303 NetworkManager[51670]: <info>  [1760002125.7900] device (vlan22)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct  9 05:28:45 np0005478303 NetworkManager[51670]: <info>  [1760002125.7905] device (vlan22)[Open vSwitch Interface]: Activation: successful, device activated.
Oct  9 05:28:45 np0005478303 kernel: vlan21: entered promiscuous mode
Oct  9 05:28:45 np0005478303 NetworkManager[51670]: <info>  [1760002125.7952] device (vlan20)[Open vSwitch Interface]: carrier: link connected
Oct  9 05:28:45 np0005478303 NetworkManager[51670]: <info>  [1760002125.7958] device (vlan20)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct  9 05:28:45 np0005478303 kernel: vlan23: entered promiscuous mode
Oct  9 05:28:45 np0005478303 NetworkManager[51670]: <info>  [1760002125.7970] device (vlan20)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct  9 05:28:45 np0005478303 NetworkManager[51670]: <info>  [1760002125.7971] device (vlan20)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct  9 05:28:45 np0005478303 NetworkManager[51670]: <info>  [1760002125.7974] device (vlan20)[Open vSwitch Interface]: Activation: successful, device activated.
Oct  9 05:28:45 np0005478303 NetworkManager[51670]: <info>  [1760002125.8005] device (vlan21)[Open vSwitch Interface]: carrier: link connected
Oct  9 05:28:45 np0005478303 kernel: virtio_net virtio5 eth1: entered promiscuous mode
Oct  9 05:28:45 np0005478303 NetworkManager[51670]: <info>  [1760002125.8021] device (vlan21)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct  9 05:28:45 np0005478303 NetworkManager[51670]: <info>  [1760002125.8058] device (vlan21)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct  9 05:28:45 np0005478303 NetworkManager[51670]: <info>  [1760002125.8063] device (vlan21)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct  9 05:28:45 np0005478303 NetworkManager[51670]: <info>  [1760002125.8072] device (vlan21)[Open vSwitch Interface]: Activation: successful, device activated.
Oct  9 05:28:45 np0005478303 NetworkManager[51670]: <info>  [1760002125.8116] device (vlan23)[Open vSwitch Interface]: carrier: link connected
Oct  9 05:28:45 np0005478303 NetworkManager[51670]: <info>  [1760002125.8131] device (vlan23)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct  9 05:28:45 np0005478303 NetworkManager[51670]: <info>  [1760002125.8147] device (vlan23)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct  9 05:28:45 np0005478303 NetworkManager[51670]: <info>  [1760002125.8150] device (vlan23)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct  9 05:28:45 np0005478303 NetworkManager[51670]: <info>  [1760002125.8158] device (vlan23)[Open vSwitch Interface]: Activation: successful, device activated.
Oct  9 05:28:46 np0005478303 NetworkManager[51670]: <info>  [1760002126.9166] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=54455 uid=0 result="success"
Oct  9 05:28:47 np0005478303 NetworkManager[51670]: <info>  [1760002127.0387] checkpoint[0x5617b601b950]: destroy /org/freedesktop/NetworkManager/Checkpoint/1
Oct  9 05:28:47 np0005478303 NetworkManager[51670]: <info>  [1760002127.0389] audit: op="checkpoint-destroy" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=54455 uid=0 result="success"
Oct  9 05:28:47 np0005478303 NetworkManager[51670]: <info>  [1760002127.1618] audit: op="checkpoint-create" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=54455 uid=0 result="success"
Oct  9 05:28:47 np0005478303 NetworkManager[51670]: <info>  [1760002127.1629] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=54455 uid=0 result="success"
Oct  9 05:28:47 np0005478303 NetworkManager[51670]: <info>  [1760002127.3274] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=54455 uid=0 result="success"
Oct  9 05:28:47 np0005478303 NetworkManager[51670]: <info>  [1760002127.4410] checkpoint[0x5617b601ba20]: destroy /org/freedesktop/NetworkManager/Checkpoint/2
Oct  9 05:28:47 np0005478303 NetworkManager[51670]: <info>  [1760002127.4415] audit: op="checkpoint-destroy" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=54455 uid=0 result="success"
Oct  9 05:28:47 np0005478303 NetworkManager[51670]: <info>  [1760002127.6611] audit: op="checkpoint-create" arg="/org/freedesktop/NetworkManager/Checkpoint/3" pid=54455 uid=0 result="success"
Oct  9 05:28:47 np0005478303 NetworkManager[51670]: <info>  [1760002127.6622] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/3" pid=54455 uid=0 result="success"
Oct  9 05:28:47 np0005478303 python3.9[54815]: ansible-ansible.legacy.async_status Invoked with jid=j113311014090.54449 mode=status _async_dir=/root/.ansible_async
Oct  9 05:28:47 np0005478303 NetworkManager[51670]: <info>  [1760002127.8152] audit: op="networking-control" arg="global-dns-configuration" pid=54455 uid=0 result="success"
Oct  9 05:28:47 np0005478303 NetworkManager[51670]: <info>  [1760002127.8164] config: signal: SET_VALUES,values,values-intern,global-dns-config (/etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf, /etc/NetworkManager/conf.d/99-cloud-init.conf)
Oct  9 05:28:47 np0005478303 NetworkManager[51670]: <info>  [1760002127.8169] audit: op="networking-control" arg="global-dns-configuration" pid=54455 uid=0 result="success"
Oct  9 05:28:47 np0005478303 NetworkManager[51670]: <info>  [1760002127.8200] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/3" pid=54455 uid=0 result="success"
Oct  9 05:28:47 np0005478303 NetworkManager[51670]: <info>  [1760002127.9366] checkpoint[0x5617b601baf0]: destroy /org/freedesktop/NetworkManager/Checkpoint/3
Oct  9 05:28:47 np0005478303 NetworkManager[51670]: <info>  [1760002127.9371] audit: op="checkpoint-destroy" arg="/org/freedesktop/NetworkManager/Checkpoint/3" pid=54455 uid=0 result="success"
Oct  9 05:28:47 np0005478303 ansible-async_wrapper.py[54453]: Module complete (54453)
Oct  9 05:28:49 np0005478303 ansible-async_wrapper.py[54452]: Done in kid B.
Oct  9 05:28:51 np0005478303 python3.9[54919]: ansible-ansible.legacy.async_status Invoked with jid=j113311014090.54449 mode=status _async_dir=/root/.ansible_async
Oct  9 05:28:51 np0005478303 python3.9[55019]: ansible-ansible.legacy.async_status Invoked with jid=j113311014090.54449 mode=cleanup _async_dir=/root/.ansible_async
Oct  9 05:28:52 np0005478303 python3.9[55171]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/os-net-config.returncode follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  9 05:28:52 np0005478303 python3.9[55294]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/os-net-config.returncode mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1760002131.718981-927-155029127568123/.source.returncode _original_basename=.fjkzvrx6 follow=False checksum=b6589fc6ab0dc82cf12099d1c2d40ab994e8410c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 05:28:52 np0005478303 python3.9[55446]: ansible-ansible.legacy.stat Invoked with path=/etc/cloud/cloud.cfg.d/99-edpm-disable-network-config.cfg follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  9 05:28:53 np0005478303 python3.9[55569]: ansible-ansible.legacy.copy Invoked with dest=/etc/cloud/cloud.cfg.d/99-edpm-disable-network-config.cfg mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1760002132.6500363-975-96852925318303/.source.cfg _original_basename=.exvtquan follow=False checksum=f3c5952a9cd4c6c31b314b25eb897168971cc86e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 05:28:53 np0005478303 python3.9[55721]: ansible-ansible.builtin.systemd Invoked with name=NetworkManager state=reloaded daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct  9 05:28:53 np0005478303 systemd[1]: Reloading Network Manager...
Oct  9 05:28:54 np0005478303 NetworkManager[51670]: <info>  [1760002134.0057] audit: op="reload" arg="0" pid=55725 uid=0 result="success"
Oct  9 05:28:54 np0005478303 NetworkManager[51670]: <info>  [1760002134.0062] config: signal: SIGHUP,config-files,values,values-user,no-auto-default,dns-mode (/etc/NetworkManager/NetworkManager.conf, /usr/lib/NetworkManager/conf.d/00-server.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf, /etc/NetworkManager/conf.d/99-cloud-init.conf, /var/lib/NetworkManager/NetworkManager-intern.conf)
Oct  9 05:28:54 np0005478303 NetworkManager[51670]: <info>  [1760002134.0063] dns-mgr: init: dns=default,systemd-resolved rc-manager=symlink (auto)
Oct  9 05:28:54 np0005478303 systemd[1]: Reloaded Network Manager.
Oct  9 05:28:54 np0005478303 systemd[1]: session-10.scope: Deactivated successfully.
Oct  9 05:28:54 np0005478303 systemd[1]: session-10.scope: Consumed 35.998s CPU time.
Oct  9 05:28:54 np0005478303 systemd-logind[745]: Session 10 logged out. Waiting for processes to exit.
Oct  9 05:28:54 np0005478303 systemd-logind[745]: Removed session 10.
Oct  9 05:28:57 np0005478303 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Oct  9 05:28:59 np0005478303 systemd-logind[745]: New session 11 of user zuul.
Oct  9 05:28:59 np0005478303 systemd[1]: Started Session 11 of User zuul.
Oct  9 05:29:00 np0005478303 python3.9[55911]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  9 05:29:00 np0005478303 python3.9[56065]: ansible-ansible.builtin.setup Invoked with filter=['ansible_default_ipv4'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct  9 05:29:01 np0005478303 python3.9[56259]: ansible-ansible.legacy.command Invoked with _raw_params=hostname -f _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  9 05:29:01 np0005478303 systemd[1]: session-11.scope: Deactivated successfully.
Oct  9 05:29:01 np0005478303 systemd[1]: session-11.scope: Consumed 1.626s CPU time.
Oct  9 05:29:01 np0005478303 systemd-logind[745]: Session 11 logged out. Waiting for processes to exit.
Oct  9 05:29:01 np0005478303 systemd-logind[745]: Removed session 11.
Oct  9 05:29:04 np0005478303 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Oct  9 05:29:06 np0005478303 systemd-logind[745]: New session 12 of user zuul.
Oct  9 05:29:06 np0005478303 systemd[1]: Started Session 12 of User zuul.
Oct  9 05:29:07 np0005478303 python3.9[56441]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  9 05:29:08 np0005478303 python3.9[56595]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  9 05:29:09 np0005478303 python3.9[56751]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct  9 05:29:09 np0005478303 python3.9[56835]: ansible-ansible.legacy.dnf Invoked with name=['podman'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct  9 05:29:11 np0005478303 python3.9[56989]: ansible-ansible.builtin.setup Invoked with filter=['ansible_interfaces'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct  9 05:29:12 np0005478303 python3.9[57184]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/containers/networks recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 05:29:12 np0005478303 python3.9[57336]: ansible-ansible.legacy.command Invoked with _raw_params=podman network inspect podman#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  9 05:29:12 np0005478303 systemd[1]: var-lib-containers-storage-overlay-compat1103098382-merged.mount: Deactivated successfully.
Oct  9 05:29:12 np0005478303 systemd[1]: var-lib-containers-storage-overlay-metacopy\x2dcheck2999359357-merged.mount: Deactivated successfully.
Oct  9 05:29:12 np0005478303 podman[57337]: 2025-10-09 09:29:12.835502627 +0000 UTC m=+0.025484245 system refresh
Oct  9 05:29:13 np0005478303 python3.9[57497]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/networks/podman.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  9 05:29:13 np0005478303 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Oct  9 05:29:13 np0005478303 python3.9[57621]: ansible-ansible.legacy.copy Invoked with dest=/etc/containers/networks/podman.json group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760002152.967703-198-156263569416248/.source.json follow=False _original_basename=podman_network_config.j2 checksum=838cf5dea57a33809087d2ce09f6c8f588dff60c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 05:29:14 np0005478303 python3.9[57773]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  9 05:29:14 np0005478303 python3.9[57896]: ansible-ansible.legacy.copy Invoked with dest=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1760002154.045531-243-169813980264622/.source.conf follow=False _original_basename=registries.conf.j2 checksum=804a0d01b832e60d20f779a331306df708c87b02 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Oct  9 05:29:15 np0005478303 python3.9[58048]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=pids_limit owner=root path=/etc/containers/containers.conf section=containers setype=etc_t value=4096 backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Oct  9 05:29:15 np0005478303 python3.9[58200]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=events_logger owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="journald" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Oct  9 05:29:16 np0005478303 python3.9[58352]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=runtime owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="crun" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Oct  9 05:29:16 np0005478303 python3.9[58504]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=network_backend owner=root path=/etc/containers/containers.conf section=network setype=etc_t value="netavark" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Oct  9 05:29:17 np0005478303 python3.9[58657]: ansible-ansible.legacy.dnf Invoked with name=['openssh-server'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct  9 05:29:18 np0005478303 python3.9[58810]: ansible-setup Invoked with gather_subset=['!all', '!min', 'distribution', 'distribution_major_version', 'distribution_version', 'os_family'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  9 05:29:19 np0005478303 python3.9[58964]: ansible-stat Invoked with path=/run/ostree-booted follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  9 05:29:19 np0005478303 python3.9[59116]: ansible-stat Invoked with path=/sbin/transactional-update follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  9 05:29:20 np0005478303 python3.9[59268]: ansible-service_facts Invoked
Oct  9 05:29:20 np0005478303 network[59285]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Oct  9 05:29:20 np0005478303 network[59286]: 'network-scripts' will be removed from distribution in near future.
Oct  9 05:29:20 np0005478303 network[59287]: It is advised to switch to 'NetworkManager' instead for network management.
Oct  9 05:29:23 np0005478303 python3.9[59741]: ansible-ansible.legacy.dnf Invoked with name=['chrony'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct  9 05:29:25 np0005478303 python3.9[59894]: ansible-package_facts Invoked with manager=['auto'] strategy=first
Oct  9 05:29:26 np0005478303 python3.9[60046]: ansible-ansible.legacy.stat Invoked with path=/etc/chrony.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  9 05:29:27 np0005478303 python3.9[60171]: ansible-ansible.legacy.copy Invoked with backup=True dest=/etc/chrony.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1760002166.547668-639-95375382896598/.source.conf follow=False _original_basename=chrony.conf.j2 checksum=cfb003e56d02d0d2c65555452eb1a05073fecdad force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 05:29:28 np0005478303 python3.9[60325]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/chronyd follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  9 05:29:28 np0005478303 python3.9[60450]: ansible-ansible.legacy.copy Invoked with backup=True dest=/etc/sysconfig/chronyd mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1760002168.0733242-686-161176752536796/.source follow=False _original_basename=chronyd.sysconfig.j2 checksum=dd196b1ff1f915b23eebc37ec77405b5dd3df76c force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 05:29:30 np0005478303 python3.9[60604]: ansible-lineinfile Invoked with backup=True create=True dest=/etc/sysconfig/network line=PEERNTP=no mode=0644 regexp=^PEERNTP= state=present path=/etc/sysconfig/network backrefs=False firstmatch=False unsafe_writes=False search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 05:29:31 np0005478303 python3.9[60758]: ansible-ansible.legacy.setup Invoked with gather_subset=['!all'] filter=['ansible_service_mgr'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct  9 05:29:32 np0005478303 python3.9[60842]: ansible-ansible.legacy.systemd Invoked with enabled=True name=chronyd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  9 05:29:33 np0005478303 python3.9[60996]: ansible-ansible.legacy.setup Invoked with gather_subset=['!all'] filter=['ansible_service_mgr'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct  9 05:29:34 np0005478303 python3.9[61080]: ansible-ansible.legacy.systemd Invoked with name=chronyd state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct  9 05:29:34 np0005478303 chronyd[753]: chronyd exiting
Oct  9 05:29:34 np0005478303 systemd[1]: Stopping NTP client/server...
Oct  9 05:29:34 np0005478303 systemd[1]: chronyd.service: Deactivated successfully.
Oct  9 05:29:34 np0005478303 systemd[1]: Stopped NTP client/server.
Oct  9 05:29:34 np0005478303 systemd[1]: Starting NTP client/server...
Oct  9 05:29:34 np0005478303 chronyd[61088]: chronyd version 4.6.1 starting (+CMDMON +NTP +REFCLOCK +RTC +PRIVDROP +SCFILTER +SIGND +ASYNCDNS +NTS +SECHASH +IPV6 +DEBUG)
Oct  9 05:29:34 np0005478303 chronyd[61088]: Frequency -10.193 +/- 3.849 ppm read from /var/lib/chrony/drift
Oct  9 05:29:34 np0005478303 chronyd[61088]: Loaded seccomp filter (level 2)
Oct  9 05:29:34 np0005478303 systemd[1]: Started NTP client/server.
Oct  9 05:29:34 np0005478303 systemd[1]: session-12.scope: Deactivated successfully.
Oct  9 05:29:34 np0005478303 systemd[1]: session-12.scope: Consumed 17.484s CPU time.
Oct  9 05:29:34 np0005478303 systemd-logind[745]: Session 12 logged out. Waiting for processes to exit.
Oct  9 05:29:34 np0005478303 systemd-logind[745]: Removed session 12.
Oct  9 05:29:39 np0005478303 systemd-logind[745]: New session 13 of user zuul.
Oct  9 05:29:39 np0005478303 systemd[1]: Started Session 13 of User zuul.
Oct  9 05:29:40 np0005478303 python3.9[61269]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 05:29:40 np0005478303 python3.9[61421]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/ceph-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  9 05:29:41 np0005478303 python3.9[61544]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/ceph-networks.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1760002180.342097-63-111565372342612/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=729ea8396013e3343245d6e934e0dcef55029ad2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 05:29:41 np0005478303 systemd[1]: session-13.scope: Deactivated successfully.
Oct  9 05:29:41 np0005478303 systemd[1]: session-13.scope: Consumed 1.080s CPU time.
Oct  9 05:29:41 np0005478303 systemd-logind[745]: Session 13 logged out. Waiting for processes to exit.
Oct  9 05:29:41 np0005478303 systemd-logind[745]: Removed session 13.
Oct  9 05:29:47 np0005478303 systemd-logind[745]: New session 14 of user zuul.
Oct  9 05:29:47 np0005478303 systemd[1]: Started Session 14 of User zuul.
Oct  9 05:29:48 np0005478303 python3.9[61722]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  9 05:29:48 np0005478303 python3.9[61878]: ansible-ansible.builtin.file Invoked with group=zuul mode=0770 owner=zuul path=/root/.config/containers recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 05:29:49 np0005478303 python3.9[62053]: ansible-ansible.legacy.stat Invoked with path=/root/.config/containers/auth.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  9 05:29:50 np0005478303 python3.9[62176]: ansible-ansible.legacy.copy Invoked with dest=/root/.config/containers/auth.json group=zuul mode=0660 owner=zuul src=/home/zuul/.ansible/tmp/ansible-tmp-1760002189.0929294-84-104262897768053/.source.json _original_basename=.w56loh7s follow=False checksum=bf21a9e8fbc5a3846fb05b4fa0859e0917b2202f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 05:29:51 np0005478303 python3.9[62328]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  9 05:29:51 np0005478303 python3.9[62451]: ansible-ansible.legacy.copy Invoked with dest=/etc/sysconfig/podman_drop_in mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1760002190.898138-153-187578948204186/.source _original_basename=.st0mwxw_ follow=False checksum=125299ce8dea7711a76292961206447f0043248b backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 05:29:52 np0005478303 python3.9[62603]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct  9 05:29:52 np0005478303 python3.9[62755]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  9 05:29:53 np0005478303 python3.9[62878]: ansible-ansible.legacy.copy Invoked with dest=/var/local/libexec/edpm-container-shutdown group=root mode=0700 owner=root setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1760002192.3771021-225-201653524748568/.source _original_basename=edpm-container-shutdown follow=False checksum=632c3792eb3dce4288b33ae7b265b71950d69f13 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Oct  9 05:29:53 np0005478303 python3.9[63030]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  9 05:29:54 np0005478303 python3.9[63153]: ansible-ansible.legacy.copy Invoked with dest=/var/local/libexec/edpm-start-podman-container group=root mode=0700 owner=root setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1760002193.2915664-225-277012730560476/.source _original_basename=edpm-start-podman-container follow=False checksum=b963c569d75a655c0ccae95d9bb4a2a9a4df27d1 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Oct  9 05:29:54 np0005478303 python3.9[63305]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 05:29:55 np0005478303 python3.9[63457]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  9 05:29:55 np0005478303 python3.9[63580]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm-container-shutdown.service group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760002194.8079288-336-101907919345395/.source.service _original_basename=edpm-container-shutdown-service follow=False checksum=6336835cb0f888670cc99de31e19c8c071444d33 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 05:29:56 np0005478303 python3.9[63732]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  9 05:29:56 np0005478303 python3.9[63855]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760002195.6984708-381-162993624278149/.source.preset _original_basename=91-edpm-container-shutdown-preset follow=False checksum=b275e4375287528cb63464dd32f622c4f142a915 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 05:29:57 np0005478303 python3.9[64007]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  9 05:29:57 np0005478303 systemd[1]: Reloading.
Oct  9 05:29:57 np0005478303 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  9 05:29:57 np0005478303 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  9 05:29:57 np0005478303 systemd[1]: Reloading.
Oct  9 05:29:57 np0005478303 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  9 05:29:57 np0005478303 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  9 05:29:57 np0005478303 systemd[1]: Starting EDPM Container Shutdown...
Oct  9 05:29:57 np0005478303 systemd[1]: Finished EDPM Container Shutdown.
Oct  9 05:29:58 np0005478303 python3.9[64234]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  9 05:29:58 np0005478303 python3.9[64357]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/netns-placeholder.service group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760002197.8499703-450-17314440282685/.source.service _original_basename=netns-placeholder-service follow=False checksum=b61b1b5918c20c877b8b226fbf34ff89a082d972 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 05:29:59 np0005478303 python3.9[64509]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  9 05:29:59 np0005478303 python3.9[64632]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system-preset/91-netns-placeholder.preset group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760002198.791597-495-222752249286562/.source.preset _original_basename=91-netns-placeholder-preset follow=False checksum=28b7b9aa893525d134a1eeda8a0a48fb25b736b9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 05:30:00 np0005478303 python3.9[64784]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  9 05:30:00 np0005478303 systemd[1]: Reloading.
Oct  9 05:30:00 np0005478303 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  9 05:30:00 np0005478303 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  9 05:30:00 np0005478303 systemd[1]: Reloading.
Oct  9 05:30:00 np0005478303 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  9 05:30:00 np0005478303 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  9 05:30:00 np0005478303 systemd[1]: Starting Create netns directory...
Oct  9 05:30:00 np0005478303 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Oct  9 05:30:00 np0005478303 systemd[1]: netns-placeholder.service: Deactivated successfully.
Oct  9 05:30:00 np0005478303 systemd[1]: Finished Create netns directory.
Oct  9 05:30:01 np0005478303 python3.9[65010]: ansible-ansible.builtin.service_facts Invoked
Oct  9 05:30:01 np0005478303 network[65027]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Oct  9 05:30:01 np0005478303 network[65028]: 'network-scripts' will be removed from distribution in near future.
Oct  9 05:30:01 np0005478303 network[65029]: It is advised to switch to 'NetworkManager' instead for network management.
Oct  9 05:30:03 np0005478303 python3.9[65293]: ansible-ansible.builtin.systemd Invoked with enabled=False name=iptables.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  9 05:30:04 np0005478303 systemd[1]: Reloading.
Oct  9 05:30:04 np0005478303 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  9 05:30:04 np0005478303 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  9 05:30:04 np0005478303 systemd[1]: Stopping IPv4 firewall with iptables...
Oct  9 05:30:04 np0005478303 iptables.init[65332]: iptables: Setting chains to policy ACCEPT: raw mangle filter nat [  OK  ]
Oct  9 05:30:04 np0005478303 iptables.init[65332]: iptables: Flushing firewall rules: [  OK  ]
Oct  9 05:30:04 np0005478303 systemd[1]: iptables.service: Deactivated successfully.
Oct  9 05:30:04 np0005478303 systemd[1]: Stopped IPv4 firewall with iptables.
Oct  9 05:30:05 np0005478303 python3.9[65528]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ip6tables.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  9 05:30:05 np0005478303 python3.9[65682]: ansible-ansible.builtin.systemd Invoked with enabled=True name=nftables state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  9 05:30:05 np0005478303 systemd[1]: Reloading.
Oct  9 05:30:05 np0005478303 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  9 05:30:05 np0005478303 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  9 05:30:05 np0005478303 systemd[1]: Starting Netfilter Tables...
Oct  9 05:30:05 np0005478303 systemd[1]: Finished Netfilter Tables.
Oct  9 05:30:06 np0005478303 python3.9[65873]: ansible-ansible.legacy.command Invoked with _raw_params=nft flush ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  9 05:30:07 np0005478303 python3.9[66026]: ansible-ansible.legacy.stat Invoked with path=/etc/ssh/sshd_config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  9 05:30:07 np0005478303 python3.9[66151]: ansible-ansible.legacy.copy Invoked with dest=/etc/ssh/sshd_config mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1760002207.103515-702-6978399656711/.source validate=/usr/sbin/sshd -T -f %s follow=False _original_basename=sshd_config_block.j2 checksum=4729b6ffc5b555fa142bf0b6e6dc15609cb89a22 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 05:30:08 np0005478303 python3.9[66302]: ansible-ansible.builtin.systemd Invoked with name=sshd state=reloaded daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct  9 05:30:33 np0005478303 systemd[1]: session-14.scope: Deactivated successfully.
Oct  9 05:30:33 np0005478303 systemd[1]: session-14.scope: Consumed 13.867s CPU time.
Oct  9 05:30:33 np0005478303 systemd-logind[745]: Session 14 logged out. Waiting for processes to exit.
Oct  9 05:30:33 np0005478303 systemd-logind[745]: Removed session 14.
Oct  9 05:30:45 np0005478303 systemd-logind[745]: New session 15 of user zuul.
Oct  9 05:30:45 np0005478303 systemd[1]: Started Session 15 of User zuul.
Oct  9 05:30:46 np0005478303 python3.9[66495]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  9 05:30:47 np0005478303 python3.9[66651]: ansible-ansible.builtin.file Invoked with group=zuul mode=0770 owner=zuul path=/root/.config/containers recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 05:30:47 np0005478303 python3.9[66826]: ansible-ansible.legacy.stat Invoked with path=/root/.config/containers/auth.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  9 05:30:48 np0005478303 python3.9[66904]: ansible-ansible.legacy.file Invoked with group=zuul mode=0660 owner=zuul dest=/root/.config/containers/auth.json _original_basename=.6zi3wpmj recurse=False state=file path=/root/.config/containers/auth.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 05:30:49 np0005478303 python3.9[67056]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  9 05:30:49 np0005478303 python3.9[67134]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/etc/sysconfig/podman_drop_in _original_basename=.720ev60v recurse=False state=file path=/etc/sysconfig/podman_drop_in force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 05:30:50 np0005478303 python3.9[67286]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct  9 05:30:50 np0005478303 python3.9[67438]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  9 05:30:50 np0005478303 python3.9[67516]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  9 05:30:51 np0005478303 python3.9[67668]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  9 05:30:51 np0005478303 python3.9[67746]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  9 05:30:52 np0005478303 python3.9[67898]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 05:30:52 np0005478303 python3.9[68050]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  9 05:30:53 np0005478303 python3.9[68128]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 05:30:53 np0005478303 python3.9[68280]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  9 05:30:54 np0005478303 python3.9[68358]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 05:30:55 np0005478303 python3.9[68510]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  9 05:30:55 np0005478303 systemd[1]: Reloading.
Oct  9 05:30:55 np0005478303 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  9 05:30:55 np0005478303 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  9 05:30:55 np0005478303 python3.9[68699]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  9 05:30:56 np0005478303 python3.9[68777]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 05:30:56 np0005478303 python3.9[68929]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  9 05:30:57 np0005478303 python3.9[69007]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 05:30:57 np0005478303 python3.9[69159]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  9 05:30:57 np0005478303 systemd[1]: Reloading.
Oct  9 05:30:57 np0005478303 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  9 05:30:57 np0005478303 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  9 05:30:57 np0005478303 systemd[1]: Starting Create netns directory...
Oct  9 05:30:57 np0005478303 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Oct  9 05:30:57 np0005478303 systemd[1]: netns-placeholder.service: Deactivated successfully.
Oct  9 05:30:57 np0005478303 systemd[1]: Finished Create netns directory.
Oct  9 05:30:58 np0005478303 python3.9[69350]: ansible-ansible.builtin.service_facts Invoked
Oct  9 05:30:58 np0005478303 network[69367]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Oct  9 05:30:58 np0005478303 network[69368]: 'network-scripts' will be removed from distribution in near future.
Oct  9 05:30:58 np0005478303 network[69369]: It is advised to switch to 'NetworkManager' instead for network management.
Oct  9 05:31:01 np0005478303 python3.9[69632]: ansible-ansible.legacy.stat Invoked with path=/etc/ssh/sshd_config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  9 05:31:01 np0005478303 python3.9[69710]: ansible-ansible.legacy.file Invoked with mode=0600 dest=/etc/ssh/sshd_config _original_basename=sshd_config_block.j2 recurse=False state=file path=/etc/ssh/sshd_config force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 05:31:02 np0005478303 python3.9[69862]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 05:31:02 np0005478303 python3.9[70014]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/sshd-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  9 05:31:03 np0005478303 python3.9[70137]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/sshd-networks.yaml group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760002262.4923306-609-162281521054372/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=0bfc8440fd8f39002ab90252479fb794f51b5ae8 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 05:31:04 np0005478303 python3.9[70289]: ansible-community.general.timezone Invoked with name=UTC hwclock=None
Oct  9 05:31:04 np0005478303 systemd[1]: Starting Time & Date Service...
Oct  9 05:31:04 np0005478303 systemd[1]: Started Time & Date Service.
Oct  9 05:31:05 np0005478303 python3.9[70445]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 05:31:05 np0005478303 python3.9[70597]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  9 05:31:05 np0005478303 python3.9[70720]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1760002265.232838-714-114052653268484/.source.yaml follow=False _original_basename=base-rules.yaml.j2 checksum=450456afcafded6d4bdecceec7a02e806eebd8b3 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 05:31:06 np0005478303 python3.9[70872]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  9 05:31:06 np0005478303 python3.9[70995]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1760002266.1356766-759-29435955994270/.source.yaml _original_basename=.g97c3yz6 follow=False checksum=97d170e1550eee4afc0af065b78cda302a97674c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 05:31:07 np0005478303 python3.9[71147]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  9 05:31:07 np0005478303 python3.9[71270]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/iptables.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760002267.110701-804-108190914921512/.source.nft _original_basename=iptables.nft follow=False checksum=3e02df08f1f3ab4a513e94056dbd390e3d38fe30 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 05:31:08 np0005478303 python3.9[71422]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/iptables.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  9 05:31:09 np0005478303 python3.9[71575]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  9 05:31:09 np0005478303 python3[71728]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Oct  9 05:31:10 np0005478303 python3.9[71880]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  9 05:31:10 np0005478303 python3.9[72003]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760002269.9569027-921-67068410058025/.source.nft follow=False _original_basename=jump-chain.j2 checksum=4c6f036d2d5808f109acc0880c19aa74ca48c961 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 05:31:11 np0005478303 python3.9[72155]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  9 05:31:11 np0005478303 python3.9[72278]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-update-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760002270.8870041-966-234658976653885/.source.nft follow=False _original_basename=jump-chain.j2 checksum=4c6f036d2d5808f109acc0880c19aa74ca48c961 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 05:31:12 np0005478303 python3.9[72430]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  9 05:31:12 np0005478303 python3.9[72553]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-flushes.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760002271.77094-1011-21612058230406/.source.nft follow=False _original_basename=flush-chain.j2 checksum=d16337256a56373421842284fe09e4e6c7df417e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 05:31:13 np0005478303 python3.9[72705]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  9 05:31:13 np0005478303 python3.9[72828]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-chains.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760002272.6532707-1056-242403021513266/.source.nft follow=False _original_basename=chains.j2 checksum=2079f3b60590a165d1d502e763170876fc8e2984 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 05:31:13 np0005478303 python3.9[72980]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  9 05:31:14 np0005478303 python3.9[73103]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760002273.5357916-1101-271533004259617/.source.nft follow=False _original_basename=ruleset.j2 checksum=693377dc03e5b6b24713cb537b18b88774724e35 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 05:31:14 np0005478303 python3.9[73255]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 05:31:15 np0005478303 python3.9[73407]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  9 05:31:16 np0005478303 python3.9[73566]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"#012include "/etc/nftables/edpm-chains.nft"#012include "/etc/nftables/edpm-rules.nft"#012include "/etc/nftables/edpm-jumps.nft"#012 path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 05:31:16 np0005478303 python3.9[73719]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages1G state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 05:31:17 np0005478303 python3.9[73871]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages2M state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 05:31:17 np0005478303 python3.9[74023]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=1G path=/dev/hugepages1G src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Oct  9 05:31:18 np0005478303 python3.9[74176]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=2M path=/dev/hugepages2M src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Oct  9 05:31:18 np0005478303 systemd[1]: session-15.scope: Deactivated successfully.
Oct  9 05:31:18 np0005478303 systemd[1]: session-15.scope: Consumed 21.268s CPU time.
Oct  9 05:31:18 np0005478303 systemd-logind[745]: Session 15 logged out. Waiting for processes to exit.
Oct  9 05:31:18 np0005478303 systemd-logind[745]: Removed session 15.
Oct  9 05:31:23 np0005478303 systemd-logind[745]: New session 16 of user zuul.
Oct  9 05:31:23 np0005478303 systemd[1]: Started Session 16 of User zuul.
Oct  9 05:31:23 np0005478303 python3.9[74357]: ansible-ansible.builtin.tempfile Invoked with state=file prefix=ansible. suffix= path=None
Oct  9 05:31:24 np0005478303 python3.9[74509]: ansible-ansible.builtin.stat Invoked with path=/etc/ssh/ssh_known_hosts follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  9 05:31:25 np0005478303 python3.9[74661]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'ssh_host_key_rsa_public', 'ssh_host_key_ed25519_public', 'ssh_host_key_ecdsa_public'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  9 05:31:25 np0005478303 python3.9[74813]: ansible-ansible.builtin.blockinfile Invoked with block=compute-2.ctlplane.example.com,192.168.122.102,compute-2* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDKE7qnQSdbsdsOaGWRokEAHfuZHqF4BkfkIlbsIxi6+FzXfmziMPrsg1PoVUBFOzaP55y6aRtUEaXoCsB+KxPGXhHnh3IdEYTUa5EvJs6/mUlEqIwltt8CLNKUrDV6N38V1v5gaRPIAI5iTwtbap14q+0iDF8MVi8MPKlkqoL/+Z49sJ4HqR31EZpD4cWKso/dkKZQSuVQg+TgJ3bnUKIRYPDS7fjVuZpr0KMyU+v4wjBKXvles8lctvRXdfpY2/33XtBG2af+p/+5mg47b5ylWC3wISLO590WzC4X2T0Pv1a6I9O/Dt3V8xyTfzbqi4ia9/kwNBJg1GGqNBssdedHK3AZDOTSd9U+/C1R9oBDXZ7nSo3hIzMQvrm5DXkthix56gd3x9MrMMzc+wTlFtlm2XwpMg7PtdxMZK++rIfPVxzKXBBQsdDd0W3cbam616N/XERaDJKIUqnPe5sE1qhpaFt8aNtwg+buZpYK5ubLbuJZpASgSC6dIuDsEIk6Af8=#012compute-2.ctlplane.example.com,192.168.122.102,compute-2* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIEtxusJG2g5S2RnWLxtcDjdiTuv+VWibld9MVjIgPUzn#012compute-2.ctlplane.example.com,192.168.122.102,compute-2* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBG1pQwHgci56FauRELJKl6O8ntBVH1APLVaVNPCodlG/V+A+h79tYrSqi3QKycc18niRc7Eiq8wWQ8VbX+OhkmY=#012compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDEdAe+aHzafP9dhAtdIAtOm2sC12803SCpA/3rl1ydGqAiReivZh0j/TO2wBzoqsan7nzM7eG4TWSpqK+0ZBgBjrUjB9Cj1eCLSLOLFpIUpLcs70zpiXFEg4VCxifit+r7hVmAjbLpb7lUOEBeuKAC+NijlzOD2XrC+yd3AhBkIuX/kEOqNS457QburXRcER973lXO7bXpB0owCrgGAzOsy1i7FT6Zz4mSB7l2Iy2drh0BXBPs+laJ9chzaIYm3t6/xdGegDzZd9R0R/aKxaO2CGff8by/bJ8Ga/DZNziOBiuIImaU3kBJc76SWraZeoiOMwDTosKuZfFadJWywRHIP1xUSkKdLGnB0MzpGtOhcIWX642g/WIM4+Y078U5nwtvOcNHpA/uT9uRc7nBCEzPpJVHtyVbh0kQ9x86pCj83Ph6ZZ1RPGolhJ6oztdGyl5QMj/rkG45+H83p9c18d5vzsZzrcKaYtBEg3BJ80PfCqFw5Al9hHq/55Yd0D5PiK8=#012compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIN+sxaZ1V99vc+E5ar8KEv4Hqy68kJM/buHn1/XxovLr#012compute-0.ctlplane.example.com,192.168.122.100,compute-0* ecdsa-sha2-nistp256 
AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBDc5CVbyus+PfQGnwFQkfkACIJgIJPRc/fJ1ooz9D/2T/S79sUKftWyZ1JOurJ8lQdLc+LgRGezTzhfuY3R3F6E=#012compute-1.ctlplane.example.com,192.168.122.101,compute-1* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCow+01n6Hl7e4y/xRpTIYbwm1BUam3jmz5ScpeEvosFn7TfszdHV/Do5gTioKon9F6x7Kn2fhkWobIt7rTveNaK0lE2p35tJDQJQ5zYJD3N4aWHdvfaigYEXYaH3OOpmqEhRw/IyxGzW1MS8OfGUNyziUYt99LLYhcEkDneuZnPOI2444OzzU0pYxCtaVSevz9aDR2yi9BWKNIP8iMTNqu9UpE9IaOANEDrZu7gbGMBTDiR1lYzo1peJrtAa/cpTF9DoFnddTbpOMLjd6HaRrnifcc9fP1YtxWn8T1ldTjecUUCp2yo6ycdOUdBiJG9yWw1gI7SXYjeHJbX/1QS6HWd5DWxJFbSf0zP5d5BWyDf5+TFu1/gImUA0HT8WOYb4tm1QH1NAThcRLvtUFg32CcbqOnUyAxW0wDeGoLCW7EERN9OKr11fwlYjdyW/TbqYWRn0J2WhZa4OoZ/C4m9ug6PP7SEo9wXLqN9t4eArVkbeTemzPigVRqNrD2eywEU4k=#012compute-1.ctlplane.example.com,192.168.122.101,compute-1* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIFCkglmiqZQwqqMItgWA6O04td1K/U4vAgm36NE9rj3U#012compute-1.ctlplane.example.com,192.168.122.101,compute-1* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBLD7v/1C4ThvDcQi8c4DTsjkszkaGHBX0ZNWy5MwKVH3Qt7bVSlXkD8SB3/nhOUlBIzdAK/JQpzVyqfy+61YZMk=#012 create=True mode=0644 path=/tmp/ansible.dg676_5s state=present marker=# {mark} ANSIBLE MANAGED BLOCK backup=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False unsafe_writes=False insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 05:31:26 np0005478303 python3.9[74965]: ansible-ansible.legacy.command Invoked with _raw_params=cat '/tmp/ansible.dg676_5s' > /etc/ssh/ssh_known_hosts _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  9 05:31:27 np0005478303 python3.9[75119]: ansible-ansible.builtin.file Invoked with path=/tmp/ansible.dg676_5s state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 05:31:27 np0005478303 systemd[1]: session-16.scope: Deactivated successfully.
Oct  9 05:31:27 np0005478303 systemd[1]: session-16.scope: Consumed 2.224s CPU time.
Oct  9 05:31:27 np0005478303 systemd-logind[745]: Session 16 logged out. Waiting for processes to exit.
Oct  9 05:31:27 np0005478303 systemd-logind[745]: Removed session 16.
Oct  9 05:31:32 np0005478303 systemd-logind[745]: New session 17 of user zuul.
Oct  9 05:31:32 np0005478303 systemd[1]: Started Session 17 of User zuul.
Oct  9 05:31:33 np0005478303 python3.9[75297]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  9 05:31:34 np0005478303 python3.9[75453]: ansible-ansible.builtin.systemd Invoked with enabled=True name=sshd daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None masked=None
Oct  9 05:31:34 np0005478303 systemd[1]: systemd-timedated.service: Deactivated successfully.
Oct  9 05:31:34 np0005478303 python3.9[75607]: ansible-ansible.builtin.systemd Invoked with name=sshd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct  9 05:31:35 np0005478303 python3.9[75763]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  9 05:31:35 np0005478303 python3.9[75916]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  9 05:31:36 np0005478303 python3.9[76070]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  9 05:31:37 np0005478303 python3.9[76226]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 05:31:37 np0005478303 systemd[1]: session-17.scope: Deactivated successfully.
Oct  9 05:31:37 np0005478303 systemd[1]: session-17.scope: Consumed 3.138s CPU time.
Oct  9 05:31:37 np0005478303 systemd-logind[745]: Session 17 logged out. Waiting for processes to exit.
Oct  9 05:31:37 np0005478303 systemd-logind[745]: Removed session 17.
Oct  9 05:31:42 np0005478303 systemd-logind[745]: New session 18 of user zuul.
Oct  9 05:31:42 np0005478303 systemd[1]: Started Session 18 of User zuul.
Oct  9 05:31:43 np0005478303 python3.9[76404]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  9 05:31:43 np0005478303 chronyd[61088]: Selected source 65.182.224.60 (pool.ntp.org)
Oct  9 05:31:44 np0005478303 python3.9[76560]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct  9 05:31:44 np0005478303 python3.9[76644]: ansible-ansible.legacy.dnf Invoked with name=['yum-utils'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Oct  9 05:31:46 np0005478303 python3.9[76795]: ansible-ansible.legacy.command Invoked with _raw_params=needs-restarting -r _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  9 05:31:47 np0005478303 python3.9[76948]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/openstack/reboot_required/ state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 05:31:47 np0005478303 python3.9[77100]: ansible-ansible.builtin.file Invoked with mode=0600 path=/var/lib/openstack/reboot_required/needs_restarting state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 05:31:48 np0005478303 python3.9[77252]: ansible-ansible.builtin.lineinfile Invoked with dest=/var/lib/openstack/reboot_required/needs_restarting line=Core libraries or services have been updated since boot-up:#012  * systemd#012#012Reboot is required to fully utilize these updates.#012More information: https://access.redhat.com/solutions/27943 path=/var/lib/openstack/reboot_required/needs_restarting state=present backrefs=False create=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 05:31:48 np0005478303 python3.9[77402]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/reboot_required/'] patterns=[] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Oct  9 05:31:49 np0005478303 python3.9[77552]: ansible-ansible.builtin.stat Invoked with path=/var/lib/config-data/puppet-generated follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  9 05:31:49 np0005478303 python3.9[77702]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/config follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  9 05:31:50 np0005478303 python3.9[77854]: ansible-ansible.legacy.setup Invoked with gather_subset=['min'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  9 05:31:50 np0005478303 python3.9[77967]: ansible-ansible.legacy.find Invoked with paths=['/sbin', '/bin', '/usr/sbin', '/usr/bin', '/usr/local/sbin'] patterns=['shutdown'] file_type=any read_whole_file=False age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Oct  9 09:31:57 compute-1 kernel: The list of certified hardware and cloud instances for Red Hat Enterprise Linux 9 can be viewed at the Red Hat Ecosystem Catalog, https://catalog.redhat.com.
Oct  9 09:31:57 compute-1 kernel: Command line: BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-620.el9.x86_64 root=UUID=1631a6ad-43b8-436d-ae76-16fa14b94458 ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Oct  9 09:31:57 compute-1 kernel: BIOS-provided physical RAM map:
Oct  9 09:31:57 compute-1 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Oct  9 09:31:57 compute-1 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Oct  9 09:31:57 compute-1 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Oct  9 09:31:57 compute-1 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdafff] usable
Oct  9 09:31:57 compute-1 kernel: BIOS-e820: [mem 0x000000007ffdb000-0x000000007fffffff] reserved
Oct  9 09:31:57 compute-1 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Oct  9 09:31:57 compute-1 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Oct  9 09:31:57 compute-1 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Oct  9 09:31:57 compute-1 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Oct  9 09:31:57 compute-1 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000027fffffff] usable
Oct  9 09:31:57 compute-1 kernel: NX (Execute Disable) protection: active
Oct  9 09:31:57 compute-1 kernel: APIC: Static calls initialized
Oct  9 09:31:57 compute-1 kernel: SMBIOS 2.8 present.
Oct  9 09:31:57 compute-1 kernel: DMI: Red Hat OpenStack Compute/RHEL, BIOS 1.16.1-1.el9 04/01/2014
Oct  9 09:31:57 compute-1 kernel: Hypervisor detected: KVM
Oct  9 09:31:57 compute-1 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Oct  9 09:31:57 compute-1 kernel: kvm-clock: using sched offset of 1898363775067 cycles
Oct  9 09:31:57 compute-1 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Oct  9 09:31:57 compute-1 kernel: tsc: Detected 2445.406 MHz processor
Oct  9 09:31:57 compute-1 kernel: last_pfn = 0x280000 max_arch_pfn = 0x400000000
Oct  9 09:31:57 compute-1 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Oct  9 09:31:57 compute-1 kernel: x86/PAT: Configuration [0-7]: WB  WC  UC- UC  WB  WP  UC- WT  
Oct  9 09:31:57 compute-1 kernel: last_pfn = 0x7ffdb max_arch_pfn = 0x400000000
Oct  9 09:31:57 compute-1 kernel: found SMP MP-table at [mem 0x000f5b60-0x000f5b6f]
Oct  9 09:31:57 compute-1 kernel: Using GB pages for direct mapping
Oct  9 09:31:57 compute-1 kernel: RAMDISK: [mem 0x2d7c4000-0x32bd9fff]
Oct  9 09:31:57 compute-1 kernel: ACPI: Early table checksum verification disabled
Oct  9 09:31:57 compute-1 kernel: ACPI: RSDP 0x00000000000F5B20 000014 (v00 BOCHS )
Oct  9 09:31:57 compute-1 kernel: ACPI: RSDT 0x000000007FFE35EB 000034 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Oct  9 09:31:57 compute-1 kernel: ACPI: FACP 0x000000007FFE3403 0000F4 (v03 BOCHS  BXPC     00000001 BXPC 00000001)
Oct  9 09:31:57 compute-1 kernel: ACPI: DSDT 0x000000007FFDFCC0 003743 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Oct  9 09:31:57 compute-1 kernel: ACPI: FACS 0x000000007FFDFC80 000040
Oct  9 09:31:57 compute-1 kernel: ACPI: APIC 0x000000007FFE34F7 000090 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Oct  9 09:31:57 compute-1 kernel: ACPI: MCFG 0x000000007FFE3587 00003C (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Oct  9 09:31:57 compute-1 kernel: ACPI: WAET 0x000000007FFE35C3 000028 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Oct  9 09:31:57 compute-1 kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe3403-0x7ffe34f6]
Oct  9 09:31:57 compute-1 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffdfcc0-0x7ffe3402]
Oct  9 09:31:57 compute-1 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffdfc80-0x7ffdfcbf]
Oct  9 09:31:57 compute-1 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe34f7-0x7ffe3586]
Oct  9 09:31:57 compute-1 kernel: ACPI: Reserving MCFG table memory at [mem 0x7ffe3587-0x7ffe35c2]
Oct  9 09:31:57 compute-1 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe35c3-0x7ffe35ea]
Oct  9 09:31:57 compute-1 kernel: No NUMA configuration found
Oct  9 09:31:57 compute-1 kernel: Faking a node at [mem 0x0000000000000000-0x000000027fffffff]
Oct  9 09:31:57 compute-1 kernel: NODE_DATA(0) allocated [mem 0x27ffd5000-0x27fffffff]
Oct  9 09:31:57 compute-1 kernel: crashkernel reserved: 0x000000006f000000 - 0x000000007f000000 (256 MB)
Oct  9 09:31:57 compute-1 kernel: Zone ranges:
Oct  9 09:31:57 compute-1 kernel:  DMA      [mem 0x0000000000001000-0x0000000000ffffff]
Oct  9 09:31:57 compute-1 kernel:  DMA32    [mem 0x0000000001000000-0x00000000ffffffff]
Oct  9 09:31:57 compute-1 kernel:  Normal   [mem 0x0000000100000000-0x000000027fffffff]
Oct  9 09:31:57 compute-1 kernel:  Device   empty
Oct  9 09:31:57 compute-1 kernel: Movable zone start for each node
Oct  9 09:31:57 compute-1 kernel: Early memory node ranges
Oct  9 09:31:57 compute-1 kernel:  node   0: [mem 0x0000000000001000-0x000000000009efff]
Oct  9 09:31:57 compute-1 kernel:  node   0: [mem 0x0000000000100000-0x000000007ffdafff]
Oct  9 09:31:57 compute-1 kernel:  node   0: [mem 0x0000000100000000-0x000000027fffffff]
Oct  9 09:31:57 compute-1 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000027fffffff]
Oct  9 09:31:57 compute-1 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Oct  9 09:31:57 compute-1 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Oct  9 09:31:57 compute-1 kernel: On node 0, zone Normal: 37 pages in unavailable ranges
Oct  9 09:31:57 compute-1 kernel: ACPI: PM-Timer IO Port: 0x608
Oct  9 09:31:57 compute-1 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Oct  9 09:31:57 compute-1 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Oct  9 09:31:57 compute-1 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Oct  9 09:31:57 compute-1 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Oct  9 09:31:57 compute-1 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Oct  9 09:31:57 compute-1 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Oct  9 09:31:57 compute-1 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Oct  9 09:31:57 compute-1 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Oct  9 09:31:57 compute-1 kernel: TSC deadline timer available
Oct  9 09:31:57 compute-1 kernel: CPU topo: Max. logical packages:   4
Oct  9 09:31:57 compute-1 kernel: CPU topo: Max. logical dies:       4
Oct  9 09:31:57 compute-1 kernel: CPU topo: Max. dies per package:   1
Oct  9 09:31:57 compute-1 kernel: CPU topo: Max. threads per core:   1
Oct  9 09:31:57 compute-1 kernel: CPU topo: Num. cores per package:     1
Oct  9 09:31:57 compute-1 kernel: CPU topo: Num. threads per package:   1
Oct  9 09:31:57 compute-1 kernel: CPU topo: Allowing 4 present CPUs plus 0 hotplug CPUs
Oct  9 09:31:57 compute-1 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Oct  9 09:31:57 compute-1 kernel: kvm-guest: KVM setup pv remote TLB flush
Oct  9 09:31:57 compute-1 kernel: kvm-guest: setup PV sched yield
Oct  9 09:31:57 compute-1 kernel: PM: hibernation: Registered nosave memory: [mem 0x00000000-0x00000fff]
Oct  9 09:31:57 compute-1 kernel: PM: hibernation: Registered nosave memory: [mem 0x0009f000-0x0009ffff]
Oct  9 09:31:57 compute-1 kernel: PM: hibernation: Registered nosave memory: [mem 0x000a0000-0x000effff]
Oct  9 09:31:57 compute-1 kernel: PM: hibernation: Registered nosave memory: [mem 0x000f0000-0x000fffff]
Oct  9 09:31:57 compute-1 kernel: PM: hibernation: Registered nosave memory: [mem 0x7ffdb000-0x7fffffff]
Oct  9 09:31:57 compute-1 kernel: PM: hibernation: Registered nosave memory: [mem 0x80000000-0xafffffff]
Oct  9 09:31:57 compute-1 kernel: PM: hibernation: Registered nosave memory: [mem 0xb0000000-0xbfffffff]
Oct  9 09:31:57 compute-1 kernel: PM: hibernation: Registered nosave memory: [mem 0xc0000000-0xfed1bfff]
Oct  9 09:31:57 compute-1 kernel: PM: hibernation: Registered nosave memory: [mem 0xfed1c000-0xfed1ffff]
Oct  9 09:31:57 compute-1 kernel: PM: hibernation: Registered nosave memory: [mem 0xfed20000-0xfeffbfff]
Oct  9 09:31:57 compute-1 kernel: PM: hibernation: Registered nosave memory: [mem 0xfeffc000-0xfeffffff]
Oct  9 09:31:57 compute-1 kernel: PM: hibernation: Registered nosave memory: [mem 0xff000000-0xfffbffff]
Oct  9 09:31:57 compute-1 kernel: PM: hibernation: Registered nosave memory: [mem 0xfffc0000-0xffffffff]
Oct  9 09:31:57 compute-1 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Oct  9 09:31:57 compute-1 kernel: Booting paravirtualized kernel on KVM
Oct  9 09:31:57 compute-1 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Oct  9 09:31:57 compute-1 kernel: setup_percpu: NR_CPUS:8192 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Oct  9 09:31:57 compute-1 kernel: percpu: Embedded 64 pages/cpu s225280 r8192 d28672 u524288
Oct  9 09:31:57 compute-1 kernel: kvm-guest: PV spinlocks enabled
Oct  9 09:31:57 compute-1 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Oct  9 09:31:57 compute-1 kernel: Kernel command line: BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-620.el9.x86_64 root=UUID=1631a6ad-43b8-436d-ae76-16fa14b94458 ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Oct  9 09:31:57 compute-1 kernel: Unknown kernel command line parameters "BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-620.el9.x86_64", will be passed to user space.
Oct  9 09:31:57 compute-1 kernel: random: crng init done
Oct  9 09:31:57 compute-1 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear)
Oct  9 09:31:57 compute-1 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Oct  9 09:31:57 compute-1 kernel: Fallback order for Node 0: 0 
Oct  9 09:31:57 compute-1 kernel: Built 1 zonelists, mobility grouping on.  Total pages: 2064091
Oct  9 09:31:57 compute-1 kernel: Policy zone: Normal
Oct  9 09:31:57 compute-1 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Oct  9 09:31:57 compute-1 kernel: software IO TLB: area num 4.
Oct  9 09:31:57 compute-1 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Oct  9 09:31:57 compute-1 kernel: ftrace: allocating 49370 entries in 193 pages
Oct  9 09:31:57 compute-1 kernel: ftrace: allocated 193 pages with 3 groups
Oct  9 09:31:57 compute-1 kernel: Dynamic Preempt: voluntary
Oct  9 09:31:57 compute-1 kernel: rcu: Preemptible hierarchical RCU implementation.
Oct  9 09:31:57 compute-1 kernel: rcu: #011RCU event tracing is enabled.
Oct  9 09:31:57 compute-1 kernel: rcu: #011RCU restricting CPUs from NR_CPUS=8192 to nr_cpu_ids=4.
Oct  9 09:31:57 compute-1 kernel: #011Trampoline variant of Tasks RCU enabled.
Oct  9 09:31:57 compute-1 kernel: #011Rude variant of Tasks RCU enabled.
Oct  9 09:31:57 compute-1 kernel: #011Tracing variant of Tasks RCU enabled.
Oct  9 09:31:57 compute-1 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Oct  9 09:31:57 compute-1 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Oct  9 09:31:57 compute-1 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Oct  9 09:31:57 compute-1 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Oct  9 09:31:57 compute-1 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Oct  9 09:31:57 compute-1 kernel: NR_IRQS: 524544, nr_irqs: 456, preallocated irqs: 16
Oct  9 09:31:57 compute-1 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Oct  9 09:31:57 compute-1 kernel: kfence: initialized - using 2097152 bytes for 255 objects at 0x(____ptrval____)-0x(____ptrval____)
Oct  9 09:31:57 compute-1 kernel: Console: colour VGA+ 80x25
Oct  9 09:31:57 compute-1 kernel: printk: console [ttyS0] enabled
Oct  9 09:31:57 compute-1 kernel: ACPI: Core revision 20230331
Oct  9 09:31:57 compute-1 kernel: APIC: Switch to symmetric I/O mode setup
Oct  9 09:31:57 compute-1 kernel: x2apic enabled
Oct  9 09:31:57 compute-1 kernel: APIC: Switched APIC routing to: physical x2apic
Oct  9 09:31:57 compute-1 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Oct  9 09:31:57 compute-1 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Oct  9 09:31:57 compute-1 kernel: kvm-guest: setup PV IPIs
Oct  9 09:31:57 compute-1 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Oct  9 09:31:57 compute-1 kernel: Calibrating delay loop (skipped) preset value.. 4890.81 BogoMIPS (lpj=2445406)
Oct  9 09:31:57 compute-1 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Oct  9 09:31:57 compute-1 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Oct  9 09:31:57 compute-1 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Oct  9 09:31:57 compute-1 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Oct  9 09:31:57 compute-1 kernel: Spectre V2 : Mitigation: Retpolines
Oct  9 09:31:57 compute-1 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Oct  9 09:31:57 compute-1 kernel: Spectre V2 : Enabling Restricted Speculation for firmware calls
Oct  9 09:31:57 compute-1 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Oct  9 09:31:57 compute-1 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Oct  9 09:31:57 compute-1 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Oct  9 09:31:57 compute-1 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Oct  9 09:31:57 compute-1 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Oct  9 09:31:57 compute-1 kernel: Transient Scheduler Attacks: Vulnerable: No microcode
Oct  9 09:31:57 compute-1 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Oct  9 09:31:57 compute-1 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Oct  9 09:31:57 compute-1 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Oct  9 09:31:57 compute-1 kernel: x86/fpu: Supporting XSAVE feature 0x200: 'Protection Keys User registers'
Oct  9 09:31:57 compute-1 kernel: x86/fpu: xstate_offset[2]:  576, xstate_sizes[2]:  256
Oct  9 09:31:57 compute-1 kernel: x86/fpu: xstate_offset[9]:  832, xstate_sizes[9]:    8
Oct  9 09:31:57 compute-1 kernel: x86/fpu: Enabled xstate features 0x207, context size is 840 bytes, using 'compacted' format.
Oct  9 09:31:57 compute-1 kernel: Freeing SMP alternatives memory: 40K
Oct  9 09:31:57 compute-1 kernel: pid_max: default: 32768 minimum: 301
Oct  9 09:31:57 compute-1 kernel: LSM: initializing lsm=lockdown,capability,landlock,yama,integrity,selinux,bpf
Oct  9 09:31:57 compute-1 kernel: landlock: Up and running.
Oct  9 09:31:57 compute-1 kernel: Yama: becoming mindful.
Oct  9 09:31:57 compute-1 kernel: SELinux:  Initializing.
Oct  9 09:31:57 compute-1 kernel: LSM support for eBPF active
Oct  9 09:31:57 compute-1 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Oct  9 09:31:57 compute-1 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Oct  9 09:31:57 compute-1 kernel: smpboot: CPU0: AMD EPYC 7763 64-Core Processor (family: 0x19, model: 0x1, stepping: 0x1)
Oct  9 09:31:57 compute-1 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Oct  9 09:31:57 compute-1 kernel: ... version:                0
Oct  9 09:31:57 compute-1 kernel: ... bit width:              48
Oct  9 09:31:57 compute-1 kernel: ... generic registers:      6
Oct  9 09:31:57 compute-1 kernel: ... value mask:             0000ffffffffffff
Oct  9 09:31:57 compute-1 kernel: ... max period:             00007fffffffffff
Oct  9 09:31:57 compute-1 kernel: ... fixed-purpose events:   0
Oct  9 09:31:57 compute-1 kernel: ... event mask:             000000000000003f
Oct  9 09:31:57 compute-1 kernel: signal: max sigframe size: 3376
Oct  9 09:31:57 compute-1 kernel: rcu: Hierarchical SRCU implementation.
Oct  9 09:31:57 compute-1 kernel: rcu: #011Max phase no-delay instances is 400.
Oct  9 09:31:57 compute-1 kernel: smp: Bringing up secondary CPUs ...
Oct  9 09:31:57 compute-1 kernel: smpboot: x86: Booting SMP configuration:
Oct  9 09:31:57 compute-1 kernel: .... node  #0, CPUs:      #1 #2 #3
Oct  9 09:31:57 compute-1 kernel: smp: Brought up 1 node, 4 CPUs
Oct  9 09:31:57 compute-1 kernel: smpboot: Total of 4 processors activated (19563.24 BogoMIPS)
Oct  9 09:31:57 compute-1 kernel: node 0 deferred pages initialised in 18ms
Oct  9 09:31:57 compute-1 kernel: Memory: 7768032K/8388068K available (16384K kernel code, 5784K rwdata, 13996K rodata, 4068K init, 7304K bss, 615456K reserved, 0K cma-reserved)
Oct  9 09:31:57 compute-1 kernel: devtmpfs: initialized
Oct  9 09:31:57 compute-1 kernel: x86/mm: Memory block size: 128MB
Oct  9 09:31:57 compute-1 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Oct  9 09:31:57 compute-1 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Oct  9 09:31:57 compute-1 kernel: pinctrl core: initialized pinctrl subsystem
Oct  9 09:31:57 compute-1 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Oct  9 09:31:57 compute-1 kernel: DMA: preallocated 1024 KiB GFP_KERNEL pool for atomic allocations
Oct  9 09:31:57 compute-1 kernel: DMA: preallocated 1024 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Oct  9 09:31:57 compute-1 kernel: DMA: preallocated 1024 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Oct  9 09:31:57 compute-1 kernel: audit: initializing netlink subsys (disabled)
Oct  9 09:31:57 compute-1 kernel: audit: type=2000 audit(1760002315.499:1): state=initialized audit_enabled=0 res=1
Oct  9 09:31:57 compute-1 kernel: thermal_sys: Registered thermal governor 'fair_share'
Oct  9 09:31:57 compute-1 kernel: thermal_sys: Registered thermal governor 'step_wise'
Oct  9 09:31:57 compute-1 kernel: thermal_sys: Registered thermal governor 'user_space'
Oct  9 09:31:57 compute-1 kernel: cpuidle: using governor menu
Oct  9 09:31:57 compute-1 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Oct  9 09:31:57 compute-1 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] (base 0xb0000000) for domain 0000 [bus 00-ff]
Oct  9 09:31:57 compute-1 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Oct  9 09:31:57 compute-1 kernel: PCI: Using configuration type 1 for base access
Oct  9 09:31:57 compute-1 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Oct  9 09:31:57 compute-1 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Oct  9 09:31:57 compute-1 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Oct  9 09:31:57 compute-1 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Oct  9 09:31:57 compute-1 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Oct  9 09:31:57 compute-1 kernel: Demotion targets for Node 0: null
Oct  9 09:31:57 compute-1 kernel: cryptd: max_cpu_qlen set to 1000
Oct  9 09:31:57 compute-1 kernel: ACPI: Added _OSI(Module Device)
Oct  9 09:31:57 compute-1 kernel: ACPI: Added _OSI(Processor Device)
Oct  9 09:31:57 compute-1 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Oct  9 09:31:57 compute-1 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Oct  9 09:31:57 compute-1 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Oct  9 09:31:57 compute-1 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Oct  9 09:31:57 compute-1 kernel: ACPI: Interpreter enabled
Oct  9 09:31:57 compute-1 kernel: ACPI: PM: (supports S0 S5)
Oct  9 09:31:57 compute-1 kernel: ACPI: Using IOAPIC for interrupt routing
Oct  9 09:31:57 compute-1 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Oct  9 09:31:57 compute-1 kernel: PCI: Using E820 reservations for host bridge windows
Oct  9 09:31:57 compute-1 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Oct  9 09:31:57 compute-1 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Oct  9 09:31:57 compute-1 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI EDR HPX-Type3]
Oct  9 09:31:57 compute-1 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR DPC]
Oct  9 09:31:57 compute-1 kernel: acpi PNP0A08:00: _OSC: OS now controls [SHPCHotplug PME AER PCIeCapability]
Oct  9 09:31:57 compute-1 kernel: PCI host bridge to bus 0000:00
Oct  9 09:31:57 compute-1 kernel: pci_bus 0000:00: root bus resource [io  0x0000-0x0cf7 window]
Oct  9 09:31:57 compute-1 kernel: pci_bus 0000:00: root bus resource [io  0x0d00-0xffff window]
Oct  9 09:31:57 compute-1 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Oct  9 09:31:57 compute-1 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xafffffff window]
Oct  9 09:31:57 compute-1 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Oct  9 09:31:57 compute-1 kernel: pci_bus 0000:00: root bus resource [mem 0x280000000-0xa7fffffff window]
Oct  9 09:31:57 compute-1 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Oct  9 09:31:57 compute-1 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 conventional PCI endpoint
Oct  9 09:31:57 compute-1 kernel: pci 0000:00:01.0: [1af4:1050] type 00 class 0x030000 conventional PCI endpoint
Oct  9 09:31:57 compute-1 kernel: pci 0000:00:01.0: BAR 0 [mem 0xf9800000-0xf9ffffff pref]
Oct  9 09:31:57 compute-1 kernel: pci 0000:00:01.0: BAR 2 [mem 0xfc200000-0xfc203fff 64bit pref]
Oct  9 09:31:57 compute-1 kernel: pci 0000:00:01.0: BAR 4 [mem 0xfea10000-0xfea10fff]
Oct  9 09:31:57 compute-1 kernel: pci 0000:00:01.0: ROM [mem 0xfea00000-0xfea0ffff pref]
Oct  9 09:31:57 compute-1 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Oct  9 09:31:57 compute-1 kernel: pci 0000:00:02.0: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Oct  9 09:31:57 compute-1 kernel: pci 0000:00:02.0: BAR 0 [mem 0xfea11000-0xfea11fff]
Oct  9 09:31:57 compute-1 kernel: pci 0000:00:02.0: PCI bridge to [bus 01-02]
Oct  9 09:31:57 compute-1 kernel: pci 0000:00:02.0:   bridge window [io  0xc000-0xcfff]
Oct  9 09:31:57 compute-1 kernel: pci 0000:00:02.0:   bridge window [mem 0xfc600000-0xfc9fffff]
Oct  9 09:31:57 compute-1 kernel: pci 0000:00:02.0:   bridge window [mem 0xfc000000-0xfc1fffff 64bit pref]
Oct  9 09:31:57 compute-1 kernel: pci 0000:00:02.1: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Oct  9 09:31:57 compute-1 kernel: pci 0000:00:02.1: BAR 0 [mem 0xfea12000-0xfea12fff]
Oct  9 09:31:57 compute-1 kernel: pci 0000:00:02.1: PCI bridge to [bus 03]
Oct  9 09:31:57 compute-1 kernel: pci 0000:00:02.1:   bridge window [mem 0xfe800000-0xfe9fffff]
Oct  9 09:31:57 compute-1 kernel: pci 0000:00:02.1:   bridge window [mem 0xfbe00000-0xfbffffff 64bit pref]
Oct  9 09:31:57 compute-1 kernel: pci 0000:00:02.2: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Oct  9 09:31:57 compute-1 kernel: pci 0000:00:02.2: BAR 0 [mem 0xfea13000-0xfea13fff]
Oct  9 09:31:57 compute-1 kernel: pci 0000:00:02.2: PCI bridge to [bus 04]
Oct  9 09:31:57 compute-1 kernel: pci 0000:00:02.2:   bridge window [mem 0xfe600000-0xfe7fffff]
Oct  9 09:31:57 compute-1 kernel: pci 0000:00:02.2:   bridge window [mem 0xfbc00000-0xfbdfffff 64bit pref]
Oct  9 09:31:57 compute-1 kernel: pci 0000:00:02.3: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Oct  9 09:31:57 compute-1 kernel: pci 0000:00:02.3: BAR 0 [mem 0xfea14000-0xfea14fff]
Oct  9 09:31:57 compute-1 kernel: pci 0000:00:02.3: PCI bridge to [bus 05]
Oct  9 09:31:57 compute-1 kernel: pci 0000:00:02.3:   bridge window [mem 0xfe400000-0xfe5fffff]
Oct  9 09:31:57 compute-1 kernel: pci 0000:00:02.3:   bridge window [mem 0xfba00000-0xfbbfffff 64bit pref]
Oct  9 09:31:57 compute-1 kernel: pci 0000:00:02.4: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Oct  9 09:31:57 compute-1 kernel: pci 0000:00:02.4: BAR 0 [mem 0xfea15000-0xfea15fff]
Oct  9 09:31:57 compute-1 kernel: pci 0000:00:02.4: PCI bridge to [bus 06]
Oct  9 09:31:57 compute-1 kernel: pci 0000:00:02.4:   bridge window [mem 0xfe200000-0xfe3fffff]
Oct  9 09:31:57 compute-1 kernel: pci 0000:00:02.4:   bridge window [mem 0xfb800000-0xfb9fffff 64bit pref]
Oct  9 09:31:57 compute-1 kernel: pci 0000:00:02.5: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Oct  9 09:31:57 compute-1 kernel: pci 0000:00:02.5: BAR 0 [mem 0xfea16000-0xfea16fff]
Oct  9 09:31:57 compute-1 kernel: pci 0000:00:02.5: PCI bridge to [bus 07]
Oct  9 09:31:57 compute-1 kernel: pci 0000:00:02.5:   bridge window [mem 0xfe000000-0xfe1fffff]
Oct  9 09:31:57 compute-1 kernel: pci 0000:00:02.5:   bridge window [mem 0xfb600000-0xfb7fffff 64bit pref]
Oct  9 09:31:57 compute-1 kernel: pci 0000:00:02.6: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Oct  9 09:31:57 compute-1 kernel: pci 0000:00:02.6: BAR 0 [mem 0xfea17000-0xfea17fff]
Oct  9 09:31:57 compute-1 kernel: pci 0000:00:02.6: PCI bridge to [bus 08]
Oct  9 09:31:57 compute-1 kernel: pci 0000:00:02.6:   bridge window [mem 0xfde00000-0xfdffffff]
Oct  9 09:31:57 compute-1 kernel: pci 0000:00:02.6:   bridge window [mem 0xfb400000-0xfb5fffff 64bit pref]
Oct  9 09:31:57 compute-1 kernel: pci 0000:00:02.7: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Oct  9 09:31:57 compute-1 kernel: pci 0000:00:02.7: BAR 0 [mem 0xfea18000-0xfea18fff]
Oct  9 09:31:57 compute-1 kernel: pci 0000:00:02.7: PCI bridge to [bus 09]
Oct  9 09:31:57 compute-1 kernel: pci 0000:00:02.7:   bridge window [mem 0xfdc00000-0xfddfffff]
Oct  9 09:31:57 compute-1 kernel: pci 0000:00:02.7:   bridge window [mem 0xfb200000-0xfb3fffff 64bit pref]
Oct  9 09:31:57 compute-1 kernel: pci 0000:00:03.0: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Oct  9 09:31:57 compute-1 kernel: pci 0000:00:03.0: BAR 0 [mem 0xfea19000-0xfea19fff]
Oct  9 09:31:57 compute-1 kernel: pci 0000:00:03.0: PCI bridge to [bus 0a]
Oct  9 09:31:57 compute-1 kernel: pci 0000:00:03.0:   bridge window [mem 0xfda00000-0xfdbfffff]
Oct  9 09:31:57 compute-1 kernel: pci 0000:00:03.0:   bridge window [mem 0xfb000000-0xfb1fffff 64bit pref]
Oct  9 09:31:57 compute-1 kernel: pci 0000:00:03.1: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Oct  9 09:31:57 compute-1 kernel: pci 0000:00:03.1: BAR 0 [mem 0xfea1a000-0xfea1afff]
Oct  9 09:31:57 compute-1 kernel: pci 0000:00:03.1: PCI bridge to [bus 0b]
Oct  9 09:31:57 compute-1 kernel: pci 0000:00:03.1:   bridge window [mem 0xfd800000-0xfd9fffff]
Oct  9 09:31:57 compute-1 kernel: pci 0000:00:03.1:   bridge window [mem 0xfae00000-0xfaffffff 64bit pref]
Oct  9 09:31:57 compute-1 kernel: pci 0000:00:03.2: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Oct  9 09:31:57 compute-1 kernel: pci 0000:00:03.2: BAR 0 [mem 0xfea1b000-0xfea1bfff]
Oct  9 09:31:57 compute-1 kernel: pci 0000:00:03.2: PCI bridge to [bus 0c]
Oct  9 09:31:57 compute-1 kernel: pci 0000:00:03.2:   bridge window [mem 0xfd600000-0xfd7fffff]
Oct  9 09:31:57 compute-1 kernel: pci 0000:00:03.2:   bridge window [mem 0xfac00000-0xfadfffff 64bit pref]
Oct  9 09:31:57 compute-1 kernel: pci 0000:00:03.3: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Oct  9 09:31:57 compute-1 kernel: pci 0000:00:03.3: BAR 0 [mem 0xfea1c000-0xfea1cfff]
Oct  9 09:31:57 compute-1 kernel: pci 0000:00:03.3: PCI bridge to [bus 0d]
Oct  9 09:31:57 compute-1 kernel: pci 0000:00:03.3:   bridge window [mem 0xfd400000-0xfd5fffff]
Oct  9 09:31:57 compute-1 kernel: pci 0000:00:03.3:   bridge window [mem 0xfaa00000-0xfabfffff 64bit pref]
Oct  9 09:31:57 compute-1 kernel: pci 0000:00:03.4: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Oct  9 09:31:57 compute-1 kernel: pci 0000:00:03.4: BAR 0 [mem 0xfea1d000-0xfea1dfff]
Oct  9 09:31:57 compute-1 kernel: pci 0000:00:03.4: PCI bridge to [bus 0e]
Oct  9 09:31:57 compute-1 kernel: pci 0000:00:03.4:   bridge window [mem 0xfd200000-0xfd3fffff]
Oct  9 09:31:57 compute-1 kernel: pci 0000:00:03.4:   bridge window [mem 0xfa800000-0xfa9fffff 64bit pref]
Oct  9 09:31:57 compute-1 kernel: pci 0000:00:03.5: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Oct  9 09:31:57 compute-1 kernel: pci 0000:00:03.5: BAR 0 [mem 0xfea1e000-0xfea1efff]
Oct  9 09:31:57 compute-1 kernel: pci 0000:00:03.5: PCI bridge to [bus 0f]
Oct  9 09:31:57 compute-1 kernel: pci 0000:00:03.5:   bridge window [mem 0xfd000000-0xfd1fffff]
Oct  9 09:31:57 compute-1 kernel: pci 0000:00:03.5:   bridge window [mem 0xfa600000-0xfa7fffff 64bit pref]
Oct  9 09:31:57 compute-1 kernel: pci 0000:00:03.6: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Oct  9 09:31:57 compute-1 kernel: pci 0000:00:03.6: BAR 0 [mem 0xfea1f000-0xfea1ffff]
Oct  9 09:31:57 compute-1 kernel: pci 0000:00:03.6: PCI bridge to [bus 10]
Oct  9 09:31:57 compute-1 kernel: pci 0000:00:03.6:   bridge window [mem 0xfce00000-0xfcffffff]
Oct  9 09:31:57 compute-1 kernel: pci 0000:00:03.6:   bridge window [mem 0xfa400000-0xfa5fffff 64bit pref]
Oct  9 09:31:57 compute-1 kernel: pci 0000:00:03.7: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Oct  9 09:31:57 compute-1 kernel: pci 0000:00:03.7: BAR 0 [mem 0xfea20000-0xfea20fff]
Oct  9 09:31:57 compute-1 kernel: pci 0000:00:03.7: PCI bridge to [bus 11]
Oct  9 09:31:57 compute-1 kernel: pci 0000:00:03.7:   bridge window [mem 0xfcc00000-0xfcdfffff]
Oct  9 09:31:57 compute-1 kernel: pci 0000:00:03.7:   bridge window [mem 0xfa200000-0xfa3fffff 64bit pref]
Oct  9 09:31:57 compute-1 kernel: pci 0000:00:04.0: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Oct  9 09:31:57 compute-1 kernel: pci 0000:00:04.0: BAR 0 [mem 0xfea21000-0xfea21fff]
Oct  9 09:31:57 compute-1 kernel: pci 0000:00:04.0: PCI bridge to [bus 12]
Oct  9 09:31:57 compute-1 kernel: pci 0000:00:04.0:   bridge window [mem 0xfca00000-0xfcbfffff]
Oct  9 09:31:57 compute-1 kernel: pci 0000:00:04.0:   bridge window [mem 0xfa000000-0xfa1fffff 64bit pref]
Oct  9 09:31:57 compute-1 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 conventional PCI endpoint
Oct  9 09:31:57 compute-1 kernel: pci 0000:00:1f.0: quirk: [io  0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Oct  9 09:31:57 compute-1 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 conventional PCI endpoint
Oct  9 09:31:57 compute-1 kernel: pci 0000:00:1f.2: BAR 4 [io  0xd040-0xd05f]
Oct  9 09:31:57 compute-1 kernel: pci 0000:00:1f.2: BAR 5 [mem 0xfea22000-0xfea22fff]
Oct  9 09:31:57 compute-1 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 conventional PCI endpoint
Oct  9 09:31:57 compute-1 kernel: pci 0000:00:1f.3: BAR 4 [io  0x0700-0x073f]
Oct  9 09:31:57 compute-1 kernel: pci 0000:01:00.0: [1b36:000e] type 01 class 0x060400 PCIe to PCI/PCI-X bridge
Oct  9 09:31:57 compute-1 kernel: pci 0000:01:00.0: BAR 0 [mem 0xfc800000-0xfc8000ff 64bit]
Oct  9 09:31:57 compute-1 kernel: pci 0000:01:00.0: PCI bridge to [bus 02]
Oct  9 09:31:57 compute-1 kernel: pci 0000:01:00.0:   bridge window [io  0xc000-0xcfff]
Oct  9 09:31:57 compute-1 kernel: pci 0000:01:00.0:   bridge window [mem 0xfc600000-0xfc7fffff]
Oct  9 09:31:57 compute-1 kernel: pci 0000:01:00.0:   bridge window [mem 0xfc000000-0xfc1fffff 64bit pref]
Oct  9 09:31:57 compute-1 kernel: pci 0000:00:02.0: PCI bridge to [bus 01-02]
Oct  9 09:31:57 compute-1 kernel: pci_bus 0000:02: extended config space not accessible
Oct  9 09:31:57 compute-1 kernel: acpiphp: Slot [0] registered
Oct  9 09:31:57 compute-1 kernel: acpiphp: Slot [1] registered
Oct  9 09:31:57 compute-1 kernel: acpiphp: Slot [2] registered
Oct  9 09:31:57 compute-1 kernel: acpiphp: Slot [3] registered
Oct  9 09:31:57 compute-1 kernel: acpiphp: Slot [4] registered
Oct  9 09:31:57 compute-1 kernel: acpiphp: Slot [5] registered
Oct  9 09:31:57 compute-1 kernel: acpiphp: Slot [6] registered
Oct  9 09:31:57 compute-1 kernel: acpiphp: Slot [7] registered
Oct  9 09:31:57 compute-1 kernel: acpiphp: Slot [8] registered
Oct  9 09:31:57 compute-1 kernel: acpiphp: Slot [9] registered
Oct  9 09:31:57 compute-1 kernel: acpiphp: Slot [10] registered
Oct  9 09:31:57 compute-1 kernel: acpiphp: Slot [11] registered
Oct  9 09:31:57 compute-1 kernel: acpiphp: Slot [12] registered
Oct  9 09:31:57 compute-1 kernel: acpiphp: Slot [13] registered
Oct  9 09:31:57 compute-1 kernel: acpiphp: Slot [14] registered
Oct  9 09:31:57 compute-1 kernel: acpiphp: Slot [15] registered
Oct  9 09:31:57 compute-1 kernel: acpiphp: Slot [16] registered
Oct  9 09:31:57 compute-1 kernel: acpiphp: Slot [17] registered
Oct  9 09:31:57 compute-1 kernel: acpiphp: Slot [18] registered
Oct  9 09:31:57 compute-1 kernel: acpiphp: Slot [19] registered
Oct  9 09:31:57 compute-1 kernel: acpiphp: Slot [20] registered
Oct  9 09:31:57 compute-1 kernel: acpiphp: Slot [21] registered
Oct  9 09:31:57 compute-1 kernel: acpiphp: Slot [22] registered
Oct  9 09:31:57 compute-1 kernel: acpiphp: Slot [23] registered
Oct  9 09:31:57 compute-1 kernel: acpiphp: Slot [24] registered
Oct  9 09:31:57 compute-1 kernel: acpiphp: Slot [25] registered
Oct  9 09:31:57 compute-1 kernel: acpiphp: Slot [26] registered
Oct  9 09:31:57 compute-1 kernel: acpiphp: Slot [27] registered
Oct  9 09:31:57 compute-1 kernel: acpiphp: Slot [28] registered
Oct  9 09:31:57 compute-1 kernel: acpiphp: Slot [29] registered
Oct  9 09:31:57 compute-1 kernel: acpiphp: Slot [30] registered
Oct  9 09:31:57 compute-1 kernel: acpiphp: Slot [31] registered
Oct  9 09:31:57 compute-1 kernel: pci 0000:02:01.0: [8086:7020] type 00 class 0x0c0300 conventional PCI endpoint
Oct  9 09:31:57 compute-1 kernel: pci 0000:02:01.0: BAR 4 [io  0xc000-0xc01f]
Oct  9 09:31:57 compute-1 kernel: pci 0000:01:00.0: PCI bridge to [bus 02]
Oct  9 09:31:57 compute-1 kernel: acpiphp: Slot [0-2] registered
Oct  9 09:31:57 compute-1 kernel: pci 0000:03:00.0: [1af4:1041] type 00 class 0x020000 PCIe Endpoint
Oct  9 09:31:57 compute-1 kernel: pci 0000:03:00.0: BAR 1 [mem 0xfe840000-0xfe840fff]
Oct  9 09:31:57 compute-1 kernel: pci 0000:03:00.0: BAR 4 [mem 0xfbe00000-0xfbe03fff 64bit pref]
Oct  9 09:31:57 compute-1 kernel: pci 0000:03:00.0: ROM [mem 0xfe800000-0xfe83ffff pref]
Oct  9 09:31:57 compute-1 kernel: pci 0000:00:02.1: PCI bridge to [bus 03]
Oct  9 09:31:57 compute-1 kernel: acpiphp: Slot [0-3] registered
Oct  9 09:31:57 compute-1 kernel: pci 0000:04:00.0: [1af4:1042] type 00 class 0x010000 PCIe Endpoint
Oct  9 09:31:57 compute-1 kernel: pci 0000:04:00.0: BAR 1 [mem 0xfe600000-0xfe600fff]
Oct  9 09:31:57 compute-1 kernel: pci 0000:04:00.0: BAR 4 [mem 0xfbc00000-0xfbc03fff 64bit pref]
Oct  9 09:31:57 compute-1 kernel: pci 0000:00:02.2: PCI bridge to [bus 04]
Oct  9 09:31:57 compute-1 kernel: acpiphp: Slot [0-4] registered
Oct  9 09:31:57 compute-1 kernel: pci 0000:05:00.0: [1af4:1045] type 00 class 0x00ff00 PCIe Endpoint
Oct  9 09:31:57 compute-1 kernel: pci 0000:05:00.0: BAR 4 [mem 0xfba00000-0xfba03fff 64bit pref]
Oct  9 09:31:57 compute-1 kernel: pci 0000:00:02.3: PCI bridge to [bus 05]
Oct  9 09:31:57 compute-1 kernel: acpiphp: Slot [0-5] registered
Oct  9 09:31:57 compute-1 kernel: pci 0000:06:00.0: [1af4:1044] type 00 class 0x00ff00 PCIe Endpoint
Oct  9 09:31:57 compute-1 kernel: pci 0000:06:00.0: BAR 4 [mem 0xfb800000-0xfb803fff 64bit pref]
Oct  9 09:31:57 compute-1 kernel: pci 0000:00:02.4: PCI bridge to [bus 06]
Oct  9 09:31:57 compute-1 kernel: acpiphp: Slot [0-6] registered
Oct  9 09:31:57 compute-1 kernel: pci 0000:07:00.0: [1af4:1041] type 00 class 0x020000 PCIe Endpoint
Oct  9 09:31:57 compute-1 kernel: pci 0000:07:00.0: BAR 1 [mem 0xfe040000-0xfe040fff]
Oct  9 09:31:57 compute-1 kernel: pci 0000:07:00.0: BAR 4 [mem 0xfb600000-0xfb603fff 64bit pref]
Oct  9 09:31:57 compute-1 kernel: pci 0000:07:00.0: ROM [mem 0xfe000000-0xfe03ffff pref]
Oct  9 09:31:57 compute-1 kernel: pci 0000:00:02.5: PCI bridge to [bus 07]
Oct  9 09:31:57 compute-1 kernel: acpiphp: Slot [0-7] registered
Oct  9 09:31:57 compute-1 kernel: pci 0000:00:02.6: PCI bridge to [bus 08]
Oct  9 09:31:57 compute-1 kernel: acpiphp: Slot [0-8] registered
Oct  9 09:31:57 compute-1 kernel: pci 0000:00:02.7: PCI bridge to [bus 09]
Oct  9 09:31:57 compute-1 kernel: acpiphp: Slot [0-9] registered
Oct  9 09:31:57 compute-1 kernel: pci 0000:00:03.0: PCI bridge to [bus 0a]
Oct  9 09:31:57 compute-1 kernel: acpiphp: Slot [0-10] registered
Oct  9 09:31:57 compute-1 kernel: pci 0000:00:03.1: PCI bridge to [bus 0b]
Oct  9 09:31:57 compute-1 kernel: acpiphp: Slot [0-11] registered
Oct  9 09:31:57 compute-1 kernel: pci 0000:00:03.2: PCI bridge to [bus 0c]
Oct  9 09:31:57 compute-1 kernel: acpiphp: Slot [0-12] registered
Oct  9 09:31:57 compute-1 kernel: pci 0000:00:03.3: PCI bridge to [bus 0d]
Oct  9 09:31:57 compute-1 kernel: acpiphp: Slot [0-13] registered
Oct  9 09:31:57 compute-1 kernel: pci 0000:00:03.4: PCI bridge to [bus 0e]
Oct  9 09:31:57 compute-1 kernel: acpiphp: Slot [0-14] registered
Oct  9 09:31:57 compute-1 kernel: pci 0000:00:03.5: PCI bridge to [bus 0f]
Oct  9 09:31:57 compute-1 kernel: acpiphp: Slot [0-15] registered
Oct  9 09:31:57 compute-1 kernel: pci 0000:00:03.6: PCI bridge to [bus 10]
Oct  9 09:31:57 compute-1 kernel: acpiphp: Slot [0-16] registered
Oct  9 09:31:57 compute-1 kernel: pci 0000:00:03.7: PCI bridge to [bus 11]
Oct  9 09:31:57 compute-1 kernel: acpiphp: Slot [0-17] registered
Oct  9 09:31:57 compute-1 kernel: pci 0000:00:04.0: PCI bridge to [bus 12]
Oct  9 09:31:57 compute-1 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Oct  9 09:31:57 compute-1 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Oct  9 09:31:57 compute-1 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Oct  9 09:31:57 compute-1 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Oct  9 09:31:57 compute-1 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Oct  9 09:31:57 compute-1 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Oct  9 09:31:57 compute-1 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Oct  9 09:31:57 compute-1 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Oct  9 09:31:57 compute-1 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Oct  9 09:31:57 compute-1 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Oct  9 09:31:57 compute-1 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Oct  9 09:31:57 compute-1 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Oct  9 09:31:57 compute-1 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Oct  9 09:31:57 compute-1 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Oct  9 09:31:57 compute-1 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Oct  9 09:31:57 compute-1 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Oct  9 09:31:57 compute-1 kernel: iommu: Default domain type: Translated
Oct  9 09:31:57 compute-1 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Oct  9 09:31:57 compute-1 kernel: SCSI subsystem initialized
Oct  9 09:31:57 compute-1 kernel: ACPI: bus type USB registered
Oct  9 09:31:57 compute-1 kernel: usbcore: registered new interface driver usbfs
Oct  9 09:31:57 compute-1 kernel: usbcore: registered new interface driver hub
Oct  9 09:31:57 compute-1 kernel: usbcore: registered new device driver usb
Oct  9 09:31:57 compute-1 kernel: pps_core: LinuxPPS API ver. 1 registered
Oct  9 09:31:57 compute-1 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti <giometti@linux.it>
Oct  9 09:31:57 compute-1 kernel: PTP clock support registered
Oct  9 09:31:57 compute-1 kernel: EDAC MC: Ver: 3.0.0
Oct  9 09:31:57 compute-1 kernel: NetLabel: Initializing
Oct  9 09:31:57 compute-1 kernel: NetLabel:  domain hash size = 128
Oct  9 09:31:57 compute-1 kernel: NetLabel:  protocols = UNLABELED CIPSOv4 CALIPSO
Oct  9 09:31:57 compute-1 kernel: NetLabel:  unlabeled traffic allowed by default
Oct  9 09:31:57 compute-1 kernel: PCI: Using ACPI for IRQ routing
Oct  9 09:31:57 compute-1 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Oct  9 09:31:57 compute-1 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Oct  9 09:31:57 compute-1 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Oct  9 09:31:57 compute-1 kernel: vgaarb: loaded
Oct  9 09:31:57 compute-1 kernel: clocksource: Switched to clocksource kvm-clock
Oct  9 09:31:57 compute-1 kernel: VFS: Disk quotas dquot_6.6.0
Oct  9 09:31:57 compute-1 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Oct  9 09:31:57 compute-1 kernel: pnp: PnP ACPI init
Oct  9 09:31:57 compute-1 kernel: system 00:04: [mem 0xb0000000-0xbfffffff window] has been reserved
Oct  9 09:31:57 compute-1 kernel: pnp: PnP ACPI: found 5 devices
Oct  9 09:31:57 compute-1 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Oct  9 09:31:57 compute-1 kernel: NET: Registered PF_INET protocol family
Oct  9 09:31:57 compute-1 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Oct  9 09:31:57 compute-1 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear)
Oct  9 09:31:57 compute-1 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Oct  9 09:31:57 compute-1 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear)
Oct  9 09:31:57 compute-1 kernel: TCP bind hash table entries: 65536 (order: 8, 1048576 bytes, linear)
Oct  9 09:31:57 compute-1 kernel: TCP: Hash tables configured (established 65536 bind 65536)
Oct  9 09:31:57 compute-1 kernel: MPTCP token hash table entries: 8192 (order: 5, 196608 bytes, linear)
Oct  9 09:31:57 compute-1 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear)
Oct  9 09:31:57 compute-1 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear)
Oct  9 09:31:57 compute-1 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Oct  9 09:31:57 compute-1 kernel: NET: Registered PF_XDP protocol family
Oct  9 09:31:57 compute-1 kernel: pci 0000:00:02.1: bridge window [io  0x1000-0x0fff] to [bus 03] add_size 1000
Oct  9 09:31:57 compute-1 kernel: pci 0000:00:02.2: bridge window [io  0x1000-0x0fff] to [bus 04] add_size 1000
Oct  9 09:31:57 compute-1 kernel: pci 0000:00:02.3: bridge window [io  0x1000-0x0fff] to [bus 05] add_size 1000
Oct  9 09:31:57 compute-1 kernel: pci 0000:00:02.4: bridge window [io  0x1000-0x0fff] to [bus 06] add_size 1000
Oct  9 09:31:57 compute-1 kernel: pci 0000:00:02.5: bridge window [io  0x1000-0x0fff] to [bus 07] add_size 1000
Oct  9 09:31:57 compute-1 kernel: pci 0000:00:02.6: bridge window [io  0x1000-0x0fff] to [bus 08] add_size 1000
Oct  9 09:31:57 compute-1 kernel: pci 0000:00:02.7: bridge window [io  0x1000-0x0fff] to [bus 09] add_size 1000
Oct  9 09:31:57 compute-1 kernel: pci 0000:00:03.0: bridge window [io  0x1000-0x0fff] to [bus 0a] add_size 1000
Oct  9 09:31:57 compute-1 kernel: pci 0000:00:03.1: bridge window [io  0x1000-0x0fff] to [bus 0b] add_size 1000
Oct  9 09:31:57 compute-1 kernel: pci 0000:00:03.2: bridge window [io  0x1000-0x0fff] to [bus 0c] add_size 1000
Oct  9 09:31:57 compute-1 kernel: pci 0000:00:03.3: bridge window [io  0x1000-0x0fff] to [bus 0d] add_size 1000
Oct  9 09:31:57 compute-1 kernel: pci 0000:00:03.4: bridge window [io  0x1000-0x0fff] to [bus 0e] add_size 1000
Oct  9 09:31:57 compute-1 kernel: pci 0000:00:03.5: bridge window [io  0x1000-0x0fff] to [bus 0f] add_size 1000
Oct  9 09:31:57 compute-1 kernel: pci 0000:00:03.6: bridge window [io  0x1000-0x0fff] to [bus 10] add_size 1000
Oct  9 09:31:57 compute-1 kernel: pci 0000:00:03.7: bridge window [io  0x1000-0x0fff] to [bus 11] add_size 1000
Oct  9 09:31:57 compute-1 kernel: pci 0000:00:04.0: bridge window [io  0x1000-0x0fff] to [bus 12] add_size 1000
Oct  9 09:31:57 compute-1 kernel: pci 0000:00:02.1: bridge window [io  0x1000-0x1fff]: assigned
Oct  9 09:31:57 compute-1 kernel: pci 0000:00:02.2: bridge window [io  0x2000-0x2fff]: assigned
Oct  9 09:31:57 compute-1 kernel: pci 0000:00:02.3: bridge window [io  0x3000-0x3fff]: assigned
Oct  9 09:31:57 compute-1 kernel: pci 0000:00:02.4: bridge window [io  0x4000-0x4fff]: assigned
Oct  9 09:31:57 compute-1 kernel: pci 0000:00:02.5: bridge window [io  0x5000-0x5fff]: assigned
Oct  9 09:31:57 compute-1 kernel: pci 0000:00:02.6: bridge window [io  0x6000-0x6fff]: assigned
Oct  9 09:31:57 compute-1 kernel: pci 0000:00:02.7: bridge window [io  0x7000-0x7fff]: assigned
Oct  9 09:31:57 compute-1 kernel: pci 0000:00:03.0: bridge window [io  0x8000-0x8fff]: assigned
Oct  9 09:31:57 compute-1 kernel: pci 0000:00:03.1: bridge window [io  0x9000-0x9fff]: assigned
Oct  9 09:31:57 compute-1 kernel: pci 0000:00:03.2: bridge window [io  0xa000-0xafff]: assigned
Oct  9 09:31:57 compute-1 kernel: pci 0000:00:03.3: bridge window [io  0xb000-0xbfff]: assigned
Oct  9 09:31:57 compute-1 kernel: pci 0000:00:03.4: bridge window [io  0xe000-0xefff]: assigned
Oct  9 09:31:57 compute-1 kernel: pci 0000:00:03.5: bridge window [io  0xf000-0xffff]: assigned
Oct  9 09:31:57 compute-1 kernel: pci 0000:00:03.6: bridge window [io  size 0x1000]: can't assign; no space
Oct  9 09:31:57 compute-1 kernel: pci 0000:00:03.6: bridge window [io  size 0x1000]: failed to assign
Oct  9 09:31:57 compute-1 kernel: pci 0000:00:03.7: bridge window [io  size 0x1000]: can't assign; no space
Oct  9 09:31:57 compute-1 kernel: pci 0000:00:03.7: bridge window [io  size 0x1000]: failed to assign
Oct  9 09:31:57 compute-1 kernel: pci 0000:00:04.0: bridge window [io  size 0x1000]: can't assign; no space
Oct  9 09:31:57 compute-1 kernel: pci 0000:00:04.0: bridge window [io  size 0x1000]: failed to assign
Oct  9 09:31:57 compute-1 kernel: pci 0000:00:04.0: bridge window [io  0x1000-0x1fff]: assigned
Oct  9 09:31:57 compute-1 kernel: pci 0000:00:03.7: bridge window [io  0x2000-0x2fff]: assigned
Oct  9 09:31:57 compute-1 kernel: pci 0000:00:03.6: bridge window [io  0x3000-0x3fff]: assigned
Oct  9 09:31:57 compute-1 kernel: pci 0000:00:03.5: bridge window [io  0x4000-0x4fff]: assigned
Oct  9 09:31:57 compute-1 kernel: pci 0000:00:03.4: bridge window [io  0x5000-0x5fff]: assigned
Oct  9 09:31:57 compute-1 kernel: pci 0000:00:03.3: bridge window [io  0x6000-0x6fff]: assigned
Oct  9 09:31:57 compute-1 kernel: pci 0000:00:03.2: bridge window [io  0x7000-0x7fff]: assigned
Oct  9 09:31:57 compute-1 kernel: pci 0000:00:03.1: bridge window [io  0x8000-0x8fff]: assigned
Oct  9 09:31:57 compute-1 kernel: pci 0000:00:03.0: bridge window [io  0x9000-0x9fff]: assigned
Oct  9 09:31:57 compute-1 kernel: pci 0000:00:02.7: bridge window [io  0xa000-0xafff]: assigned
Oct  9 09:31:57 compute-1 kernel: pci 0000:00:02.6: bridge window [io  0xb000-0xbfff]: assigned
Oct  9 09:31:57 compute-1 kernel: pci 0000:00:02.5: bridge window [io  0xe000-0xefff]: assigned
Oct  9 09:31:57 compute-1 kernel: pci 0000:00:02.4: bridge window [io  0xf000-0xffff]: assigned
Oct  9 09:31:57 compute-1 kernel: pci 0000:00:02.3: bridge window [io  size 0x1000]: can't assign; no space
Oct  9 09:31:57 compute-1 kernel: pci 0000:00:02.3: bridge window [io  size 0x1000]: failed to assign
Oct  9 09:31:57 compute-1 kernel: pci 0000:00:02.2: bridge window [io  size 0x1000]: can't assign; no space
Oct  9 09:31:57 compute-1 kernel: pci 0000:00:02.2: bridge window [io  size 0x1000]: failed to assign
Oct  9 09:31:57 compute-1 kernel: pci 0000:00:02.1: bridge window [io  size 0x1000]: can't assign; no space
Oct  9 09:31:57 compute-1 kernel: pci 0000:00:02.1: bridge window [io  size 0x1000]: failed to assign
Oct  9 09:31:57 compute-1 kernel: pci 0000:01:00.0: PCI bridge to [bus 02]
Oct  9 09:31:57 compute-1 kernel: pci 0000:01:00.0:   bridge window [io  0xc000-0xcfff]
Oct  9 09:31:57 compute-1 kernel: pci 0000:01:00.0:   bridge window [mem 0xfc600000-0xfc7fffff]
Oct  9 09:31:57 compute-1 kernel: pci 0000:01:00.0:   bridge window [mem 0xfc000000-0xfc1fffff 64bit pref]
Oct  9 09:31:57 compute-1 kernel: pci 0000:00:02.0: PCI bridge to [bus 01-02]
Oct  9 09:31:57 compute-1 kernel: pci 0000:00:02.0:   bridge window [io  0xc000-0xcfff]
Oct  9 09:31:57 compute-1 kernel: pci 0000:00:02.0:   bridge window [mem 0xfc600000-0xfc9fffff]
Oct  9 09:31:57 compute-1 kernel: pci 0000:00:02.0:   bridge window [mem 0xfc000000-0xfc1fffff 64bit pref]
Oct  9 09:31:57 compute-1 kernel: pci 0000:00:02.1: PCI bridge to [bus 03]
Oct  9 09:31:57 compute-1 kernel: pci 0000:00:02.1:   bridge window [mem 0xfe800000-0xfe9fffff]
Oct  9 09:31:57 compute-1 kernel: pci 0000:00:02.1:   bridge window [mem 0xfbe00000-0xfbffffff 64bit pref]
Oct  9 09:31:57 compute-1 kernel: pci 0000:00:02.2: PCI bridge to [bus 04]
Oct  9 09:31:57 compute-1 kernel: pci 0000:00:02.2:   bridge window [mem 0xfe600000-0xfe7fffff]
Oct  9 09:31:57 compute-1 kernel: pci 0000:00:02.2:   bridge window [mem 0xfbc00000-0xfbdfffff 64bit pref]
Oct  9 09:31:57 compute-1 kernel: pci 0000:00:02.3: PCI bridge to [bus 05]
Oct  9 09:31:57 compute-1 kernel: pci 0000:00:02.3:   bridge window [mem 0xfe400000-0xfe5fffff]
Oct  9 09:31:57 compute-1 kernel: pci 0000:00:02.3:   bridge window [mem 0xfba00000-0xfbbfffff 64bit pref]
Oct  9 09:31:57 compute-1 kernel: pci 0000:00:02.4: PCI bridge to [bus 06]
Oct  9 09:31:57 compute-1 kernel: pci 0000:00:02.4:   bridge window [io  0xf000-0xffff]
Oct  9 09:31:57 compute-1 kernel: pci 0000:00:02.4:   bridge window [mem 0xfe200000-0xfe3fffff]
Oct  9 09:31:57 compute-1 kernel: pci 0000:00:02.4:   bridge window [mem 0xfb800000-0xfb9fffff 64bit pref]
Oct  9 09:31:57 compute-1 kernel: pci 0000:00:02.5: PCI bridge to [bus 07]
Oct  9 09:31:57 compute-1 kernel: pci 0000:00:02.5:   bridge window [io  0xe000-0xefff]
Oct  9 09:31:57 compute-1 kernel: pci 0000:00:02.5:   bridge window [mem 0xfe000000-0xfe1fffff]
Oct  9 09:31:57 compute-1 kernel: pci 0000:00:02.5:   bridge window [mem 0xfb600000-0xfb7fffff 64bit pref]
Oct  9 09:31:57 compute-1 kernel: pci 0000:00:02.6: PCI bridge to [bus 08]
Oct  9 09:31:57 compute-1 kernel: pci 0000:00:02.6:   bridge window [io  0xb000-0xbfff]
Oct  9 09:31:57 compute-1 kernel: pci 0000:00:02.6:   bridge window [mem 0xfde00000-0xfdffffff]
Oct  9 09:31:57 compute-1 kernel: pci 0000:00:02.6:   bridge window [mem 0xfb400000-0xfb5fffff 64bit pref]
Oct  9 09:31:57 compute-1 kernel: pci 0000:00:02.7: PCI bridge to [bus 09]
Oct  9 09:31:57 compute-1 kernel: pci 0000:00:02.7:   bridge window [io  0xa000-0xafff]
Oct  9 09:31:57 compute-1 kernel: pci 0000:00:02.7:   bridge window [mem 0xfdc00000-0xfddfffff]
Oct  9 09:31:57 compute-1 kernel: pci 0000:00:02.7:   bridge window [mem 0xfb200000-0xfb3fffff 64bit pref]
Oct  9 09:31:57 compute-1 kernel: pci 0000:00:03.0: PCI bridge to [bus 0a]
Oct  9 09:31:57 compute-1 kernel: pci 0000:00:03.0:   bridge window [io  0x9000-0x9fff]
Oct  9 09:31:57 compute-1 kernel: pci 0000:00:03.0:   bridge window [mem 0xfda00000-0xfdbfffff]
Oct  9 09:31:57 compute-1 kernel: pci 0000:00:03.0:   bridge window [mem 0xfb000000-0xfb1fffff 64bit pref]
Oct  9 09:31:57 compute-1 kernel: pci 0000:00:03.1: PCI bridge to [bus 0b]
Oct  9 09:31:57 compute-1 kernel: pci 0000:00:03.1:   bridge window [io  0x8000-0x8fff]
Oct  9 09:31:57 compute-1 kernel: pci 0000:00:03.1:   bridge window [mem 0xfd800000-0xfd9fffff]
Oct  9 09:31:57 compute-1 kernel: pci 0000:00:03.1:   bridge window [mem 0xfae00000-0xfaffffff 64bit pref]
Oct  9 09:31:57 compute-1 kernel: pci 0000:00:03.2: PCI bridge to [bus 0c]
Oct  9 09:31:57 compute-1 kernel: pci 0000:00:03.2:   bridge window [io  0x7000-0x7fff]
Oct  9 09:31:57 compute-1 kernel: pci 0000:00:03.2:   bridge window [mem 0xfd600000-0xfd7fffff]
Oct  9 09:31:57 compute-1 kernel: pci 0000:00:03.2:   bridge window [mem 0xfac00000-0xfadfffff 64bit pref]
Oct  9 09:31:57 compute-1 kernel: pci 0000:00:03.3: PCI bridge to [bus 0d]
Oct  9 09:31:57 compute-1 kernel: pci 0000:00:03.3:   bridge window [io  0x6000-0x6fff]
Oct  9 09:31:57 compute-1 kernel: pci 0000:00:03.3:   bridge window [mem 0xfd400000-0xfd5fffff]
Oct  9 09:31:57 compute-1 kernel: pci 0000:00:03.3:   bridge window [mem 0xfaa00000-0xfabfffff 64bit pref]
Oct  9 09:31:57 compute-1 kernel: pci 0000:00:03.4: PCI bridge to [bus 0e]
Oct  9 09:31:57 compute-1 kernel: pci 0000:00:03.4:   bridge window [io  0x5000-0x5fff]
Oct  9 09:31:57 compute-1 kernel: pci 0000:00:03.4:   bridge window [mem 0xfd200000-0xfd3fffff]
Oct  9 09:31:57 compute-1 kernel: pci 0000:00:03.4:   bridge window [mem 0xfa800000-0xfa9fffff 64bit pref]
Oct  9 09:31:57 compute-1 kernel: pci 0000:00:03.5: PCI bridge to [bus 0f]
Oct  9 09:31:57 compute-1 kernel: pci 0000:00:03.5:   bridge window [io  0x4000-0x4fff]
Oct  9 09:31:57 compute-1 kernel: pci 0000:00:03.5:   bridge window [mem 0xfd000000-0xfd1fffff]
Oct  9 09:31:57 compute-1 kernel: pci 0000:00:03.5:   bridge window [mem 0xfa600000-0xfa7fffff 64bit pref]
Oct  9 09:31:57 compute-1 kernel: pci 0000:00:03.6: PCI bridge to [bus 10]
Oct  9 09:31:57 compute-1 kernel: pci 0000:00:03.6:   bridge window [io  0x3000-0x3fff]
Oct  9 09:31:57 compute-1 kernel: pci 0000:00:03.6:   bridge window [mem 0xfce00000-0xfcffffff]
Oct  9 09:31:57 compute-1 kernel: pci 0000:00:03.6:   bridge window [mem 0xfa400000-0xfa5fffff 64bit pref]
Oct  9 09:31:57 compute-1 kernel: pci 0000:00:03.7: PCI bridge to [bus 11]
Oct  9 09:31:57 compute-1 kernel: pci 0000:00:03.7:   bridge window [io  0x2000-0x2fff]
Oct  9 09:31:57 compute-1 kernel: pci 0000:00:03.7:   bridge window [mem 0xfcc00000-0xfcdfffff]
Oct  9 09:31:57 compute-1 kernel: pci 0000:00:03.7:   bridge window [mem 0xfa200000-0xfa3fffff 64bit pref]
Oct  9 09:31:57 compute-1 kernel: pci 0000:00:04.0: PCI bridge to [bus 12]
Oct  9 09:31:57 compute-1 kernel: pci 0000:00:04.0:   bridge window [io  0x1000-0x1fff]
Oct  9 09:31:57 compute-1 kernel: pci 0000:00:04.0:   bridge window [mem 0xfca00000-0xfcbfffff]
Oct  9 09:31:57 compute-1 kernel: pci 0000:00:04.0:   bridge window [mem 0xfa000000-0xfa1fffff 64bit pref]
Oct  9 09:31:57 compute-1 kernel: pci_bus 0000:00: resource 4 [io  0x0000-0x0cf7 window]
Oct  9 09:31:57 compute-1 kernel: pci_bus 0000:00: resource 5 [io  0x0d00-0xffff window]
Oct  9 09:31:57 compute-1 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Oct  9 09:31:57 compute-1 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xafffffff window]
Oct  9 09:31:57 compute-1 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Oct  9 09:31:57 compute-1 kernel: pci_bus 0000:00: resource 9 [mem 0x280000000-0xa7fffffff window]
Oct  9 09:31:57 compute-1 kernel: pci_bus 0000:01: resource 0 [io  0xc000-0xcfff]
Oct  9 09:31:57 compute-1 kernel: pci_bus 0000:01: resource 1 [mem 0xfc600000-0xfc9fffff]
Oct  9 09:31:57 compute-1 kernel: pci_bus 0000:01: resource 2 [mem 0xfc000000-0xfc1fffff 64bit pref]
Oct  9 09:31:57 compute-1 kernel: pci_bus 0000:02: resource 0 [io  0xc000-0xcfff]
Oct  9 09:31:57 compute-1 kernel: pci_bus 0000:02: resource 1 [mem 0xfc600000-0xfc7fffff]
Oct  9 09:31:57 compute-1 kernel: pci_bus 0000:02: resource 2 [mem 0xfc000000-0xfc1fffff 64bit pref]
Oct  9 09:31:57 compute-1 kernel: pci_bus 0000:03: resource 1 [mem 0xfe800000-0xfe9fffff]
Oct  9 09:31:57 compute-1 kernel: pci_bus 0000:03: resource 2 [mem 0xfbe00000-0xfbffffff 64bit pref]
Oct  9 09:31:57 compute-1 kernel: pci_bus 0000:04: resource 1 [mem 0xfe600000-0xfe7fffff]
Oct  9 09:31:57 compute-1 kernel: pci_bus 0000:04: resource 2 [mem 0xfbc00000-0xfbdfffff 64bit pref]
Oct  9 09:31:57 compute-1 kernel: pci_bus 0000:05: resource 1 [mem 0xfe400000-0xfe5fffff]
Oct  9 09:31:57 compute-1 kernel: pci_bus 0000:05: resource 2 [mem 0xfba00000-0xfbbfffff 64bit pref]
Oct  9 09:31:57 compute-1 kernel: pci_bus 0000:06: resource 0 [io  0xf000-0xffff]
Oct  9 09:31:57 compute-1 kernel: pci_bus 0000:06: resource 1 [mem 0xfe200000-0xfe3fffff]
Oct  9 09:31:57 compute-1 kernel: pci_bus 0000:06: resource 2 [mem 0xfb800000-0xfb9fffff 64bit pref]
Oct  9 09:31:57 compute-1 kernel: pci_bus 0000:07: resource 0 [io  0xe000-0xefff]
Oct  9 09:31:57 compute-1 kernel: pci_bus 0000:07: resource 1 [mem 0xfe000000-0xfe1fffff]
Oct  9 09:31:57 compute-1 kernel: pci_bus 0000:07: resource 2 [mem 0xfb600000-0xfb7fffff 64bit pref]
Oct  9 09:31:57 compute-1 kernel: pci_bus 0000:08: resource 0 [io  0xb000-0xbfff]
Oct  9 09:31:57 compute-1 kernel: pci_bus 0000:08: resource 1 [mem 0xfde00000-0xfdffffff]
Oct  9 09:31:57 compute-1 kernel: pci_bus 0000:08: resource 2 [mem 0xfb400000-0xfb5fffff 64bit pref]
Oct  9 09:31:57 compute-1 kernel: pci_bus 0000:09: resource 0 [io  0xa000-0xafff]
Oct  9 09:31:57 compute-1 kernel: pci_bus 0000:09: resource 1 [mem 0xfdc00000-0xfddfffff]
Oct  9 09:31:57 compute-1 kernel: pci_bus 0000:09: resource 2 [mem 0xfb200000-0xfb3fffff 64bit pref]
Oct  9 09:31:57 compute-1 kernel: pci_bus 0000:0a: resource 0 [io  0x9000-0x9fff]
Oct  9 09:31:57 compute-1 kernel: pci_bus 0000:0a: resource 1 [mem 0xfda00000-0xfdbfffff]
Oct  9 09:31:57 compute-1 kernel: pci_bus 0000:0a: resource 2 [mem 0xfb000000-0xfb1fffff 64bit pref]
Oct  9 09:31:57 compute-1 kernel: pci_bus 0000:0b: resource 0 [io  0x8000-0x8fff]
Oct  9 09:31:57 compute-1 kernel: pci_bus 0000:0b: resource 1 [mem 0xfd800000-0xfd9fffff]
Oct  9 09:31:57 compute-1 kernel: pci_bus 0000:0b: resource 2 [mem 0xfae00000-0xfaffffff 64bit pref]
Oct  9 09:31:57 compute-1 kernel: pci_bus 0000:0c: resource 0 [io  0x7000-0x7fff]
Oct  9 09:31:57 compute-1 kernel: pci_bus 0000:0c: resource 1 [mem 0xfd600000-0xfd7fffff]
Oct  9 09:31:57 compute-1 kernel: pci_bus 0000:0c: resource 2 [mem 0xfac00000-0xfadfffff 64bit pref]
Oct  9 09:31:57 compute-1 kernel: pci_bus 0000:0d: resource 0 [io  0x6000-0x6fff]
Oct  9 09:31:57 compute-1 kernel: pci_bus 0000:0d: resource 1 [mem 0xfd400000-0xfd5fffff]
Oct  9 09:31:57 compute-1 kernel: pci_bus 0000:0d: resource 2 [mem 0xfaa00000-0xfabfffff 64bit pref]
Oct  9 09:31:57 compute-1 kernel: pci_bus 0000:0e: resource 0 [io  0x5000-0x5fff]
Oct  9 09:31:57 compute-1 kernel: pci_bus 0000:0e: resource 1 [mem 0xfd200000-0xfd3fffff]
Oct  9 09:31:57 compute-1 kernel: pci_bus 0000:0e: resource 2 [mem 0xfa800000-0xfa9fffff 64bit pref]
Oct  9 09:31:57 compute-1 kernel: pci_bus 0000:0f: resource 0 [io  0x4000-0x4fff]
Oct  9 09:31:57 compute-1 kernel: pci_bus 0000:0f: resource 1 [mem 0xfd000000-0xfd1fffff]
Oct  9 09:31:57 compute-1 kernel: pci_bus 0000:0f: resource 2 [mem 0xfa600000-0xfa7fffff 64bit pref]
Oct  9 09:31:57 compute-1 kernel: pci_bus 0000:10: resource 0 [io  0x3000-0x3fff]
Oct  9 09:31:57 compute-1 kernel: pci_bus 0000:10: resource 1 [mem 0xfce00000-0xfcffffff]
Oct  9 09:31:57 compute-1 kernel: pci_bus 0000:10: resource 2 [mem 0xfa400000-0xfa5fffff 64bit pref]
Oct  9 09:31:57 compute-1 kernel: pci_bus 0000:11: resource 0 [io  0x2000-0x2fff]
Oct  9 09:31:57 compute-1 kernel: pci_bus 0000:11: resource 1 [mem 0xfcc00000-0xfcdfffff]
Oct  9 09:31:57 compute-1 kernel: pci_bus 0000:11: resource 2 [mem 0xfa200000-0xfa3fffff 64bit pref]
Oct  9 09:31:57 compute-1 kernel: pci_bus 0000:12: resource 0 [io  0x1000-0x1fff]
Oct  9 09:31:57 compute-1 kernel: pci_bus 0000:12: resource 1 [mem 0xfca00000-0xfcbfffff]
Oct  9 09:31:57 compute-1 kernel: pci_bus 0000:12: resource 2 [mem 0xfa000000-0xfa1fffff 64bit pref]
Oct  9 09:31:57 compute-1 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Oct  9 09:31:57 compute-1 kernel: PCI: CLS 0 bytes, default 64
Oct  9 09:31:57 compute-1 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Oct  9 09:31:57 compute-1 kernel: software IO TLB: mapped [mem 0x000000006b000000-0x000000006f000000] (64MB)
Oct  9 09:31:57 compute-1 kernel: ACPI: bus type thunderbolt registered
Oct  9 09:31:57 compute-1 kernel: Trying to unpack rootfs image as initramfs...
Oct  9 09:31:57 compute-1 kernel: Initialise system trusted keyrings
Oct  9 09:31:57 compute-1 kernel: Key type blacklist registered
Oct  9 09:31:57 compute-1 kernel: workingset: timestamp_bits=36 max_order=21 bucket_order=0
Oct  9 09:31:57 compute-1 kernel: zbud: loaded
Oct  9 09:31:57 compute-1 kernel: integrity: Platform Keyring initialized
Oct  9 09:31:57 compute-1 kernel: integrity: Machine keyring initialized
Oct  9 09:31:57 compute-1 kernel: Freeing initrd memory: 86104K
Oct  9 09:31:57 compute-1 kernel: NET: Registered PF_ALG protocol family
Oct  9 09:31:57 compute-1 kernel: xor: automatically using best checksumming function   avx       
Oct  9 09:31:57 compute-1 kernel: Key type asymmetric registered
Oct  9 09:31:57 compute-1 kernel: Asymmetric key parser 'x509' registered
Oct  9 09:31:57 compute-1 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 246)
Oct  9 09:31:57 compute-1 kernel: io scheduler mq-deadline registered
Oct  9 09:31:57 compute-1 kernel: io scheduler kyber registered
Oct  9 09:31:57 compute-1 kernel: io scheduler bfq registered
Oct  9 09:31:57 compute-1 kernel: atomic64_test: passed for x86-64 platform with CX8 and with SSE
Oct  9 09:31:57 compute-1 kernel: pcieport 0000:00:02.0: PME: Signaling with IRQ 24
Oct  9 09:31:57 compute-1 kernel: pcieport 0000:00:02.0: AER: enabled with IRQ 24
Oct  9 09:31:57 compute-1 kernel: pcieport 0000:00:02.1: PME: Signaling with IRQ 25
Oct  9 09:31:57 compute-1 kernel: pcieport 0000:00:02.1: AER: enabled with IRQ 25
Oct  9 09:31:57 compute-1 kernel: pcieport 0000:00:02.2: PME: Signaling with IRQ 26
Oct  9 09:31:57 compute-1 kernel: pcieport 0000:00:02.2: AER: enabled with IRQ 26
Oct  9 09:31:57 compute-1 kernel: pcieport 0000:00:02.3: PME: Signaling with IRQ 27
Oct  9 09:31:57 compute-1 kernel: pcieport 0000:00:02.3: AER: enabled with IRQ 27
Oct  9 09:31:57 compute-1 kernel: pcieport 0000:00:02.4: PME: Signaling with IRQ 28
Oct  9 09:31:57 compute-1 kernel: pcieport 0000:00:02.4: AER: enabled with IRQ 28
Oct  9 09:31:57 compute-1 kernel: pcieport 0000:00:02.5: PME: Signaling with IRQ 29
Oct  9 09:31:57 compute-1 kernel: pcieport 0000:00:02.5: AER: enabled with IRQ 29
Oct  9 09:31:57 compute-1 kernel: pcieport 0000:00:02.6: PME: Signaling with IRQ 30
Oct  9 09:31:57 compute-1 kernel: pcieport 0000:00:02.6: AER: enabled with IRQ 30
Oct  9 09:31:57 compute-1 kernel: pcieport 0000:00:02.7: PME: Signaling with IRQ 31
Oct  9 09:31:57 compute-1 kernel: pcieport 0000:00:02.7: AER: enabled with IRQ 31
Oct  9 09:31:57 compute-1 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Oct  9 09:31:57 compute-1 kernel: pcieport 0000:00:03.0: PME: Signaling with IRQ 32
Oct  9 09:31:57 compute-1 kernel: pcieport 0000:00:03.0: AER: enabled with IRQ 32
Oct  9 09:31:57 compute-1 kernel: pcieport 0000:00:03.1: PME: Signaling with IRQ 33
Oct  9 09:31:57 compute-1 kernel: pcieport 0000:00:03.1: AER: enabled with IRQ 33
Oct  9 09:31:57 compute-1 kernel: pcieport 0000:00:03.2: PME: Signaling with IRQ 34
Oct  9 09:31:57 compute-1 kernel: pcieport 0000:00:03.2: AER: enabled with IRQ 34
Oct  9 09:31:57 compute-1 kernel: pcieport 0000:00:03.3: PME: Signaling with IRQ 35
Oct  9 09:31:57 compute-1 kernel: pcieport 0000:00:03.3: AER: enabled with IRQ 35
Oct  9 09:31:57 compute-1 kernel: pcieport 0000:00:03.4: PME: Signaling with IRQ 36
Oct  9 09:31:57 compute-1 kernel: pcieport 0000:00:03.4: AER: enabled with IRQ 36
Oct  9 09:31:57 compute-1 kernel: pcieport 0000:00:03.5: PME: Signaling with IRQ 37
Oct  9 09:31:57 compute-1 kernel: pcieport 0000:00:03.5: AER: enabled with IRQ 37
Oct  9 09:31:57 compute-1 kernel: pcieport 0000:00:03.6: PME: Signaling with IRQ 38
Oct  9 09:31:57 compute-1 kernel: pcieport 0000:00:03.6: AER: enabled with IRQ 38
Oct  9 09:31:57 compute-1 kernel: pcieport 0000:00:03.7: PME: Signaling with IRQ 39
Oct  9 09:31:57 compute-1 kernel: pcieport 0000:00:03.7: AER: enabled with IRQ 39
Oct  9 09:31:57 compute-1 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Oct  9 09:31:57 compute-1 kernel: pcieport 0000:00:04.0: PME: Signaling with IRQ 40
Oct  9 09:31:57 compute-1 kernel: pcieport 0000:00:04.0: AER: enabled with IRQ 40
Oct  9 09:31:57 compute-1 kernel: shpchp 0000:01:00.0: HPC vendor_id 1b36 device_id e ss_vid 0 ss_did 0
Oct  9 09:31:57 compute-1 kernel: shpchp 0000:01:00.0: pci_hp_register failed with error -16
Oct  9 09:31:57 compute-1 kernel: shpchp 0000:01:00.0: Slot initialization failed
Oct  9 09:31:57 compute-1 kernel: shpchp: Standard Hot Plug PCI Controller Driver version: 0.4
Oct  9 09:31:57 compute-1 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input0
Oct  9 09:31:57 compute-1 kernel: ACPI: button: Power Button [PWRF]
Oct  9 09:31:57 compute-1 kernel: ACPI: \_SB_.GSIF: Enabled at IRQ 21
Oct  9 09:31:57 compute-1 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Oct  9 09:31:57 compute-1 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Oct  9 09:31:57 compute-1 kernel: Non-volatile memory driver v1.3
Oct  9 09:31:57 compute-1 kernel: rdac: device handler registered
Oct  9 09:31:57 compute-1 kernel: hp_sw: device handler registered
Oct  9 09:31:57 compute-1 kernel: emc: device handler registered
Oct  9 09:31:57 compute-1 kernel: alua: device handler registered
Oct  9 09:31:57 compute-1 kernel: uhci_hcd 0000:02:01.0: UHCI Host Controller
Oct  9 09:31:57 compute-1 kernel: uhci_hcd 0000:02:01.0: new USB bus registered, assigned bus number 1
Oct  9 09:31:57 compute-1 kernel: uhci_hcd 0000:02:01.0: detected 2 ports
Oct  9 09:31:57 compute-1 kernel: uhci_hcd 0000:02:01.0: irq 22, io port 0x0000c000
Oct  9 09:31:57 compute-1 kernel: usb usb1: New USB device found, idVendor=1d6b, idProduct=0001, bcdDevice= 5.14
Oct  9 09:31:57 compute-1 kernel: usb usb1: New USB device strings: Mfr=3, Product=2, SerialNumber=1
Oct  9 09:31:57 compute-1 kernel: usb usb1: Product: UHCI Host Controller
Oct  9 09:31:57 compute-1 kernel: usb usb1: Manufacturer: Linux 5.14.0-620.el9.x86_64 uhci_hcd
Oct  9 09:31:57 compute-1 kernel: usb usb1: SerialNumber: 0000:02:01.0
Oct  9 09:31:57 compute-1 kernel: hub 1-0:1.0: USB hub found
Oct  9 09:31:57 compute-1 kernel: hub 1-0:1.0: 2 ports detected
Oct  9 09:31:57 compute-1 kernel: usbcore: registered new interface driver usbserial_generic
Oct  9 09:31:57 compute-1 kernel: usbserial: USB Serial support registered for generic
Oct  9 09:31:57 compute-1 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Oct  9 09:31:57 compute-1 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Oct  9 09:31:57 compute-1 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Oct  9 09:31:57 compute-1 kernel: mousedev: PS/2 mouse device common for all mice
Oct  9 09:31:57 compute-1 kernel: rtc_cmos 00:03: RTC can wake from S4
Oct  9 09:31:57 compute-1 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input1
Oct  9 09:31:57 compute-1 kernel: rtc_cmos 00:03: registered as rtc0
Oct  9 09:31:57 compute-1 kernel: rtc_cmos 00:03: setting system clock to 2025-10-09T09:31:57 UTC (1760002317)
Oct  9 09:31:57 compute-1 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram
Oct  9 09:31:57 compute-1 kernel: input: VirtualPS/2 VMware VMMouse as /devices/platform/i8042/serio1/input/input4
Oct  9 09:31:57 compute-1 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Oct  9 09:31:57 compute-1 kernel: input: VirtualPS/2 VMware VMMouse as /devices/platform/i8042/serio1/input/input3
Oct  9 09:31:57 compute-1 kernel: hid: raw HID events driver (C) Jiri Kosina
Oct  9 09:31:57 compute-1 kernel: usbcore: registered new interface driver usbhid
Oct  9 09:31:57 compute-1 kernel: usbhid: USB HID core driver
Oct  9 09:31:57 compute-1 kernel: drop_monitor: Initializing network drop monitor service
Oct  9 09:31:57 compute-1 kernel: Initializing XFRM netlink socket
Oct  9 09:31:57 compute-1 kernel: NET: Registered PF_INET6 protocol family
Oct  9 09:31:57 compute-1 kernel: Segment Routing with IPv6
Oct  9 09:31:57 compute-1 kernel: NET: Registered PF_PACKET protocol family
Oct  9 09:31:57 compute-1 kernel: mpls_gso: MPLS GSO support
Oct  9 09:31:57 compute-1 kernel: IPI shorthand broadcast: enabled
Oct  9 09:31:57 compute-1 kernel: AVX2 version of gcm_enc/dec engaged.
Oct  9 09:31:57 compute-1 kernel: AES CTR mode by8 optimization enabled
Oct  9 09:31:57 compute-1 kernel: sched_clock: Marking stable (1135001778, 143075661)->(1389961781, -111884342)
Oct  9 09:31:57 compute-1 kernel: registered taskstats version 1
Oct  9 09:31:57 compute-1 kernel: Loading compiled-in X.509 certificates
Oct  9 09:31:57 compute-1 kernel: Loaded X.509 cert 'The CentOS Project: CentOS Stream kernel signing key: 4ff821c4997fbb659836adb05f5bc400c914e148'
Oct  9 09:31:57 compute-1 kernel: Loaded X.509 cert 'Red Hat Enterprise Linux Driver Update Program (key 3): bf57f3e87362bc7229d9f465321773dfd1f77a80'
Oct  9 09:31:57 compute-1 kernel: Loaded X.509 cert 'Red Hat Enterprise Linux kpatch signing key: 4d38fd864ebe18c5f0b72e3852e2014c3a676fc8'
Oct  9 09:31:57 compute-1 kernel: Loaded X.509 cert 'RH-IMA-CA: Red Hat IMA CA: fb31825dd0e073685b264e3038963673f753959a'
Oct  9 09:31:57 compute-1 kernel: Loaded X.509 cert 'Nvidia GPU OOT signing 001: 55e1cef88193e60419f0b0ec379c49f77545acf0'
Oct  9 09:31:57 compute-1 kernel: Demotion targets for Node 0: null
Oct  9 09:31:57 compute-1 kernel: page_owner is disabled
Oct  9 09:31:57 compute-1 kernel: Key type .fscrypt registered
Oct  9 09:31:57 compute-1 kernel: Key type fscrypt-provisioning registered
Oct  9 09:31:57 compute-1 kernel: Key type big_key registered
Oct  9 09:31:57 compute-1 kernel: Key type encrypted registered
Oct  9 09:31:57 compute-1 kernel: ima: No TPM chip found, activating TPM-bypass!
Oct  9 09:31:57 compute-1 kernel: Loading compiled-in module X.509 certificates
Oct  9 09:31:57 compute-1 kernel: Loaded X.509 cert 'The CentOS Project: CentOS Stream kernel signing key: 4ff821c4997fbb659836adb05f5bc400c914e148'
Oct  9 09:31:57 compute-1 kernel: ima: Allocated hash algorithm: sha256
Oct  9 09:31:57 compute-1 kernel: ima: No architecture policies found
Oct  9 09:31:57 compute-1 kernel: evm: Initialising EVM extended attributes:
Oct  9 09:31:57 compute-1 kernel: evm: security.selinux
Oct  9 09:31:57 compute-1 kernel: evm: security.SMACK64 (disabled)
Oct  9 09:31:57 compute-1 kernel: evm: security.SMACK64EXEC (disabled)
Oct  9 09:31:57 compute-1 kernel: evm: security.SMACK64TRANSMUTE (disabled)
Oct  9 09:31:57 compute-1 kernel: evm: security.SMACK64MMAP (disabled)
Oct  9 09:31:57 compute-1 kernel: evm: security.apparmor (disabled)
Oct  9 09:31:57 compute-1 kernel: evm: security.ima
Oct  9 09:31:57 compute-1 kernel: evm: security.capability
Oct  9 09:31:57 compute-1 kernel: evm: HMAC attrs: 0x1
Oct  9 09:31:57 compute-1 kernel: Running certificate verification RSA selftest
Oct  9 09:31:57 compute-1 kernel: Loaded X.509 cert 'Certificate verification self-testing key: f58703bb33ce1b73ee02eccdee5b8817518fe3db'
Oct  9 09:31:57 compute-1 kernel: Running certificate verification ECDSA selftest
Oct  9 09:31:57 compute-1 kernel: Loaded X.509 cert 'Certificate verification ECDSA self-testing key: 2900bcea1deb7bc8479a84a23d758efdfdd2b2d3'
Oct  9 09:31:57 compute-1 kernel: clk: Disabling unused clocks
Oct  9 09:31:57 compute-1 kernel: Freeing unused decrypted memory: 2028K
Oct  9 09:31:57 compute-1 kernel: Freeing unused kernel image (initmem) memory: 4068K
Oct  9 09:31:57 compute-1 kernel: Write protecting the kernel read-only data: 30720k
Oct  9 09:31:57 compute-1 kernel: Freeing unused kernel image (rodata/data gap) memory: 340K
Oct  9 09:31:57 compute-1 kernel: x86/mm: Checked W+X mappings: passed, no W+X pages found.
Oct  9 09:31:57 compute-1 kernel: usb 1-1: new full-speed USB device number 2 using uhci_hcd
Oct  9 09:31:57 compute-1 kernel: Run /init as init process
Oct  9 09:31:57 compute-1 systemd: systemd 252-55.el9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN -IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK +XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Oct  9 09:31:57 compute-1 systemd: Detected virtualization kvm.
Oct  9 09:31:57 compute-1 systemd: Detected architecture x86-64.
Oct  9 09:31:57 compute-1 systemd: Running in initrd.
Oct  9 09:31:57 compute-1 systemd: No hostname configured, using default hostname.
Oct  9 09:31:57 compute-1 systemd: Hostname set to <localhost>.
Oct  9 09:31:57 compute-1 systemd: Initializing machine ID from VM UUID.
Oct  9 09:31:57 compute-1 systemd: Queued start job for default target Initrd Default Target.
Oct  9 09:31:57 compute-1 systemd: Started Dispatch Password Requests to Console Directory Watch.
Oct  9 09:31:57 compute-1 systemd: Reached target Local Encrypted Volumes.
Oct  9 09:31:57 compute-1 systemd: Reached target Initrd /usr File System.
Oct  9 09:31:57 compute-1 systemd: Reached target Local File Systems.
Oct  9 09:31:57 compute-1 systemd: Reached target Path Units.
Oct  9 09:31:57 compute-1 systemd: Reached target Slice Units.
Oct  9 09:31:57 compute-1 systemd: Reached target Swaps.
Oct  9 09:31:57 compute-1 systemd: Reached target Timer Units.
Oct  9 09:31:57 compute-1 systemd: Listening on D-Bus System Message Bus Socket.
Oct  9 09:31:57 compute-1 systemd: Listening on Journal Socket (/dev/log).
Oct  9 09:31:57 compute-1 systemd: Listening on Journal Socket.
Oct  9 09:31:57 compute-1 systemd: Listening on udev Control Socket.
Oct  9 09:31:57 compute-1 systemd: Listening on udev Kernel Socket.
Oct  9 09:31:57 compute-1 systemd: Reached target Socket Units.
Oct  9 09:31:57 compute-1 systemd: Starting Create List of Static Device Nodes...
Oct  9 09:31:57 compute-1 systemd: Starting Journal Service...
Oct  9 09:31:57 compute-1 systemd: Load Kernel Modules was skipped because no trigger condition checks were met.
Oct  9 09:31:57 compute-1 systemd: Starting Apply Kernel Variables...
Oct  9 09:31:57 compute-1 systemd: Starting Create System Users...
Oct  9 09:31:57 compute-1 systemd: Starting Setup Virtual Console...
Oct  9 09:31:57 compute-1 systemd: Finished Create List of Static Device Nodes.
Oct  9 09:31:57 compute-1 systemd: Finished Apply Kernel Variables.
Oct  9 09:31:57 compute-1 systemd: Finished Create System Users.
Oct  9 09:31:57 compute-1 kernel: usb 1-1: New USB device found, idVendor=0627, idProduct=0001, bcdDevice= 0.00
Oct  9 09:31:57 compute-1 kernel: usb 1-1: New USB device strings: Mfr=1, Product=3, SerialNumber=10
Oct  9 09:31:57 compute-1 kernel: usb 1-1: Product: QEMU USB Tablet
Oct  9 09:31:57 compute-1 kernel: usb 1-1: Manufacturer: QEMU
Oct  9 09:31:57 compute-1 kernel: usb 1-1: SerialNumber: 28754-0000:00:02.0:00.0:01.0-1
Oct  9 09:31:57 compute-1 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:02.0/0000:01:00.0/0000:02:01.0/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input5
Oct  9 09:31:57 compute-1 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:02:01.0-1/input0
Oct  9 09:31:57 compute-1 systemd-journald[282]: Journal started
Oct  9 09:31:57 compute-1 systemd-journald[282]: Runtime Journal (/run/log/journal/99ca1aa4a8fe49f8801977dd20980206) is 8.0M, max 153.6M, 145.6M free.
Oct  9 09:31:57 compute-1 systemd-sysusers[285]: Creating group 'users' with GID 100.
Oct  9 09:31:57 compute-1 systemd-sysusers[285]: Creating group 'dbus' with GID 81.
Oct  9 09:31:57 compute-1 systemd-sysusers[285]: Creating user 'dbus' (System Message Bus) with UID 81 and GID 81.
Oct  9 09:31:57 compute-1 systemd: Started Journal Service.
Oct  9 09:31:57 compute-1 systemd[1]: Starting Create Static Device Nodes in /dev...
Oct  9 09:31:57 compute-1 systemd[1]: Starting Create Volatile Files and Directories...
Oct  9 09:31:57 compute-1 systemd[1]: Finished Create Static Device Nodes in /dev.
Oct  9 09:31:58 compute-1 systemd[1]: Finished Create Volatile Files and Directories.
Oct  9 09:31:58 compute-1 systemd[1]: Finished Setup Virtual Console.
Oct  9 09:31:58 compute-1 systemd[1]: dracut ask for additional cmdline parameters was skipped because no trigger condition checks were met.
Oct  9 09:31:58 compute-1 systemd[1]: Starting dracut cmdline hook...
Oct  9 09:31:58 compute-1 dracut-cmdline[301]: dracut-9 dracut-057-102.git20250818.el9
Oct  9 09:31:58 compute-1 dracut-cmdline[301]: Using kernel command line parameters:    BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-620.el9.x86_64 root=UUID=1631a6ad-43b8-436d-ae76-16fa14b94458 ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Oct  9 09:31:58 compute-1 systemd[1]: Finished dracut cmdline hook.
Oct  9 09:31:58 compute-1 systemd[1]: Starting dracut pre-udev hook...
Oct  9 09:31:58 compute-1 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Oct  9 09:31:58 compute-1 kernel: device-mapper: uevent: version 1.0.3
Oct  9 09:31:58 compute-1 kernel: device-mapper: ioctl: 4.50.0-ioctl (2025-04-28) initialised: dm-devel@lists.linux.dev
Oct  9 09:31:58 compute-1 kernel: RPC: Registered named UNIX socket transport module.
Oct  9 09:31:58 compute-1 kernel: RPC: Registered udp transport module.
Oct  9 09:31:58 compute-1 kernel: RPC: Registered tcp transport module.
Oct  9 09:31:58 compute-1 kernel: RPC: Registered tcp-with-tls transport module.
Oct  9 09:31:58 compute-1 kernel: RPC: Registered tcp NFSv4.1 backchannel transport module.
Oct  9 09:31:58 compute-1 rpc.statd[416]: Version 2.5.4 starting
Oct  9 09:31:58 compute-1 rpc.statd[416]: Initializing NSM state
Oct  9 09:31:58 compute-1 rpc.idmapd[421]: Setting log level to 0
Oct  9 09:31:58 compute-1 systemd[1]: Finished dracut pre-udev hook.
Oct  9 09:31:58 compute-1 systemd[1]: Starting Rule-based Manager for Device Events and Files...
Oct  9 09:31:58 compute-1 systemd-udevd[434]: Using default interface naming scheme 'rhel-9.0'.
Oct  9 09:31:58 compute-1 systemd[1]: Started Rule-based Manager for Device Events and Files.
Oct  9 09:31:58 compute-1 systemd[1]: Starting dracut pre-trigger hook...
Oct  9 09:31:58 compute-1 systemd[1]: Finished dracut pre-trigger hook.
Oct  9 09:31:58 compute-1 systemd[1]: Starting Coldplug All udev Devices...
Oct  9 09:31:58 compute-1 systemd[1]: Created slice Slice /system/modprobe.
Oct  9 09:31:58 compute-1 systemd[1]: Starting Load Kernel Module configfs...
Oct  9 09:31:58 compute-1 systemd[1]: Finished Coldplug All udev Devices.
Oct  9 09:31:58 compute-1 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Oct  9 09:31:58 compute-1 systemd[1]: Finished Load Kernel Module configfs.
Oct  9 09:31:58 compute-1 systemd[1]: nm-initrd.service was skipped because of an unmet condition check (ConditionPathExists=/run/NetworkManager/initrd/neednet).
Oct  9 09:31:58 compute-1 systemd[1]: Reached target Network.
Oct  9 09:31:58 compute-1 systemd[1]: nm-wait-online-initrd.service was skipped because of an unmet condition check (ConditionPathExists=/run/NetworkManager/initrd/neednet).
Oct  9 09:31:58 compute-1 systemd[1]: Starting dracut initqueue hook...
Oct  9 09:31:58 compute-1 kernel: virtio_blk virtio2: 4/0/0 default/read/poll queues
Oct  9 09:31:58 compute-1 kernel: virtio_blk virtio2: [vda] 167772160 512-byte logical blocks (85.9 GB/80.0 GiB)
Oct  9 09:31:58 compute-1 kernel: vda: vda1
Oct  9 09:31:58 compute-1 systemd-udevd[449]: Network interface NamePolicy= disabled on kernel command line.
Oct  9 09:31:58 compute-1 systemd-udevd[451]: Network interface NamePolicy= disabled on kernel command line.
Oct  9 09:31:58 compute-1 systemd[1]: Found device /dev/disk/by-uuid/1631a6ad-43b8-436d-ae76-16fa14b94458.
Oct  9 09:31:58 compute-1 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Oct  9 09:31:58 compute-1 kernel: ahci 0000:00:1f.2: AHCI vers 0001.0000, 32 command slots, 1.5 Gbps, SATA mode
Oct  9 09:31:58 compute-1 kernel: ahci 0000:00:1f.2: 6/6 ports implemented (port mask 0x3f)
Oct  9 09:31:58 compute-1 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only 
Oct  9 09:31:58 compute-1 kernel: scsi host0: ahci
Oct  9 09:31:58 compute-1 kernel: scsi host1: ahci
Oct  9 09:31:58 compute-1 kernel: scsi host2: ahci
Oct  9 09:31:58 compute-1 kernel: scsi host3: ahci
Oct  9 09:31:58 compute-1 kernel: scsi host4: ahci
Oct  9 09:31:58 compute-1 kernel: scsi host5: ahci
Oct  9 09:31:58 compute-1 kernel: ata1: SATA max UDMA/133 abar m4096@0xfea22000 port 0xfea22100 irq 52 lpm-pol 0
Oct  9 09:31:58 compute-1 kernel: ata2: SATA max UDMA/133 abar m4096@0xfea22000 port 0xfea22180 irq 52 lpm-pol 0
Oct  9 09:31:58 compute-1 kernel: ata3: SATA max UDMA/133 abar m4096@0xfea22000 port 0xfea22200 irq 52 lpm-pol 0
Oct  9 09:31:58 compute-1 kernel: ata4: SATA max UDMA/133 abar m4096@0xfea22000 port 0xfea22280 irq 52 lpm-pol 0
Oct  9 09:31:58 compute-1 kernel: ata5: SATA max UDMA/133 abar m4096@0xfea22000 port 0xfea22300 irq 52 lpm-pol 0
Oct  9 09:31:58 compute-1 kernel: ata6: SATA max UDMA/133 abar m4096@0xfea22000 port 0xfea22380 irq 52 lpm-pol 0
Oct  9 09:31:58 compute-1 systemd[1]: Reached target Initrd Root Device.
Oct  9 09:31:58 compute-1 kernel: ata3: SATA link down (SStatus 0 SControl 300)
Oct  9 09:31:58 compute-1 kernel: ata1: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
Oct  9 09:31:58 compute-1 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Oct  9 09:31:58 compute-1 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Oct  9 09:31:58 compute-1 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Oct  9 09:31:58 compute-1 kernel: ata1.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Oct  9 09:31:58 compute-1 kernel: ata1.00: applying bridge limits
Oct  9 09:31:58 compute-1 kernel: ata1.00: configured for UDMA/100
Oct  9 09:31:58 compute-1 kernel: scsi 0:0:0:0: CD-ROM            QEMU     QEMU DVD-ROM     2.5+ PQ: 0 ANSI: 5
Oct  9 09:31:58 compute-1 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Oct  9 09:31:58 compute-1 kernel: scsi 0:0:0:0: Attached scsi generic sg0 type 5
Oct  9 09:31:58 compute-1 kernel: sr 0:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Oct  9 09:31:58 compute-1 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Oct  9 09:31:58 compute-1 systemd[1]: Mounting Kernel Configuration File System...
Oct  9 09:31:58 compute-1 systemd[1]: Mounted Kernel Configuration File System.
Oct  9 09:31:58 compute-1 systemd[1]: Reached target System Initialization.
Oct  9 09:31:58 compute-1 systemd[1]: Reached target Basic System.
Oct  9 09:31:58 compute-1 systemd[1]: Finished dracut initqueue hook.
Oct  9 09:31:58 compute-1 systemd[1]: Reached target Preparation for Remote File Systems.
Oct  9 09:31:58 compute-1 systemd[1]: Reached target Remote Encrypted Volumes.
Oct  9 09:31:58 compute-1 systemd[1]: Reached target Remote File Systems.
Oct  9 09:31:58 compute-1 systemd[1]: Starting dracut pre-mount hook...
Oct  9 09:31:58 compute-1 systemd[1]: Finished dracut pre-mount hook.
Oct  9 09:31:58 compute-1 systemd[1]: Starting File System Check on /dev/disk/by-uuid/1631a6ad-43b8-436d-ae76-16fa14b94458...
Oct  9 09:31:59 compute-1 systemd-fsck[527]: /usr/sbin/fsck.xfs: XFS file system.
Oct  9 09:31:59 compute-1 systemd[1]: Finished File System Check on /dev/disk/by-uuid/1631a6ad-43b8-436d-ae76-16fa14b94458.
Oct  9 09:31:59 compute-1 systemd[1]: Mounting /sysroot...
Oct  9 09:31:59 compute-1 kernel: SGI XFS with ACLs, security attributes, scrub, quota, no debug enabled
Oct  9 09:31:59 compute-1 kernel: XFS (vda1): Mounting V5 Filesystem 1631a6ad-43b8-436d-ae76-16fa14b94458
Oct  9 09:31:59 compute-1 kernel: XFS (vda1): Ending clean mount
Oct  9 09:31:59 compute-1 systemd[1]: Mounted /sysroot.
Oct  9 09:31:59 compute-1 systemd[1]: Reached target Initrd Root File System.
Oct  9 09:31:59 compute-1 systemd[1]: Starting Mountpoints Configured in the Real Root...
Oct  9 09:31:59 compute-1 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Oct  9 09:31:59 compute-1 systemd[1]: Finished Mountpoints Configured in the Real Root.
Oct  9 09:31:59 compute-1 systemd[1]: Reached target Initrd File Systems.
Oct  9 09:31:59 compute-1 systemd[1]: Reached target Initrd Default Target.
Oct  9 09:31:59 compute-1 systemd[1]: Starting dracut mount hook...
Oct  9 09:31:59 compute-1 systemd[1]: Finished dracut mount hook.
Oct  9 09:31:59 compute-1 systemd[1]: Starting dracut pre-pivot and cleanup hook...
Oct  9 09:31:59 compute-1 rpc.idmapd[421]: exiting on signal 15
Oct  9 09:31:59 compute-1 systemd[1]: var-lib-nfs-rpc_pipefs.mount: Deactivated successfully.
Oct  9 09:31:59 compute-1 systemd[1]: Finished dracut pre-pivot and cleanup hook.
Oct  9 09:31:59 compute-1 systemd[1]: Starting Cleaning Up and Shutting Down Daemons...
Oct  9 09:31:59 compute-1 systemd[1]: Stopped target Network.
Oct  9 09:31:59 compute-1 systemd[1]: Stopped target Remote Encrypted Volumes.
Oct  9 09:31:59 compute-1 systemd[1]: Stopped target Timer Units.
Oct  9 09:31:59 compute-1 systemd[1]: dbus.socket: Deactivated successfully.
Oct  9 09:31:59 compute-1 systemd[1]: Closed D-Bus System Message Bus Socket.
Oct  9 09:31:59 compute-1 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Oct  9 09:31:59 compute-1 systemd[1]: Stopped dracut pre-pivot and cleanup hook.
Oct  9 09:31:59 compute-1 systemd[1]: Stopped target Initrd Default Target.
Oct  9 09:31:59 compute-1 systemd[1]: Stopped target Basic System.
Oct  9 09:31:59 compute-1 systemd[1]: Stopped target Initrd Root Device.
Oct  9 09:31:59 compute-1 systemd[1]: Stopped target Initrd /usr File System.
Oct  9 09:31:59 compute-1 systemd[1]: Stopped target Path Units.
Oct  9 09:31:59 compute-1 systemd[1]: Stopped target Remote File Systems.
Oct  9 09:31:59 compute-1 systemd[1]: Stopped target Preparation for Remote File Systems.
Oct  9 09:31:59 compute-1 systemd[1]: Stopped target Slice Units.
Oct  9 09:31:59 compute-1 systemd[1]: Stopped target Socket Units.
Oct  9 09:31:59 compute-1 systemd[1]: Stopped target System Initialization.
Oct  9 09:31:59 compute-1 systemd[1]: Stopped target Local File Systems.
Oct  9 09:31:59 compute-1 systemd[1]: Stopped target Swaps.
Oct  9 09:31:59 compute-1 systemd[1]: dracut-mount.service: Deactivated successfully.
Oct  9 09:31:59 compute-1 systemd[1]: Stopped dracut mount hook.
Oct  9 09:31:59 compute-1 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Oct  9 09:31:59 compute-1 systemd[1]: Stopped dracut pre-mount hook.
Oct  9 09:31:59 compute-1 systemd[1]: Stopped target Local Encrypted Volumes.
Oct  9 09:31:59 compute-1 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Oct  9 09:31:59 compute-1 systemd[1]: Stopped Dispatch Password Requests to Console Directory Watch.
Oct  9 09:31:59 compute-1 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Oct  9 09:31:59 compute-1 systemd[1]: Stopped dracut initqueue hook.
Oct  9 09:31:59 compute-1 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Oct  9 09:31:59 compute-1 systemd[1]: Stopped Apply Kernel Variables.
Oct  9 09:31:59 compute-1 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Oct  9 09:31:59 compute-1 systemd[1]: Stopped Create Volatile Files and Directories.
Oct  9 09:31:59 compute-1 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Oct  9 09:31:59 compute-1 systemd[1]: Stopped Coldplug All udev Devices.
Oct  9 09:31:59 compute-1 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Oct  9 09:31:59 compute-1 systemd[1]: Stopped dracut pre-trigger hook.
Oct  9 09:31:59 compute-1 systemd[1]: Stopping Rule-based Manager for Device Events and Files...
Oct  9 09:31:59 compute-1 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Oct  9 09:31:59 compute-1 systemd[1]: Stopped Setup Virtual Console.
Oct  9 09:31:59 compute-1 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Oct  9 09:31:59 compute-1 systemd[1]: Finished Cleaning Up and Shutting Down Daemons.
Oct  9 09:31:59 compute-1 systemd[1]: systemd-udevd.service: Deactivated successfully.
Oct  9 09:31:59 compute-1 systemd[1]: Stopped Rule-based Manager for Device Events and Files.
Oct  9 09:31:59 compute-1 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Oct  9 09:31:59 compute-1 systemd[1]: Closed udev Control Socket.
Oct  9 09:31:59 compute-1 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Oct  9 09:31:59 compute-1 systemd[1]: Closed udev Kernel Socket.
Oct  9 09:31:59 compute-1 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Oct  9 09:31:59 compute-1 systemd[1]: Stopped dracut pre-udev hook.
Oct  9 09:31:59 compute-1 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Oct  9 09:31:59 compute-1 systemd[1]: Stopped dracut cmdline hook.
Oct  9 09:31:59 compute-1 systemd[1]: Starting Cleanup udev Database...
Oct  9 09:31:59 compute-1 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Oct  9 09:31:59 compute-1 systemd[1]: Stopped Create Static Device Nodes in /dev.
Oct  9 09:31:59 compute-1 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Oct  9 09:31:59 compute-1 systemd[1]: Stopped Create List of Static Device Nodes.
Oct  9 09:31:59 compute-1 systemd[1]: systemd-sysusers.service: Deactivated successfully.
Oct  9 09:31:59 compute-1 systemd[1]: Stopped Create System Users.
Oct  9 09:31:59 compute-1 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Oct  9 09:31:59 compute-1 systemd[1]: Finished Cleanup udev Database.
Oct  9 09:31:59 compute-1 systemd[1]: Reached target Switch Root.
Oct  9 09:31:59 compute-1 systemd[1]: Starting Switch Root...
Oct  9 09:31:59 compute-1 systemd[1]: Switching root.
Oct  9 09:31:59 compute-1 systemd-journald[282]: Received SIGTERM from PID 1 (systemd).
Oct  9 09:31:59 compute-1 systemd-journald[282]: Journal stopped
Oct  9 09:32:00 compute-1 kernel: audit: type=1404 audit(1760002319.671:2): enforcing=1 old_enforcing=0 auid=4294967295 ses=4294967295 enabled=1 old-enabled=1 lsm=selinux res=1
Oct  9 09:32:00 compute-1 kernel: SELinux:  policy capability network_peer_controls=1
Oct  9 09:32:00 compute-1 kernel: SELinux:  policy capability open_perms=1
Oct  9 09:32:00 compute-1 kernel: SELinux:  policy capability extended_socket_class=1
Oct  9 09:32:00 compute-1 kernel: SELinux:  policy capability always_check_network=0
Oct  9 09:32:00 compute-1 kernel: SELinux:  policy capability cgroup_seclabel=1
Oct  9 09:32:00 compute-1 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Oct  9 09:32:00 compute-1 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Oct  9 09:32:00 compute-1 kernel: audit: type=1403 audit(1760002319.779:3): auid=4294967295 ses=4294967295 lsm=selinux res=1
Oct  9 09:32:00 compute-1 systemd: Successfully loaded SELinux policy in 110.681ms.
Oct  9 09:32:00 compute-1 systemd: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 22.967ms.
Oct  9 09:32:00 compute-1 systemd: systemd 252-57.el9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN -IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK +XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Oct  9 09:32:00 compute-1 systemd: Detected virtualization kvm.
Oct  9 09:32:00 compute-1 systemd: Detected architecture x86-64.
Oct  9 09:32:00 compute-1 systemd: Hostname set to <compute-1>.
Oct  9 09:32:00 compute-1 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  9 09:32:00 compute-1 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  9 09:32:00 compute-1 systemd: initrd-switch-root.service: Deactivated successfully.
Oct  9 09:32:00 compute-1 systemd: Stopped Switch Root.
Oct  9 09:32:00 compute-1 systemd: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Oct  9 09:32:00 compute-1 systemd: Created slice Slice /system/getty.
Oct  9 09:32:00 compute-1 systemd: Created slice Slice /system/serial-getty.
Oct  9 09:32:00 compute-1 systemd: Created slice Slice /system/sshd-keygen.
Oct  9 09:32:00 compute-1 systemd: Created slice User and Session Slice.
Oct  9 09:32:00 compute-1 systemd: Started Dispatch Password Requests to Console Directory Watch.
Oct  9 09:32:00 compute-1 systemd: Started Forward Password Requests to Wall Directory Watch.
Oct  9 09:32:00 compute-1 systemd: Set up automount Arbitrary Executable File Formats File System Automount Point.
Oct  9 09:32:00 compute-1 systemd: Reached target Local Encrypted Volumes.
Oct  9 09:32:00 compute-1 systemd: Stopped target Switch Root.
Oct  9 09:32:00 compute-1 systemd: Stopped target Initrd File Systems.
Oct  9 09:32:00 compute-1 systemd: Stopped target Initrd Root File System.
Oct  9 09:32:00 compute-1 systemd: Reached target Local Integrity Protected Volumes.
Oct  9 09:32:00 compute-1 systemd: Reached target Path Units.
Oct  9 09:32:00 compute-1 systemd: Reached target rpc_pipefs.target.
Oct  9 09:32:00 compute-1 systemd: Reached target Slice Units.
Oct  9 09:32:00 compute-1 systemd: Reached target Local Verity Protected Volumes.
Oct  9 09:32:00 compute-1 systemd: Listening on Device-mapper event daemon FIFOs.
Oct  9 09:32:00 compute-1 systemd: Listening on LVM2 poll daemon socket.
Oct  9 09:32:00 compute-1 systemd: Listening on RPCbind Server Activation Socket.
Oct  9 09:32:00 compute-1 systemd: Reached target RPC Port Mapper.
Oct  9 09:32:00 compute-1 systemd: Listening on Process Core Dump Socket.
Oct  9 09:32:00 compute-1 systemd: Listening on initctl Compatibility Named Pipe.
Oct  9 09:32:00 compute-1 systemd: Listening on udev Control Socket.
Oct  9 09:32:00 compute-1 systemd: Listening on udev Kernel Socket.
Oct  9 09:32:00 compute-1 systemd: Mounting Huge Pages File System...
Oct  9 09:32:00 compute-1 systemd: Mounting /dev/hugepages1G...
Oct  9 09:32:00 compute-1 systemd: Mounting /dev/hugepages2M...
Oct  9 09:32:00 compute-1 systemd: Mounting POSIX Message Queue File System...
Oct  9 09:32:00 compute-1 systemd: Mounting Kernel Debug File System...
Oct  9 09:32:00 compute-1 systemd: Mounting Kernel Trace File System...
Oct  9 09:32:00 compute-1 systemd: Kernel Module supporting RPCSEC_GSS was skipped because of an unmet condition check (ConditionPathExists=/etc/krb5.keytab).
Oct  9 09:32:00 compute-1 systemd: Starting Create List of Static Device Nodes...
Oct  9 09:32:00 compute-1 systemd: Load legacy module configuration was skipped because no trigger condition checks were met.
Oct  9 09:32:00 compute-1 systemd: Starting Monitoring of LVM2 mirrors, snapshots etc. using dmeventd or progress polling...
Oct  9 09:32:00 compute-1 systemd: Starting Load Kernel Module configfs...
Oct  9 09:32:00 compute-1 systemd: Starting Load Kernel Module drm...
Oct  9 09:32:00 compute-1 systemd: Starting Load Kernel Module efi_pstore...
Oct  9 09:32:00 compute-1 systemd: Starting Load Kernel Module fuse...
Oct  9 09:32:00 compute-1 systemd: Starting Read and set NIS domainname from /etc/sysconfig/network...
Oct  9 09:32:00 compute-1 systemd: systemd-fsck-root.service: Deactivated successfully.
Oct  9 09:32:00 compute-1 systemd: Stopped File System Check on Root Device.
Oct  9 09:32:00 compute-1 systemd: Stopped Journal Service.
Oct  9 09:32:00 compute-1 kernel: fuse: init (API version 7.37)
Oct  9 09:32:00 compute-1 systemd: Starting Journal Service...
Oct  9 09:32:00 compute-1 systemd: Starting Load Kernel Modules...
Oct  9 09:32:00 compute-1 systemd: Starting Generate network units from Kernel command line...
Oct  9 09:32:00 compute-1 systemd: TPM2 PCR Machine ID Measurement was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Oct  9 09:32:00 compute-1 systemd: Starting Remount Root and Kernel File Systems...
Oct  9 09:32:00 compute-1 systemd: Repartition Root Disk was skipped because no trigger condition checks were met.
Oct  9 09:32:00 compute-1 systemd: Starting Coldplug All udev Devices...
Oct  9 09:32:00 compute-1 kernel: ACPI: bus type drm_connector registered
Oct  9 09:32:00 compute-1 systemd: Mounted Huge Pages File System.
Oct  9 09:32:00 compute-1 systemd: Mounted /dev/hugepages1G.
Oct  9 09:32:00 compute-1 systemd-journald[658]: Journal started
Oct  9 09:32:00 compute-1 systemd-journald[658]: Runtime Journal (/run/log/journal/42833e1b511a402df82cb9cb2fc36491) is 8.0M, max 153.6M, 145.6M free.
Oct  9 09:32:00 compute-1 systemd[1]: Queued start job for default target Multi-User System.
Oct  9 09:32:00 compute-1 systemd[1]: systemd-journald.service: Deactivated successfully.
Oct  9 09:32:00 compute-1 systemd: Started Journal Service.
Oct  9 09:32:00 compute-1 systemd[1]: Mounted /dev/hugepages2M.
Oct  9 09:32:00 compute-1 systemd[1]: Mounted POSIX Message Queue File System.
Oct  9 09:32:00 compute-1 systemd[1]: Mounted Kernel Debug File System.
Oct  9 09:32:00 compute-1 systemd[1]: Mounted Kernel Trace File System.
Oct  9 09:32:00 compute-1 systemd[1]: Finished Create List of Static Device Nodes.
Oct  9 09:32:00 compute-1 systemd[1]: Finished Monitoring of LVM2 mirrors, snapshots etc. using dmeventd or progress polling.
Oct  9 09:32:00 compute-1 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Oct  9 09:32:00 compute-1 systemd[1]: Finished Load Kernel Module configfs.
Oct  9 09:32:00 compute-1 kernel: xfs filesystem being remounted at / supports timestamps until 2038 (0x7fffffff)
Oct  9 09:32:00 compute-1 systemd[1]: modprobe@drm.service: Deactivated successfully.
Oct  9 09:32:00 compute-1 systemd[1]: Finished Load Kernel Module drm.
Oct  9 09:32:00 compute-1 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Oct  9 09:32:00 compute-1 systemd[1]: Finished Load Kernel Module efi_pstore.
Oct  9 09:32:00 compute-1 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Oct  9 09:32:00 compute-1 systemd[1]: Finished Load Kernel Module fuse.
Oct  9 09:32:00 compute-1 systemd[1]: Finished Read and set NIS domainname from /etc/sysconfig/network.
Oct  9 09:32:00 compute-1 systemd[1]: Finished Generate network units from Kernel command line.
Oct  9 09:32:00 compute-1 systemd[1]: Finished Remount Root and Kernel File Systems.
Oct  9 09:32:00 compute-1 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Oct  9 09:32:00 compute-1 systemd-modules-load[659]: Inserted module 'br_netfilter'
Oct  9 09:32:00 compute-1 kernel: Bridge firewalling registered
Oct  9 09:32:00 compute-1 systemd[1]: Activating swap /swap...
Oct  9 09:32:00 compute-1 systemd[1]: Mounting FUSE Control File System...
Oct  9 09:32:00 compute-1 systemd[1]: First Boot Wizard was skipped because of an unmet condition check (ConditionFirstBoot=yes).
Oct  9 09:32:00 compute-1 systemd[1]: Rebuild Hardware Database was skipped because of an unmet condition check (ConditionNeedsUpdate=/etc).
Oct  9 09:32:00 compute-1 kernel: Adding 1048572k swap on /swap.  Priority:-2 extents:1 across:1048572k 
Oct  9 09:32:00 compute-1 systemd[1]: Starting Flush Journal to Persistent Storage...
Oct  9 09:32:00 compute-1 systemd[1]: Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Oct  9 09:32:00 compute-1 systemd[1]: Starting Load/Save OS Random Seed...
Oct  9 09:32:00 compute-1 systemd[1]: Create System Users was skipped because no trigger condition checks were met.
Oct  9 09:32:00 compute-1 systemd[1]: Starting Create Static Device Nodes in /dev...
Oct  9 09:32:00 compute-1 systemd[1]: Activated swap /swap.
Oct  9 09:32:00 compute-1 systemd-journald[658]: Time spent on flushing to /var/log/journal/42833e1b511a402df82cb9cb2fc36491 is 7.586ms for 1155 entries.
Oct  9 09:32:00 compute-1 systemd-journald[658]: System Journal (/var/log/journal/42833e1b511a402df82cb9cb2fc36491) is 8.0M, max 4.0G, 3.9G free.
Oct  9 09:32:00 compute-1 systemd-journald[658]: Received client request to flush runtime journal.
Oct  9 09:32:00 compute-1 systemd[1]: Mounted FUSE Control File System.
Oct  9 09:32:00 compute-1 systemd[1]: Reached target Swaps.
Oct  9 09:32:00 compute-1 systemd-modules-load[659]: Inserted module 'nf_conntrack'
Oct  9 09:32:00 compute-1 systemd[1]: Finished Load Kernel Modules.
Oct  9 09:32:00 compute-1 systemd[1]: Starting Apply Kernel Variables...
Oct  9 09:32:00 compute-1 systemd[1]: Finished Flush Journal to Persistent Storage.
Oct  9 09:32:00 compute-1 systemd[1]: Finished Load/Save OS Random Seed.
Oct  9 09:32:00 compute-1 systemd[1]: First Boot Complete was skipped because of an unmet condition check (ConditionFirstBoot=yes).
Oct  9 09:32:00 compute-1 systemd[1]: Finished Apply Kernel Variables.
Oct  9 09:32:00 compute-1 systemd[1]: Finished Create Static Device Nodes in /dev.
Oct  9 09:32:00 compute-1 systemd[1]: Reached target Preparation for Local File Systems.
Oct  9 09:32:00 compute-1 systemd[1]: Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Oct  9 09:32:00 compute-1 systemd[1]: Reached target Local File Systems.
Oct  9 09:32:00 compute-1 systemd[1]: Starting Import network configuration from initramfs...
Oct  9 09:32:00 compute-1 systemd[1]: Rebuild Dynamic Linker Cache was skipped because no trigger condition checks were met.
Oct  9 09:32:00 compute-1 systemd[1]: Mark the need to relabel after reboot was skipped because of an unmet condition check (ConditionSecurity=!selinux).
Oct  9 09:32:00 compute-1 systemd[1]: Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Oct  9 09:32:00 compute-1 systemd[1]: Update Boot Loader Random Seed was skipped because no trigger condition checks were met.
Oct  9 09:32:00 compute-1 systemd[1]: Starting Automatic Boot Loader Update...
Oct  9 09:32:00 compute-1 systemd[1]: Commit a transient machine-id on disk was skipped because of an unmet condition check (ConditionPathIsMountPoint=/etc/machine-id).
Oct  9 09:32:00 compute-1 systemd[1]: Starting Rule-based Manager for Device Events and Files...
Oct  9 09:32:00 compute-1 systemd[1]: Finished Coldplug All udev Devices.
Oct  9 09:32:00 compute-1 bootctl[676]: Couldn't find EFI system partition, skipping.
Oct  9 09:32:00 compute-1 systemd[1]: Finished Automatic Boot Loader Update.
Oct  9 09:32:00 compute-1 systemd[1]: Finished Import network configuration from initramfs.
Oct  9 09:32:00 compute-1 systemd[1]: Starting Create Volatile Files and Directories...
Oct  9 09:32:00 compute-1 systemd-udevd[678]: Using default interface naming scheme 'rhel-9.0'.
Oct  9 09:32:00 compute-1 systemd[1]: Started Rule-based Manager for Device Events and Files.
Oct  9 09:32:00 compute-1 systemd[1]: Starting Load Kernel Module configfs...
Oct  9 09:32:00 compute-1 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Oct  9 09:32:00 compute-1 systemd[1]: Finished Load Kernel Module configfs.
Oct  9 09:32:00 compute-1 systemd[1]: Condition check resulted in /dev/ttyS0 being skipped.
Oct  9 09:32:00 compute-1 systemd-udevd[718]: Network interface NamePolicy= disabled on kernel command line.
Oct  9 09:32:00 compute-1 systemd[1]: Finished Create Volatile Files and Directories.
Oct  9 09:32:00 compute-1 systemd[1]: Starting Security Auditing Service...
Oct  9 09:32:00 compute-1 systemd[1]: Starting RPC Bind...
Oct  9 09:32:00 compute-1 systemd[1]: Rebuild Journal Catalog was skipped because of an unmet condition check (ConditionNeedsUpdate=/var).
Oct  9 09:32:00 compute-1 systemd[1]: Update is Completed was skipped because no trigger condition checks were met.
Oct  9 09:32:00 compute-1 auditd[730]: audit dispatcher initialized with q_depth=2000 and 1 active plugins
Oct  9 09:32:00 compute-1 auditd[730]: Init complete, auditd 3.1.5 listening for events (startup state enable)
Oct  9 09:32:00 compute-1 systemd[1]: Started RPC Bind.
Oct  9 09:32:00 compute-1 kernel: lpc_ich 0000:00:1f.0: I/O space for GPIO uninitialized
Oct  9 09:32:00 compute-1 kernel: input: PC Speaker as /devices/platform/pcspkr/input/input6
Oct  9 09:32:00 compute-1 augenrules[735]: /sbin/augenrules: No change
Oct  9 09:32:00 compute-1 augenrules[754]: No rules
Oct  9 09:32:00 compute-1 augenrules[754]: enabled 1
Oct  9 09:32:00 compute-1 augenrules[754]: failure 1
Oct  9 09:32:00 compute-1 augenrules[754]: pid 730
Oct  9 09:32:00 compute-1 augenrules[754]: rate_limit 0
Oct  9 09:32:00 compute-1 augenrules[754]: backlog_limit 8192
Oct  9 09:32:00 compute-1 augenrules[754]: lost 0
Oct  9 09:32:00 compute-1 augenrules[754]: backlog 4
Oct  9 09:32:00 compute-1 augenrules[754]: backlog_wait_time 60000
Oct  9 09:32:00 compute-1 augenrules[754]: backlog_wait_time_actual 0
Oct  9 09:32:00 compute-1 augenrules[754]: enabled 1
Oct  9 09:32:00 compute-1 augenrules[754]: failure 1
Oct  9 09:32:00 compute-1 augenrules[754]: pid 730
Oct  9 09:32:00 compute-1 augenrules[754]: rate_limit 0
Oct  9 09:32:00 compute-1 augenrules[754]: backlog_limit 8192
Oct  9 09:32:00 compute-1 augenrules[754]: lost 0
Oct  9 09:32:00 compute-1 augenrules[754]: backlog 4
Oct  9 09:32:00 compute-1 augenrules[754]: backlog_wait_time 60000
Oct  9 09:32:00 compute-1 augenrules[754]: backlog_wait_time_actual 0
Oct  9 09:32:00 compute-1 augenrules[754]: enabled 1
Oct  9 09:32:00 compute-1 augenrules[754]: failure 1
Oct  9 09:32:00 compute-1 augenrules[754]: pid 730
Oct  9 09:32:00 compute-1 augenrules[754]: rate_limit 0
Oct  9 09:32:00 compute-1 augenrules[754]: backlog_limit 8192
Oct  9 09:32:00 compute-1 augenrules[754]: lost 0
Oct  9 09:32:00 compute-1 augenrules[754]: backlog 4
Oct  9 09:32:00 compute-1 augenrules[754]: backlog_wait_time 60000
Oct  9 09:32:00 compute-1 augenrules[754]: backlog_wait_time_actual 0
Oct  9 09:32:00 compute-1 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
Oct  9 09:32:00 compute-1 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
Oct  9 09:32:00 compute-1 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Oct  9 09:32:00 compute-1 systemd[1]: Started Security Auditing Service.
Oct  9 09:32:00 compute-1 systemd-udevd[707]: Network interface NamePolicy= disabled on kernel command line.
Oct  9 09:32:00 compute-1 kernel: iTCO_vendor_support: vendor-support=0
Oct  9 09:32:00 compute-1 systemd[1]: Starting Record System Boot/Shutdown in UTMP...
Oct  9 09:32:00 compute-1 kernel: iTCO_wdt iTCO_wdt.1.auto: Found a ICH9 TCO device (Version=2, TCOBASE=0x0660)
Oct  9 09:32:00 compute-1 kernel: iTCO_wdt iTCO_wdt.1.auto: initialized. heartbeat=30 sec (nowayout=0)
Oct  9 09:32:00 compute-1 systemd[1]: Finished Record System Boot/Shutdown in UTMP.
Oct  9 09:32:00 compute-1 kernel: [drm] pci: virtio-vga detected at 0000:00:01.0
Oct  9 09:32:00 compute-1 kernel: virtio-pci 0000:00:01.0: vgaarb: deactivate vga console
Oct  9 09:32:00 compute-1 kernel: Console: switching to colour dummy device 80x25
Oct  9 09:32:00 compute-1 kernel: [drm] features: -virgl +edid -resource_blob -host_visible
Oct  9 09:32:00 compute-1 kernel: [drm] features: -context_init
Oct  9 09:32:00 compute-1 kernel: [drm] number of scanouts: 1
Oct  9 09:32:00 compute-1 kernel: [drm] number of cap sets: 0
Oct  9 09:32:00 compute-1 kernel: [drm] Initialized virtio_gpu 0.1.0 for 0000:00:01.0 on minor 0
Oct  9 09:32:00 compute-1 kernel: fbcon: virtio_gpudrmfb (fb0) is primary device
Oct  9 09:32:00 compute-1 kernel: Console: switching to colour frame buffer device 160x50
Oct  9 09:32:00 compute-1 kernel: virtio-pci 0000:00:01.0: [drm] fb0: virtio_gpudrmfb frame buffer device
Oct  9 09:32:00 compute-1 kernel: kvm_amd: TSC scaling supported
Oct  9 09:32:00 compute-1 kernel: kvm_amd: Nested Virtualization enabled
Oct  9 09:32:00 compute-1 kernel: kvm_amd: Nested Paging enabled
Oct  9 09:32:00 compute-1 kernel: kvm_amd: LBR virtualization supported
Oct  9 09:32:00 compute-1 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported
Oct  9 09:32:00 compute-1 kernel: kvm_amd: Virtual GIF supported
Oct  9 09:32:01 compute-1 systemd[1]: Reached target System Initialization.
Oct  9 09:32:01 compute-1 systemd[1]: Started dnf makecache --timer.
Oct  9 09:32:01 compute-1 systemd[1]: Started Daily rotation of log files.
Oct  9 09:32:01 compute-1 systemd[1]: Started Run system activity accounting tool every 10 minutes.
Oct  9 09:32:01 compute-1 systemd[1]: Started Generate summary of yesterday's process accounting.
Oct  9 09:32:01 compute-1 systemd[1]: Started Daily Cleanup of Temporary Directories.
Oct  9 09:32:01 compute-1 systemd[1]: Started daily update of the root trust anchor for DNSSEC.
Oct  9 09:32:01 compute-1 systemd[1]: Reached target Timer Units.
Oct  9 09:32:01 compute-1 systemd[1]: Listening on D-Bus System Message Bus Socket.
Oct  9 09:32:01 compute-1 systemd[1]: Listening on SSSD Kerberos Cache Manager responder socket.
Oct  9 09:32:01 compute-1 systemd[1]: Reached target Socket Units.
Oct  9 09:32:01 compute-1 systemd[1]: Starting D-Bus System Message Bus...
Oct  9 09:32:01 compute-1 systemd[1]: TPM2 PCR Barrier (Initialization) was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Oct  9 09:32:01 compute-1 systemd[1]: Started D-Bus System Message Bus.
Oct  9 09:32:01 compute-1 systemd[1]: Reached target Basic System.
Oct  9 09:32:01 compute-1 dbus-broker-lau[789]: Ready
Oct  9 09:32:01 compute-1 systemd[1]: Starting NTP client/server...
Oct  9 09:32:01 compute-1 systemd[1]: Starting Cloud-init: Local Stage (pre-network)...
Oct  9 09:32:01 compute-1 systemd[1]: Starting Restore /run/initramfs on shutdown...
Oct  9 09:32:01 compute-1 systemd[1]: Started irqbalance daemon.
Oct  9 09:32:01 compute-1 systemd[1]: Load CPU microcode update was skipped because of an unmet condition check (ConditionPathExists=/sys/devices/system/cpu/microcode/reload).
Oct  9 09:32:01 compute-1 systemd[1]: Starting Create netns directory...
Oct  9 09:32:01 compute-1 systemd[1]: Starting Netfilter Tables...
Oct  9 09:32:01 compute-1 systemd[1]: OpenSSH ecdsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Oct  9 09:32:01 compute-1 systemd[1]: OpenSSH ed25519 Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Oct  9 09:32:01 compute-1 systemd[1]: OpenSSH rsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Oct  9 09:32:01 compute-1 systemd[1]: Reached target sshd-keygen.target.
Oct  9 09:32:01 compute-1 systemd[1]: System Security Services Daemon was skipped because no trigger condition checks were met.
Oct  9 09:32:01 compute-1 systemd[1]: Reached target User and Group Name Lookups.
Oct  9 09:32:01 compute-1 systemd[1]: Starting Resets System Activity Logs...
Oct  9 09:32:01 compute-1 systemd[1]: Starting User Login Management...
Oct  9 09:32:01 compute-1 systemd[1]: Finished Restore /run/initramfs on shutdown.
Oct  9 09:32:01 compute-1 systemd[1]: Finished Resets System Activity Logs.
Oct  9 09:32:01 compute-1 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Oct  9 09:32:01 compute-1 systemd[1]: netns-placeholder.service: Deactivated successfully.
Oct  9 09:32:01 compute-1 systemd[1]: Finished Create netns directory.
Oct  9 09:32:01 compute-1 systemd-logind[798]: New seat seat0.
Oct  9 09:32:01 compute-1 systemd-logind[798]: Watching system buttons on /dev/input/event0 (Power Button)
Oct  9 09:32:01 compute-1 systemd-logind[798]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard)
Oct  9 09:32:01 compute-1 chronyd[805]: chronyd version 4.6.1 starting (+CMDMON +NTP +REFCLOCK +RTC +PRIVDROP +SCFILTER +SIGND +ASYNCDNS +NTS +SECHASH +IPV6 +DEBUG)
Oct  9 09:32:01 compute-1 systemd[1]: Started User Login Management.
Oct  9 09:32:01 compute-1 chronyd[805]: Frequency -10.194 +/- 4.106 ppm read from /var/lib/chrony/drift
Oct  9 09:32:01 compute-1 chronyd[805]: Loaded seccomp filter (level 2)
Oct  9 09:32:01 compute-1 systemd[1]: Started NTP client/server.
Oct  9 09:32:01 compute-1 systemd[1]: Finished Netfilter Tables.
Oct  9 09:32:01 compute-1 cloud-init[824]: Cloud-init v. 24.4-7.el9 running 'init-local' at Thu, 09 Oct 2025 09:32:01 +0000. Up 5.32 seconds.
Oct  9 09:32:01 compute-1 systemd[1]: Finished Cloud-init: Local Stage (pre-network).
Oct  9 09:32:01 compute-1 systemd[1]: Reached target Preparation for Network.
Oct  9 09:32:01 compute-1 systemd[1]: Starting Open vSwitch Database Unit...
Oct  9 09:32:01 compute-1 chown[826]: /usr/bin/chown: cannot access '/run/openvswitch': No such file or directory
Oct  9 09:32:01 compute-1 ovs-ctl[831]: Starting ovsdb-server [  OK  ]
Oct  9 09:32:01 compute-1 ovs-vsctl[880]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait -- init -- set Open_vSwitch . db-version=8.5.1
Oct  9 09:32:02 compute-1 ovs-vsctl[890]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait set Open_vSwitch . ovs-version=3.3.5-115.el9s "external-ids:system-id=\"1479fb1d-afaa-427a-bdce-40294d3573d2\"" "external-ids:rundir=\"/var/run/openvswitch\"" "system-type=\"centos\"" "system-version=\"9\""
Oct  9 09:32:02 compute-1 ovs-ctl[831]: Configuring Open vSwitch system IDs [  OK  ]
Oct  9 09:32:02 compute-1 ovs-ctl[831]: Enabling remote OVSDB managers [  OK  ]
Oct  9 09:32:02 compute-1 systemd[1]: Started Open vSwitch Database Unit.
Oct  9 09:32:02 compute-1 ovs-vsctl[896]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait add Open_vSwitch . external-ids hostname=compute-1
Oct  9 09:32:02 compute-1 systemd[1]: Starting Open vSwitch Delete Transient Ports...
Oct  9 09:32:02 compute-1 systemd[1]: Finished Open vSwitch Delete Transient Ports.
Oct  9 09:32:02 compute-1 systemd[1]: Starting Open vSwitch Forwarding Unit...
Oct  9 09:32:02 compute-1 kernel: openvswitch: Open vSwitch switching datapath
Oct  9 09:32:02 compute-1 ovs-ctl[940]: Inserting openvswitch module [  OK  ]
Oct  9 09:32:02 compute-1 kernel: ovs-system: entered promiscuous mode
Oct  9 09:32:02 compute-1 kernel: Timeout policy base is empty
Oct  9 09:32:02 compute-1 systemd-udevd[699]: Network interface NamePolicy= disabled on kernel command line.
Oct  9 09:32:02 compute-1 kernel: virtio_net virtio5 eth1: entered promiscuous mode
Oct  9 09:32:02 compute-1 kernel: vlan22: entered promiscuous mode
Oct  9 09:32:02 compute-1 systemd-udevd[695]: Network interface NamePolicy= disabled on kernel command line.
Oct  9 09:32:02 compute-1 kernel: vlan20: entered promiscuous mode
Oct  9 09:32:02 compute-1 kernel: vlan21: entered promiscuous mode
Oct  9 09:32:02 compute-1 kernel: vlan23: entered promiscuous mode
Oct  9 09:32:02 compute-1 ovs-ctl[909]: Starting ovs-vswitchd [  OK  ]
Oct  9 09:32:02 compute-1 ovs-ctl[909]: Enabling remote OVSDB managers [  OK  ]
Oct  9 09:32:02 compute-1 systemd[1]: Started Open vSwitch Forwarding Unit.
Oct  9 09:32:02 compute-1 ovs-vsctl[980]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait add Open_vSwitch . external-ids hostname=compute-1
Oct  9 09:32:02 compute-1 systemd[1]: Starting Open vSwitch...
Oct  9 09:32:02 compute-1 systemd[1]: Finished Open vSwitch.
Oct  9 09:32:02 compute-1 systemd[1]: Starting Network Manager...
Oct  9 09:32:02 compute-1 NetworkManager[982]: <info>  [1760002322.3925] NetworkManager (version 1.54.1-1.el9) is starting... (boot:f3feb77a-4486-4a5c-a8ab-abb4fe64c670)
Oct  9 09:32:02 compute-1 NetworkManager[982]: <info>  [1760002322.3927] Read config: /etc/NetworkManager/NetworkManager.conf, /usr/lib/NetworkManager/conf.d/00-server.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf, /etc/NetworkManager/conf.d/99-cloud-init.conf, /var/lib/NetworkManager/NetworkManager-intern.conf
Oct  9 09:32:02 compute-1 NetworkManager[982]: <info>  [1760002322.4012] manager[0x562384388040]: monitoring kernel firmware directory '/lib/firmware'.
Oct  9 09:32:02 compute-1 systemd[1]: Starting Hostname Service...
Oct  9 09:32:02 compute-1 systemd[1]: Started Hostname Service.
Oct  9 09:32:02 compute-1 NetworkManager[982]: <info>  [1760002322.4607] hostname: hostname: using hostnamed
Oct  9 09:32:02 compute-1 NetworkManager[982]: <info>  [1760002322.4608] hostname: static hostname changed from (none) to "compute-1"
Oct  9 09:32:02 compute-1 NetworkManager[982]: <info>  [1760002322.4611] dns-mgr: init: dns=default,systemd-resolved rc-manager=symlink (auto)
Oct  9 09:32:02 compute-1 NetworkManager[982]: <info>  [1760002322.4679] manager[0x562384388040]: rfkill: Wi-Fi hardware radio set enabled
Oct  9 09:32:02 compute-1 NetworkManager[982]: <info>  [1760002322.4680] manager[0x562384388040]: rfkill: WWAN hardware radio set enabled
Oct  9 09:32:02 compute-1 systemd[1]: Listening on Load/Save RF Kill Switch Status /dev/rfkill Watch.
Oct  9 09:32:02 compute-1 NetworkManager[982]: <info>  [1760002322.4715] Loaded device plugin: NMOvsFactory (/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-device-plugin-ovs.so)
Oct  9 09:32:02 compute-1 NetworkManager[982]: <info>  [1760002322.4731] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-device-plugin-team.so)
Oct  9 09:32:02 compute-1 NetworkManager[982]: <info>  [1760002322.4732] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Oct  9 09:32:02 compute-1 NetworkManager[982]: <info>  [1760002322.4732] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Oct  9 09:32:02 compute-1 NetworkManager[982]: <info>  [1760002322.4733] manager: Networking is enabled by state file
Oct  9 09:32:02 compute-1 NetworkManager[982]: <info>  [1760002322.4736] settings: Loaded settings plugin: keyfile (internal)
Oct  9 09:32:02 compute-1 NetworkManager[982]: <info>  [1760002322.4755] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-settings-plugin-ifcfg-rh.so")
Oct  9 09:32:02 compute-1 NetworkManager[982]: <info>  [1760002322.4819] Warning: the ifcfg-rh plugin is deprecated, please migrate connections to the keyfile format using "nmcli connection migrate"
Oct  9 09:32:02 compute-1 NetworkManager[982]: <info>  [1760002322.4834] dhcp: init: Using DHCP client 'internal'
Oct  9 09:32:02 compute-1 NetworkManager[982]: <info>  [1760002322.4836] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1)
Oct  9 09:32:02 compute-1 NetworkManager[982]: <info>  [1760002322.4845] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  9 09:32:02 compute-1 NetworkManager[982]: <info>  [1760002322.4853] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Oct  9 09:32:02 compute-1 NetworkManager[982]: <info>  [1760002322.4858] device (lo): Activation: starting connection 'lo' (34b34808-e917-4b30-9031-19e79dc85e7b)
Oct  9 09:32:02 compute-1 NetworkManager[982]: <info>  [1760002322.4865] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
Oct  9 09:32:02 compute-1 NetworkManager[982]: <info>  [1760002322.4869] device (eth0): state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct  9 09:32:02 compute-1 NetworkManager[982]: <info>  [1760002322.4884] manager: (vlan20): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/3)
Oct  9 09:32:02 compute-1 NetworkManager[982]: <info>  [1760002322.4886] device (vlan20)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct  9 09:32:02 compute-1 NetworkManager[982]: <info>  [1760002322.4898] manager: (vlan21): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/4)
Oct  9 09:32:02 compute-1 NetworkManager[982]: <info>  [1760002322.4900] device (vlan21)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct  9 09:32:02 compute-1 systemd[1]: Starting Network Manager Script Dispatcher Service...
Oct  9 09:32:02 compute-1 NetworkManager[982]: <info>  [1760002322.4911] manager: (vlan22): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/5)
Oct  9 09:32:02 compute-1 NetworkManager[982]: <info>  [1760002322.4912] device (vlan22)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct  9 09:32:02 compute-1 NetworkManager[982]: <info>  [1760002322.4924] manager: (vlan23): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/6)
Oct  9 09:32:02 compute-1 NetworkManager[982]: <info>  [1760002322.4926] device (vlan23)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct  9 09:32:02 compute-1 NetworkManager[982]: <info>  [1760002322.4940] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/7)
Oct  9 09:32:02 compute-1 NetworkManager[982]: <info>  [1760002322.4943] device (eth1): state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct  9 09:32:02 compute-1 NetworkManager[982]: <info>  [1760002322.4955] manager: (br-ex): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/8)
Oct  9 09:32:02 compute-1 NetworkManager[982]: <info>  [1760002322.4957] device (br-ex)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct  9 09:32:02 compute-1 NetworkManager[982]: <info>  [1760002322.4962] manager: (eth1): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/9)
Oct  9 09:32:02 compute-1 NetworkManager[982]: <info>  [1760002322.4963] device (eth1)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct  9 09:32:02 compute-1 NetworkManager[982]: <info>  [1760002322.4968] manager: (br-ex): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/10)
Oct  9 09:32:02 compute-1 NetworkManager[982]: <info>  [1760002322.4970] device (br-ex)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct  9 09:32:02 compute-1 NetworkManager[982]: <info>  [1760002322.4974] manager: (br-ex): new Open vSwitch Bridge device (/org/freedesktop/NetworkManager/Devices/11)
Oct  9 09:32:02 compute-1 NetworkManager[982]: <info>  [1760002322.4975] device (br-ex)[Open vSwitch Bridge]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct  9 09:32:02 compute-1 NetworkManager[982]: <info>  [1760002322.4979] manager: (vlan23): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/12)
Oct  9 09:32:02 compute-1 NetworkManager[982]: <info>  [1760002322.4980] device (vlan23)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct  9 09:32:02 compute-1 NetworkManager[982]: <info>  [1760002322.4985] manager: (vlan21): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/13)
Oct  9 09:32:02 compute-1 NetworkManager[982]: <info>  [1760002322.4986] device (vlan21)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct  9 09:32:02 compute-1 NetworkManager[982]: <info>  [1760002322.4991] manager: (vlan20): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/14)
Oct  9 09:32:02 compute-1 NetworkManager[982]: <info>  [1760002322.4993] device (vlan20)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct  9 09:32:02 compute-1 NetworkManager[982]: <info>  [1760002322.5000] manager: (vlan22): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/15)
Oct  9 09:32:02 compute-1 NetworkManager[982]: <info>  [1760002322.5002] device (vlan22)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct  9 09:32:02 compute-1 systemd[1]: Started Network Manager.
Oct  9 09:32:02 compute-1 NetworkManager[982]: <info>  [1760002322.5010] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Oct  9 09:32:02 compute-1 NetworkManager[982]: <info>  [1760002322.5014] device (lo): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Oct  9 09:32:02 compute-1 NetworkManager[982]: <info>  [1760002322.5015] device (lo): state change: prepare -> config (reason 'none', managed-type: 'external')
Oct  9 09:32:02 compute-1 NetworkManager[982]: <info>  [1760002322.5016] device (lo): state change: config -> ip-config (reason 'none', managed-type: 'external')
Oct  9 09:32:02 compute-1 NetworkManager[982]: <info>  [1760002322.5017] device (eth0): carrier: link connected
Oct  9 09:32:02 compute-1 systemd[1]: Reached target Network.
Oct  9 09:32:02 compute-1 NetworkManager[982]: <info>  [1760002322.5018] device (vlan20)[Open vSwitch Interface]: carrier: link connected
Oct  9 09:32:02 compute-1 NetworkManager[982]: <info>  [1760002322.5019] device (vlan21)[Open vSwitch Interface]: carrier: link connected
Oct  9 09:32:02 compute-1 NetworkManager[982]: <info>  [1760002322.5019] device (vlan22)[Open vSwitch Interface]: carrier: link connected
Oct  9 09:32:02 compute-1 NetworkManager[982]: <info>  [1760002322.5020] device (vlan23)[Open vSwitch Interface]: carrier: link connected
Oct  9 09:32:02 compute-1 NetworkManager[982]: <info>  [1760002322.5020] device (eth1): carrier: link connected
Oct  9 09:32:02 compute-1 NetworkManager[982]: <info>  [1760002322.5026] device (lo): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Oct  9 09:32:02 compute-1 NetworkManager[982]: <info>  [1760002322.5031] device (eth0): state change: unavailable -> disconnected (reason 'carrier-changed', managed-type: 'full')
Oct  9 09:32:02 compute-1 NetworkManager[982]: <info>  [1760002322.5035] device (vlan20)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'carrier-changed', managed-type: 'full')
Oct  9 09:32:02 compute-1 NetworkManager[982]: <info>  [1760002322.5038] device (vlan21)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'carrier-changed', managed-type: 'full')
Oct  9 09:32:02 compute-1 NetworkManager[982]: <info>  [1760002322.5041] device (vlan22)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'carrier-changed', managed-type: 'full')
Oct  9 09:32:02 compute-1 NetworkManager[982]: <info>  [1760002322.5044] device (vlan23)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'carrier-changed', managed-type: 'full')
Oct  9 09:32:02 compute-1 NetworkManager[982]: <info>  [1760002322.5047] device (eth1): state change: unavailable -> disconnected (reason 'carrier-changed', managed-type: 'full')
Oct  9 09:32:02 compute-1 NetworkManager[982]: <info>  [1760002322.5051] device (eth1)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'none', managed-type: 'full')
Oct  9 09:32:02 compute-1 NetworkManager[982]: <info>  [1760002322.5052] device (br-ex)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'none', managed-type: 'full')
Oct  9 09:32:02 compute-1 NetworkManager[982]: <info>  [1760002322.5053] device (br-ex)[Open vSwitch Bridge]: state change: unavailable -> disconnected (reason 'none', managed-type: 'full')
Oct  9 09:32:02 compute-1 NetworkManager[982]: <info>  [1760002322.5055] device (vlan23)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'none', managed-type: 'full')
Oct  9 09:32:02 compute-1 NetworkManager[982]: <info>  [1760002322.5056] device (vlan21)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'none', managed-type: 'full')
Oct  9 09:32:02 compute-1 NetworkManager[982]: <info>  [1760002322.5057] device (vlan20)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'none', managed-type: 'full')
Oct  9 09:32:02 compute-1 NetworkManager[982]: <info>  [1760002322.5058] device (vlan22)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'none', managed-type: 'full')
Oct  9 09:32:02 compute-1 NetworkManager[982]: <info>  [1760002322.5062] policy: auto-activating connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Oct  9 09:32:02 compute-1 NetworkManager[982]: <info>  [1760002322.5063] policy: auto-activating connection 'ci-private-network' (66d16662-8a58-5f35-9b69-4caa739b599b)
Oct  9 09:32:02 compute-1 NetworkManager[982]: <info>  [1760002322.5064] policy: auto-activating connection 'eth1-port' (14a02052-08d6-45e5-a948-6208b3559c65)
Oct  9 09:32:02 compute-1 NetworkManager[982]: <info>  [1760002322.5065] policy: auto-activating connection 'br-ex-port' (375645b6-33fc-4c37-833a-d9e158ec94ba)
Oct  9 09:32:02 compute-1 NetworkManager[982]: <info>  [1760002322.5065] policy: auto-activating connection 'br-ex-br' (4c59ca8f-34eb-40ef-9b98-d07f30800afa)
Oct  9 09:32:02 compute-1 NetworkManager[982]: <info>  [1760002322.5066] policy: auto-activating connection 'vlan23-port' (9f7f603c-9def-45b8-9780-a4b31ecf01c3)
Oct  9 09:32:02 compute-1 NetworkManager[982]: <info>  [1760002322.5067] policy: auto-activating connection 'vlan21-port' (b752045b-4ebf-405c-afa8-7f17ffa854e4)
Oct  9 09:32:02 compute-1 NetworkManager[982]: <info>  [1760002322.5068] policy: auto-activating connection 'vlan20-port' (b852a895-766c-43f9-a4ca-5df9c8f35de0)
Oct  9 09:32:02 compute-1 NetworkManager[982]: <info>  [1760002322.5068] policy: auto-activating connection 'vlan22-port' (bb0b9b67-5f18-4a32-a914-7d219a79010b)
Oct  9 09:32:02 compute-1 systemd[1]: Starting Network Manager Wait Online...
Oct  9 09:32:02 compute-1 NetworkManager[982]: <info>  [1760002322.5071] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Oct  9 09:32:02 compute-1 NetworkManager[982]: <info>  [1760002322.5072] device (eth1): Activation: starting connection 'ci-private-network' (66d16662-8a58-5f35-9b69-4caa739b599b)
Oct  9 09:32:02 compute-1 NetworkManager[982]: <info>  [1760002322.5074] device (eth1)[Open vSwitch Port]: Activation: starting connection 'eth1-port' (14a02052-08d6-45e5-a948-6208b3559c65)
Oct  9 09:32:02 compute-1 NetworkManager[982]: <info>  [1760002322.5075] device (br-ex)[Open vSwitch Port]: Activation: starting connection 'br-ex-port' (375645b6-33fc-4c37-833a-d9e158ec94ba)
Oct  9 09:32:02 compute-1 NetworkManager[982]: <info>  [1760002322.5078] device (br-ex)[Open vSwitch Bridge]: Activation: starting connection 'br-ex-br' (4c59ca8f-34eb-40ef-9b98-d07f30800afa)
Oct  9 09:32:02 compute-1 NetworkManager[982]: <info>  [1760002322.5080] device (vlan23)[Open vSwitch Port]: Activation: starting connection 'vlan23-port' (9f7f603c-9def-45b8-9780-a4b31ecf01c3)
Oct  9 09:32:02 compute-1 NetworkManager[982]: <info>  [1760002322.5081] device (vlan21)[Open vSwitch Port]: Activation: starting connection 'vlan21-port' (b752045b-4ebf-405c-afa8-7f17ffa854e4)
Oct  9 09:32:02 compute-1 NetworkManager[982]: <info>  [1760002322.5082] device (vlan20)[Open vSwitch Port]: Activation: starting connection 'vlan20-port' (b852a895-766c-43f9-a4ca-5df9c8f35de0)
Oct  9 09:32:02 compute-1 NetworkManager[982]: <info>  [1760002322.5083] device (vlan22)[Open vSwitch Port]: Activation: starting connection 'vlan22-port' (bb0b9b67-5f18-4a32-a914-7d219a79010b)
Oct  9 09:32:02 compute-1 NetworkManager[982]: <info>  [1760002322.5085] device (eth0): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct  9 09:32:02 compute-1 NetworkManager[982]: <info>  [1760002322.5086] manager: NetworkManager state is now CONNECTING
Oct  9 09:32:02 compute-1 NetworkManager[982]: <info>  [1760002322.5087] device (eth0): state change: prepare -> config (reason 'none', managed-type: 'full')
Oct  9 09:32:02 compute-1 NetworkManager[982]: <info>  [1760002322.5091] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct  9 09:32:02 compute-1 NetworkManager[982]: <info>  [1760002322.5093] device (eth1): state change: prepare -> deactivating (reason 'new-activation', managed-type: 'full')
Oct  9 09:32:02 compute-1 systemd[1]: Starting GSSAPI Proxy Daemon...
Oct  9 09:32:02 compute-1 kernel: vlan22: left promiscuous mode
Oct  9 09:32:02 compute-1 NetworkManager[982]: <info>  [1760002322.5105] device (eth1): disconnecting for new activation request.
Oct  9 09:32:02 compute-1 NetworkManager[982]: <info>  [1760002322.5112] device (eth1)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct  9 09:32:02 compute-1 NetworkManager[982]: <info>  [1760002322.5118] device (br-ex)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct  9 09:32:02 compute-1 NetworkManager[982]: <info>  [1760002322.5121] device (br-ex)[Open vSwitch Port]: state change: prepare -> deactivating (reason 'new-activation', managed-type: 'full')
Oct  9 09:32:02 compute-1 systemd[1]: Starting Dynamic System Tuning Daemon...
Oct  9 09:32:02 compute-1 NetworkManager[982]: <info>  [1760002322.5126] device (br-ex)[Open vSwitch Port]: disconnecting for new activation request.
Oct  9 09:32:02 compute-1 NetworkManager[982]: <info>  [1760002322.5128] device (eth1)[Open vSwitch Port]: state change: prepare -> deactivating (reason 'new-activation', managed-type: 'full')
Oct  9 09:32:02 compute-1 NetworkManager[982]: <info>  [1760002322.5134] device (eth1)[Open vSwitch Port]: disconnecting for new activation request.
Oct  9 09:32:02 compute-1 NetworkManager[982]: <info>  [1760002322.5134] device (vlan20)[Open vSwitch Port]: state change: disconnected -> deactivating (reason 'new-activation', managed-type: 'full')
Oct  9 09:32:02 compute-1 NetworkManager[982]: <info>  [1760002322.5139] device (vlan20)[Open vSwitch Port]: disconnecting for new activation request.
Oct  9 09:32:02 compute-1 NetworkManager[982]: <info>  [1760002322.5139] device (vlan21)[Open vSwitch Port]: state change: disconnected -> deactivating (reason 'new-activation', managed-type: 'full')
Oct  9 09:32:02 compute-1 systemd[1]: Started Network Manager Script Dispatcher Service.
Oct  9 09:32:02 compute-1 NetworkManager[982]: <info>  [1760002322.5146] device (vlan21)[Open vSwitch Port]: disconnecting for new activation request.
Oct  9 09:32:02 compute-1 NetworkManager[982]: <info>  [1760002322.5151] device (vlan22)[Open vSwitch Port]: state change: disconnected -> deactivating (reason 'new-activation', managed-type: 'full')
Oct  9 09:32:02 compute-1 NetworkManager[982]: <info>  [1760002322.5174] device (vlan22)[Open vSwitch Port]: disconnecting for new activation request.
Oct  9 09:32:02 compute-1 NetworkManager[982]: <info>  [1760002322.5174] device (vlan23)[Open vSwitch Port]: state change: disconnected -> deactivating (reason 'new-activation', managed-type: 'full')
Oct  9 09:32:02 compute-1 NetworkManager[982]: <info>  [1760002322.5184] device (vlan23)[Open vSwitch Port]: disconnecting for new activation request.
Oct  9 09:32:02 compute-1 NetworkManager[982]: <info>  [1760002322.5184] device (br-ex)[Open vSwitch Bridge]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct  9 09:32:02 compute-1 NetworkManager[982]: <info>  [1760002322.5189] device (br-ex)[Open vSwitch Bridge]: state change: prepare -> config (reason 'none', managed-type: 'full')
Oct  9 09:32:02 compute-1 NetworkManager[982]: <info>  [1760002322.5190] device (br-ex)[Open vSwitch Bridge]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct  9 09:32:02 compute-1 NetworkManager[982]: <info>  [1760002322.5199] device (eth0): state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct  9 09:32:02 compute-1 NetworkManager[982]: <info>  [1760002322.5203] dhcp4 (eth0): activation: beginning transaction (no timeout)
Oct  9 09:32:02 compute-1 NetworkManager[982]: <info>  [1760002322.5218] device (eth1): disconnecting for new activation request.
Oct  9 09:32:02 compute-1 NetworkManager[982]: <info>  [1760002322.5221] device (br-ex)[Open vSwitch Bridge]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct  9 09:32:02 compute-1 kernel: vlan23: left promiscuous mode
Oct  9 09:32:02 compute-1 NetworkManager[982]: <info>  [1760002322.5233] device (vlan22)[Open vSwitch Interface]: state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  9 09:32:02 compute-1 NetworkManager[982]: <info>  [1760002322.5240] device (lo): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Oct  9 09:32:02 compute-1 systemd[1]: Started GSSAPI Proxy Daemon.
Oct  9 09:32:02 compute-1 systemd[1]: RPC security service for NFS client and server was skipped because of an unmet condition check (ConditionPathExists=/etc/krb5.keytab).
Oct  9 09:32:02 compute-1 systemd[1]: Reached target NFS client services.
Oct  9 09:32:02 compute-1 systemd[1]: Reached target Preparation for Remote File Systems.
Oct  9 09:32:02 compute-1 systemd[1]: Reached target Remote File Systems.
Oct  9 09:32:02 compute-1 kernel: vlan21: left promiscuous mode
Oct  9 09:32:02 compute-1 systemd[1]: TPM2 PCR Barrier (User) was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Oct  9 09:32:02 compute-1 NetworkManager[982]: <info>  [1760002322.5319] dhcp4 (eth0): state changed new lease, address=192.168.26.45
Oct  9 09:32:02 compute-1 NetworkManager[982]: <info>  [1760002322.5331] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS
Oct  9 09:32:02 compute-1 NetworkManager[982]: <info>  [1760002322.5368] device (lo): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Oct  9 09:32:02 compute-1 NetworkManager[982]: <info>  [1760002322.5376] device (lo): Activation: successful, device activated.
Oct  9 09:32:02 compute-1 NetworkManager[982]: <info>  [1760002322.5383] device (vlan23)[Open vSwitch Interface]: state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  9 09:32:02 compute-1 NetworkManager[982]: <info>  [1760002322.5385] device (eth1): state change: deactivating -> disconnected (reason 'new-activation', managed-type: 'full')
Oct  9 09:32:02 compute-1 NetworkManager[982]: <info>  [1760002322.5395] device (eth1): Activation: starting connection 'ci-private-network' (66d16662-8a58-5f35-9b69-4caa739b599b)
Oct  9 09:32:02 compute-1 NetworkManager[982]: <info>  [1760002322.5401] device (br-ex)[Open vSwitch Port]: state change: deactivating -> disconnected (reason 'new-activation', managed-type: 'full')
Oct  9 09:32:02 compute-1 NetworkManager[982]: <info>  [1760002322.5409] device (br-ex)[Open vSwitch Port]: Activation: starting connection 'br-ex-port' (375645b6-33fc-4c37-833a-d9e158ec94ba)
Oct  9 09:32:02 compute-1 NetworkManager[982]: <info>  [1760002322.5411] device (eth1)[Open vSwitch Port]: state change: deactivating -> disconnected (reason 'new-activation', managed-type: 'full')
Oct  9 09:32:02 compute-1 NetworkManager[982]: <info>  [1760002322.5413] device (eth1)[Open vSwitch Port]: Activation: starting connection 'eth1-port' (14a02052-08d6-45e5-a948-6208b3559c65)
Oct  9 09:32:02 compute-1 NetworkManager[982]: <info>  [1760002322.5414] device (vlan20)[Open vSwitch Port]: state change: deactivating -> disconnected (reason 'new-activation', managed-type: 'full')
Oct  9 09:32:02 compute-1 NetworkManager[982]: <info>  [1760002322.5417] device (vlan20)[Open vSwitch Port]: Activation: starting connection 'vlan20-port' (b852a895-766c-43f9-a4ca-5df9c8f35de0)
Oct  9 09:32:02 compute-1 NetworkManager[982]: <info>  [1760002322.5424] device (vlan21)[Open vSwitch Port]: state change: deactivating -> disconnected (reason 'new-activation', managed-type: 'full')
Oct  9 09:32:02 compute-1 kernel: vlan20: left promiscuous mode
Oct  9 09:32:02 compute-1 NetworkManager[982]: <info>  [1760002322.5438] device (vlan21)[Open vSwitch Port]: Activation: starting connection 'vlan21-port' (b752045b-4ebf-405c-afa8-7f17ffa854e4)
Oct  9 09:32:02 compute-1 NetworkManager[982]: <info>  [1760002322.5440] device (vlan22)[Open vSwitch Port]: state change: deactivating -> disconnected (reason 'new-activation', managed-type: 'full')
Oct  9 09:32:02 compute-1 NetworkManager[982]: <info>  [1760002322.5443] device (vlan22)[Open vSwitch Port]: Activation: starting connection 'vlan22-port' (bb0b9b67-5f18-4a32-a914-7d219a79010b)
Oct  9 09:32:02 compute-1 NetworkManager[982]: <info>  [1760002322.5447] device (vlan23)[Open vSwitch Port]: state change: deactivating -> disconnected (reason 'new-activation', managed-type: 'full')
Oct  9 09:32:02 compute-1 NetworkManager[982]: <info>  [1760002322.5454] device (vlan23)[Open vSwitch Port]: Activation: starting connection 'vlan23-port' (9f7f603c-9def-45b8-9780-a4b31ecf01c3)
Oct  9 09:32:02 compute-1 NetworkManager[982]: <info>  [1760002322.5459] device (vlan21)[Open vSwitch Interface]: state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  9 09:32:02 compute-1 NetworkManager[982]: <info>  [1760002322.5465] device (br-ex)[Open vSwitch Bridge]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct  9 09:32:02 compute-1 NetworkManager[982]: <info>  [1760002322.5472] device (eth0): state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct  9 09:32:02 compute-1 NetworkManager[982]: <info>  [1760002322.5512] policy: auto-activating connection 'vlan23-if' (7dd63e55-1e2e-4430-8cd7-274623908d35)
Oct  9 09:32:02 compute-1 NetworkManager[982]: <info>  [1760002322.5514] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct  9 09:32:02 compute-1 NetworkManager[982]: <info>  [1760002322.5518] device (br-ex)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct  9 09:32:02 compute-1 kernel: virtio_net virtio5 eth1: left promiscuous mode
Oct  9 09:32:02 compute-1 NetworkManager[982]: <info>  [1760002322.5525] device (br-ex)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Oct  9 09:32:02 compute-1 NetworkManager[982]: <info>  [1760002322.5527] device (br-ex)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct  9 09:32:02 compute-1 NetworkManager[982]: <info>  [1760002322.5529] device (br-ex)[Open vSwitch Port]: Activation: connection 'br-ex-port' attached as port, continuing activation
Oct  9 09:32:02 compute-1 NetworkManager[982]: <info>  [1760002322.5530] device (eth1)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct  9 09:32:02 compute-1 NetworkManager[982]: <info>  [1760002322.5535] device (eth1)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Oct  9 09:32:02 compute-1 NetworkManager[982]: <info>  [1760002322.5538] device (eth1)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct  9 09:32:02 compute-1 NetworkManager[982]: <info>  [1760002322.5539] device (eth1)[Open vSwitch Port]: Activation: connection 'eth1-port' attached as port, continuing activation
Oct  9 09:32:02 compute-1 NetworkManager[982]: <info>  [1760002322.5541] device (vlan20)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct  9 09:32:02 compute-1 NetworkManager[982]: <info>  [1760002322.5545] device (vlan20)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Oct  9 09:32:02 compute-1 NetworkManager[982]: <info>  [1760002322.5548] device (vlan20)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct  9 09:32:02 compute-1 NetworkManager[982]: <info>  [1760002322.5549] device (vlan20)[Open vSwitch Port]: Activation: connection 'vlan20-port' attached as port, continuing activation
Oct  9 09:32:02 compute-1 NetworkManager[982]: <info>  [1760002322.5551] device (vlan21)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct  9 09:32:02 compute-1 NetworkManager[982]: <info>  [1760002322.5555] device (vlan21)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Oct  9 09:32:02 compute-1 NetworkManager[982]: <info>  [1760002322.5557] device (vlan21)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct  9 09:32:02 compute-1 NetworkManager[982]: <info>  [1760002322.5559] device (vlan21)[Open vSwitch Port]: Activation: connection 'vlan21-port' attached as port, continuing activation
Oct  9 09:32:02 compute-1 NetworkManager[982]: <info>  [1760002322.5561] device (vlan22)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct  9 09:32:02 compute-1 NetworkManager[982]: <info>  [1760002322.5565] device (vlan22)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Oct  9 09:32:02 compute-1 NetworkManager[982]: <info>  [1760002322.5568] device (vlan22)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct  9 09:32:02 compute-1 NetworkManager[982]: <info>  [1760002322.5569] device (vlan22)[Open vSwitch Port]: Activation: connection 'vlan22-port' attached as port, continuing activation
Oct  9 09:32:02 compute-1 NetworkManager[982]: <info>  [1760002322.5570] device (vlan23)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct  9 09:32:02 compute-1 NetworkManager[982]: <info>  [1760002322.5572] device (vlan23)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Oct  9 09:32:02 compute-1 NetworkManager[982]: <info>  [1760002322.5572] device (vlan23)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct  9 09:32:02 compute-1 NetworkManager[982]: <info>  [1760002322.5573] device (vlan23)[Open vSwitch Port]: Activation: connection 'vlan23-port' attached as port, continuing activation
Oct  9 09:32:02 compute-1 NetworkManager[982]: <info>  [1760002322.5574] policy: auto-activating connection 'vlan21-if' (d6b7cede-f2f7-4e62-ad08-de1c498575f7)
Oct  9 09:32:02 compute-1 NetworkManager[982]: <info>  [1760002322.5576] device (br-ex)[Open vSwitch Bridge]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct  9 09:32:02 compute-1 kernel: ovs-system: left promiscuous mode
Oct  9 09:32:02 compute-1 NetworkManager[982]: <info>  [1760002322.5579] device (br-ex)[Open vSwitch Bridge]: Activation: successful, device activated.
Oct  9 09:32:02 compute-1 NetworkManager[982]: <info>  [1760002322.5594] device (vlan20)[Open vSwitch Interface]: state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  9 09:32:02 compute-1 NetworkManager[982]: <info>  [1760002322.5600] policy: auto-activating connection 'vlan20-if' (b23c7af3-65be-42e5-ab4c-395931000901)
Oct  9 09:32:02 compute-1 NetworkManager[982]: <info>  [1760002322.5602] policy: auto-activating connection 'vlan22-if' (fd6ff51b-7be2-4aef-b30a-8784503157ec)
Oct  9 09:32:02 compute-1 NetworkManager[982]: <info>  [1760002322.5610] device (vlan23)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct  9 09:32:02 compute-1 NetworkManager[982]: <info>  [1760002322.5614] device (vlan23)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Oct  9 09:32:02 compute-1 NetworkManager[982]: <info>  [1760002322.5620] device (vlan23)[Open vSwitch Interface]: Activation: starting connection 'vlan23-if' (7dd63e55-1e2e-4430-8cd7-274623908d35)
Oct  9 09:32:02 compute-1 NetworkManager[982]: <info>  [1760002322.5621] device (br-ex)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct  9 09:32:02 compute-1 NetworkManager[982]: <info>  [1760002322.5626] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'full')
Oct  9 09:32:02 compute-1 NetworkManager[982]: <info>  [1760002322.5630] device (eth1)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct  9 09:32:02 compute-1 NetworkManager[982]: <info>  [1760002322.5635] device (vlan20)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct  9 09:32:02 compute-1 NetworkManager[982]: <info>  [1760002322.5640] device (vlan21)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct  9 09:32:02 compute-1 NetworkManager[982]: <info>  [1760002322.5646] device (vlan22)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct  9 09:32:02 compute-1 NetworkManager[982]: <info>  [1760002322.5651] device (vlan23)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct  9 09:32:02 compute-1 NetworkManager[982]: <info>  [1760002322.5654] device (vlan21)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct  9 09:32:02 compute-1 NetworkManager[982]: <info>  [1760002322.5656] device (vlan21)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Oct  9 09:32:02 compute-1 NetworkManager[982]: <info>  [1760002322.5659] device (vlan21)[Open vSwitch Interface]: Activation: starting connection 'vlan21-if' (d6b7cede-f2f7-4e62-ad08-de1c498575f7)
Oct  9 09:32:02 compute-1 NetworkManager[982]: <info>  [1760002322.5660] device (vlan20)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct  9 09:32:02 compute-1 NetworkManager[982]: <info>  [1760002322.5667] device (vlan20)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Oct  9 09:32:02 compute-1 NetworkManager[982]: <info>  [1760002322.5675] device (vlan20)[Open vSwitch Interface]: Activation: starting connection 'vlan20-if' (b23c7af3-65be-42e5-ab4c-395931000901)
Oct  9 09:32:02 compute-1 NetworkManager[982]: <info>  [1760002322.5677] device (vlan22)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct  9 09:32:02 compute-1 NetworkManager[982]: <info>  [1760002322.5682] device (vlan22)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Oct  9 09:32:02 compute-1 NetworkManager[982]: <info>  [1760002322.5687] device (vlan22)[Open vSwitch Interface]: Activation: starting connection 'vlan22-if' (fd6ff51b-7be2-4aef-b30a-8784503157ec)
Oct  9 09:32:02 compute-1 NetworkManager[982]: <info>  [1760002322.5687] device (br-ex)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'none', managed-type: 'full')
Oct  9 09:32:02 compute-1 NetworkManager[982]: <info>  [1760002322.5689] device (eth0): state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct  9 09:32:02 compute-1 NetworkManager[982]: <info>  [1760002322.5693] device (vlan23)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct  9 09:32:02 compute-1 NetworkManager[982]: <info>  [1760002322.5698] device (vlan23)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Oct  9 09:32:02 compute-1 NetworkManager[982]: <info>  [1760002322.5700] device (vlan23)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct  9 09:32:02 compute-1 NetworkManager[982]: <info>  [1760002322.5709] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct  9 09:32:02 compute-1 NetworkManager[982]: <info>  [1760002322.5713] device (vlan21)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct  9 09:32:02 compute-1 NetworkManager[982]: <info>  [1760002322.5714] device (vlan21)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Oct  9 09:32:02 compute-1 NetworkManager[982]: <info>  [1760002322.5715] device (vlan21)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct  9 09:32:02 compute-1 NetworkManager[982]: <info>  [1760002322.5718] device (vlan20)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct  9 09:32:02 compute-1 kernel: ovs-system: entered promiscuous mode
Oct  9 09:32:02 compute-1 NetworkManager[982]: <info>  [1760002322.5720] device (vlan20)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Oct  9 09:32:02 compute-1 NetworkManager[982]: <info>  [1760002322.5721] device (vlan20)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct  9 09:32:02 compute-1 NetworkManager[982]: <info>  [1760002322.5724] device (vlan22)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct  9 09:32:02 compute-1 NetworkManager[982]: <info>  [1760002322.5725] device (vlan22)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Oct  9 09:32:02 compute-1 NetworkManager[982]: <info>  [1760002322.5726] device (vlan22)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct  9 09:32:02 compute-1 NetworkManager[982]: <info>  [1760002322.5729] policy: auto-activating connection 'br-ex-if' (01e3ccc4-9f50-4868-b5b2-19c55181f7c5)
Oct  9 09:32:02 compute-1 NetworkManager[982]: <info>  [1760002322.5730] device (eth0): state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct  9 09:32:02 compute-1 NetworkManager[982]: <info>  [1760002322.5733] device (eth0): Activation: successful, device activated.
Oct  9 09:32:02 compute-1 kernel: No such timeout policy "ovs_test_tp"
Oct  9 09:32:02 compute-1 NetworkManager[982]: <info>  [1760002322.5736] manager: NetworkManager state is now CONNECTED_GLOBAL
Oct  9 09:32:02 compute-1 NetworkManager[982]: <info>  [1760002322.5738] device (br-ex)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct  9 09:32:02 compute-1 NetworkManager[982]: <info>  [1760002322.5739] device (eth1)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct  9 09:32:02 compute-1 NetworkManager[982]: <info>  [1760002322.5740] device (vlan20)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct  9 09:32:02 compute-1 NetworkManager[982]: <info>  [1760002322.5740] device (vlan21)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct  9 09:32:02 compute-1 NetworkManager[982]: <info>  [1760002322.5741] device (vlan22)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct  9 09:32:02 compute-1 NetworkManager[982]: <info>  [1760002322.5741] device (vlan23)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct  9 09:32:02 compute-1 NetworkManager[982]: <info>  [1760002322.5743] device (vlan23)[Open vSwitch Interface]: Activation: connection 'vlan23-if' attached as port, continuing activation
Oct  9 09:32:02 compute-1 NetworkManager[982]: <info>  [1760002322.5748] device (eth1): Activation: connection 'ci-private-network' attached as port, continuing activation
Oct  9 09:32:02 compute-1 NetworkManager[982]: <info>  [1760002322.5757] device (vlan21)[Open vSwitch Interface]: Activation: connection 'vlan21-if' attached as port, continuing activation
Oct  9 09:32:02 compute-1 NetworkManager[982]: <info>  [1760002322.5771] device (vlan20)[Open vSwitch Interface]: Activation: connection 'vlan20-if' attached as port, continuing activation
Oct  9 09:32:02 compute-1 NetworkManager[982]: <info>  [1760002322.5777] device (vlan22)[Open vSwitch Interface]: Activation: connection 'vlan22-if' attached as port, continuing activation
Oct  9 09:32:02 compute-1 NetworkManager[982]: <info>  [1760002322.5787] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct  9 09:32:02 compute-1 NetworkManager[982]: <info>  [1760002322.5796] device (br-ex)[Open vSwitch Interface]: Activation: starting connection 'br-ex-if' (01e3ccc4-9f50-4868-b5b2-19c55181f7c5)
Oct  9 09:32:02 compute-1 NetworkManager[982]: <info>  [1760002322.5796] device (br-ex)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct  9 09:32:02 compute-1 NetworkManager[982]: <info>  [1760002322.5817] device (br-ex)[Open vSwitch Port]: Activation: successful, device activated.
Oct  9 09:32:02 compute-1 NetworkManager[982]: <info>  [1760002322.5822] device (eth1)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct  9 09:32:02 compute-1 NetworkManager[982]: <info>  [1760002322.5825] device (eth1)[Open vSwitch Port]: Activation: successful, device activated.
Oct  9 09:32:02 compute-1 NetworkManager[982]: <info>  [1760002322.5828] device (vlan20)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct  9 09:32:02 compute-1 NetworkManager[982]: <info>  [1760002322.5830] device (vlan20)[Open vSwitch Port]: Activation: successful, device activated.
Oct  9 09:32:02 compute-1 NetworkManager[982]: <info>  [1760002322.5833] device (vlan21)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct  9 09:32:02 compute-1 NetworkManager[982]: <info>  [1760002322.5838] device (vlan21)[Open vSwitch Port]: Activation: successful, device activated.
Oct  9 09:32:02 compute-1 NetworkManager[982]: <info>  [1760002322.5845] device (vlan22)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct  9 09:32:02 compute-1 NetworkManager[982]: <info>  [1760002322.5850] device (vlan22)[Open vSwitch Port]: Activation: successful, device activated.
Oct  9 09:32:02 compute-1 NetworkManager[982]: <info>  [1760002322.5854] device (vlan23)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct  9 09:32:02 compute-1 NetworkManager[982]: <info>  [1760002322.5859] device (vlan23)[Open vSwitch Port]: Activation: successful, device activated.
Oct  9 09:32:02 compute-1 NetworkManager[982]: <info>  [1760002322.5865] device (br-ex)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct  9 09:32:02 compute-1 NetworkManager[982]: <info>  [1760002322.5870] device (br-ex)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Oct  9 09:32:02 compute-1 NetworkManager[982]: <info>  [1760002322.5872] device (br-ex)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct  9 09:32:02 compute-1 NetworkManager[982]: <info>  [1760002322.5880] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct  9 09:32:02 compute-1 NetworkManager[982]: <info>  [1760002322.5883] device (br-ex)[Open vSwitch Interface]: Activation: connection 'br-ex-if' attached as port, continuing activation
Oct  9 09:32:02 compute-1 NetworkManager[982]: <info>  [1760002322.5884] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct  9 09:32:02 compute-1 kernel: vlan23: entered promiscuous mode
Oct  9 09:32:02 compute-1 NetworkManager[982]: <info>  [1760002322.5889] device (eth1): Activation: successful, device activated.
Oct  9 09:32:02 compute-1 NetworkManager[982]: <info>  [1760002322.5981] device (vlan23)[Open vSwitch Interface]: carrier: link connected
Oct  9 09:32:02 compute-1 NetworkManager[982]: <info>  [1760002322.5989] device (vlan23)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct  9 09:32:02 compute-1 NetworkManager[982]: <info>  [1760002322.6012] device (vlan23)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct  9 09:32:02 compute-1 NetworkManager[982]: <info>  [1760002322.6014] device (vlan23)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct  9 09:32:02 compute-1 NetworkManager[982]: <info>  [1760002322.6019] device (vlan23)[Open vSwitch Interface]: Activation: successful, device activated.
Oct  9 09:32:02 compute-1 kernel: br-ex: entered promiscuous mode
Oct  9 09:32:02 compute-1 kernel: vlan22: entered promiscuous mode
Oct  9 09:32:02 compute-1 kernel: virtio_net virtio5 eth1: entered promiscuous mode
Oct  9 09:32:02 compute-1 kernel: vlan20: entered promiscuous mode
Oct  9 09:32:02 compute-1 NetworkManager[982]: <info>  [1760002322.6195] device (vlan22)[Open vSwitch Interface]: carrier: link connected
Oct  9 09:32:02 compute-1 NetworkManager[982]: <info>  [1760002322.6204] device (vlan22)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct  9 09:32:02 compute-1 kernel: vlan21: entered promiscuous mode
Oct  9 09:32:02 compute-1 NetworkManager[982]: <info>  [1760002322.6222] device (br-ex)[Open vSwitch Interface]: carrier: link connected
Oct  9 09:32:02 compute-1 systemd-udevd[709]: Network interface NamePolicy= disabled on kernel command line.
Oct  9 09:32:02 compute-1 NetworkManager[982]: <info>  [1760002322.6233] device (br-ex)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct  9 09:32:02 compute-1 NetworkManager[982]: <info>  [1760002322.6247] device (vlan22)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct  9 09:32:02 compute-1 NetworkManager[982]: <info>  [1760002322.6249] device (vlan22)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct  9 09:32:02 compute-1 NetworkManager[982]: <info>  [1760002322.6255] device (vlan22)[Open vSwitch Interface]: Activation: successful, device activated.
Oct  9 09:32:02 compute-1 NetworkManager[982]: <info>  [1760002322.6269] device (vlan20)[Open vSwitch Interface]: carrier: link connected
Oct  9 09:32:02 compute-1 NetworkManager[982]: <info>  [1760002322.6270] device (br-ex)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct  9 09:32:02 compute-1 NetworkManager[982]: <info>  [1760002322.6273] device (br-ex)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct  9 09:32:02 compute-1 NetworkManager[982]: <info>  [1760002322.6278] device (br-ex)[Open vSwitch Interface]: Activation: successful, device activated.
Oct  9 09:32:02 compute-1 NetworkManager[982]: <info>  [1760002322.6299] device (vlan20)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct  9 09:32:02 compute-1 NetworkManager[982]: <info>  [1760002322.6370] device (vlan21)[Open vSwitch Interface]: carrier: link connected
Oct  9 09:32:02 compute-1 NetworkManager[982]: <info>  [1760002322.6370] device (vlan20)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct  9 09:32:02 compute-1 NetworkManager[982]: <info>  [1760002322.6380] device (vlan20)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct  9 09:32:02 compute-1 NetworkManager[982]: <info>  [1760002322.6407] device (vlan20)[Open vSwitch Interface]: Activation: successful, device activated.
Oct  9 09:32:02 compute-1 NetworkManager[982]: <info>  [1760002322.6420] device (vlan21)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct  9 09:32:02 compute-1 NetworkManager[982]: <info>  [1760002322.6443] device (vlan21)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct  9 09:32:02 compute-1 NetworkManager[982]: <info>  [1760002322.6444] device (vlan21)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct  9 09:32:02 compute-1 NetworkManager[982]: <info>  [1760002322.6449] device (vlan21)[Open vSwitch Interface]: Activation: successful, device activated.
Oct  9 09:32:02 compute-1 NetworkManager[982]: <info>  [1760002322.6452] manager: startup complete
Oct  9 09:32:02 compute-1 systemd[1]: Finished Network Manager Wait Online.
Oct  9 09:32:02 compute-1 systemd[1]: Starting Cloud-init: Network Stage...
Oct  9 09:32:02 compute-1 systemd[1]: Starting Authorization Manager...
Oct  9 09:32:02 compute-1 systemd[1]: Started Dynamic System Tuning Daemon.
Oct  9 09:32:02 compute-1 polkitd[1120]: Started polkitd version 0.117
Oct  9 09:32:02 compute-1 systemd[1]: Started Authorization Manager.
Oct  9 09:32:02 compute-1 cloud-init[1206]: Cloud-init v. 24.4-7.el9 running 'init' at Thu, 09 Oct 2025 09:32:02 +0000. Up 6.49 seconds.
Oct  9 09:32:02 compute-1 cloud-init[1206]: ci-info: +++++++++++++++++++++++++++++++++++Net device info+++++++++++++++++++++++++++++++++++
Oct  9 09:32:02 compute-1 cloud-init[1206]: ci-info: +------------+-------+-----------------+---------------+--------+-------------------+
Oct  9 09:32:02 compute-1 cloud-init[1206]: ci-info: |   Device   |   Up  |     Address     |      Mask     | Scope  |     Hw-Address    |
Oct  9 09:32:02 compute-1 cloud-init[1206]: ci-info: +------------+-------+-----------------+---------------+--------+-------------------+
Oct  9 09:32:02 compute-1 cloud-init[1206]: ci-info: |   br-ex    |  True | 192.168.122.101 | 255.255.255.0 | global | fa:16:3e:ab:2d:10 |
Oct  9 09:32:02 compute-1 cloud-init[1206]: ci-info: |    eth0    |  True |  192.168.26.45  | 255.255.255.0 | global | fa:16:3e:c5:2c:a9 |
Oct  9 09:32:02 compute-1 cloud-init[1206]: ci-info: |    eth1    |  True |        .        |       .       |   .    | fa:16:3e:ab:2d:10 |
Oct  9 09:32:02 compute-1 cloud-init[1206]: ci-info: |     lo     |  True |    127.0.0.1    |   255.0.0.0   |  host  |         .         |
Oct  9 09:32:02 compute-1 cloud-init[1206]: ci-info: |     lo     |  True |     ::1/128     |       .       |  host  |         .         |
Oct  9 09:32:02 compute-1 cloud-init[1206]: ci-info: | ovs-system | False |        .        |       .       |   .    | ca:95:18:eb:be:48 |
Oct  9 09:32:02 compute-1 cloud-init[1206]: ci-info: |   vlan20   |  True |   172.17.0.101  | 255.255.255.0 | global | 16:87:93:f4:28:e0 |
Oct  9 09:32:02 compute-1 cloud-init[1206]: ci-info: |   vlan21   |  True |   172.18.0.101  | 255.255.255.0 | global | 96:b6:84:03:13:9c |
Oct  9 09:32:02 compute-1 cloud-init[1206]: ci-info: |   vlan22   |  True |   172.19.0.101  | 255.255.255.0 | global | f6:8a:0c:d7:02:85 |
Oct  9 09:32:02 compute-1 cloud-init[1206]: ci-info: |   vlan23   |  True |   172.20.0.101  | 255.255.255.0 | global | f2:e3:a0:5a:63:12 |
Oct  9 09:32:02 compute-1 cloud-init[1206]: ci-info: +------------+-------+-----------------+---------------+--------+-------------------+
Oct  9 09:32:02 compute-1 cloud-init[1206]: ci-info: ++++++++++++++++++++++++++++++++Route IPv4 info+++++++++++++++++++++++++++++++++
Oct  9 09:32:02 compute-1 cloud-init[1206]: ci-info: +-------+-----------------+--------------+-----------------+-----------+-------+
Oct  9 09:32:02 compute-1 cloud-init[1206]: ci-info: | Route |   Destination   |   Gateway    |     Genmask     | Interface | Flags |
Oct  9 09:32:02 compute-1 cloud-init[1206]: ci-info: +-------+-----------------+--------------+-----------------+-----------+-------+
Oct  9 09:32:02 compute-1 cloud-init[1206]: ci-info: |   0   |     0.0.0.0     | 192.168.26.1 |     0.0.0.0     |    eth0   |   UG  |
Oct  9 09:32:02 compute-1 cloud-init[1206]: ci-info: |   1   | 169.254.169.254 | 192.168.26.2 | 255.255.255.255 |    eth0   |  UGH  |
Oct  9 09:32:02 compute-1 cloud-init[1206]: ci-info: |   2   |    172.17.0.0   |   0.0.0.0    |  255.255.255.0  |   vlan20  |   U   |
Oct  9 09:32:02 compute-1 cloud-init[1206]: ci-info: |   3   |    172.18.0.0   |   0.0.0.0    |  255.255.255.0  |   vlan21  |   U   |
Oct  9 09:32:02 compute-1 cloud-init[1206]: ci-info: |   4   |    172.19.0.0   |   0.0.0.0    |  255.255.255.0  |   vlan22  |   U   |
Oct  9 09:32:02 compute-1 cloud-init[1206]: ci-info: |   5   |    172.20.0.0   |   0.0.0.0    |  255.255.255.0  |   vlan23  |   U   |
Oct  9 09:32:02 compute-1 cloud-init[1206]: ci-info: |   6   |   192.168.26.0  |   0.0.0.0    |  255.255.255.0  |    eth0   |   U   |
Oct  9 09:32:02 compute-1 cloud-init[1206]: ci-info: |   7   |  192.168.122.0  |   0.0.0.0    |  255.255.255.0  |   br-ex   |   U   |
Oct  9 09:32:02 compute-1 cloud-init[1206]: ci-info: +-------+-----------------+--------------+-----------------+-----------+-------+
Oct  9 09:32:02 compute-1 cloud-init[1206]: ci-info: +++++++++++++++++++Route IPv6 info+++++++++++++++++++
Oct  9 09:32:02 compute-1 cloud-init[1206]: ci-info: +-------+-------------+---------+-----------+-------+
Oct  9 09:32:02 compute-1 cloud-init[1206]: ci-info: | Route | Destination | Gateway | Interface | Flags |
Oct  9 09:32:02 compute-1 cloud-init[1206]: ci-info: +-------+-------------+---------+-----------+-------+
Oct  9 09:32:02 compute-1 cloud-init[1206]: ci-info: |   2   |  multicast  |    ::   |    eth1   |   U   |
Oct  9 09:32:02 compute-1 cloud-init[1206]: ci-info: +-------+-------------+---------+-----------+-------+
Oct  9 09:32:03 compute-1 systemd[1]: Finished Cloud-init: Network Stage.
Oct  9 09:32:03 compute-1 systemd[1]: Reached target Cloud-config availability.
Oct  9 09:32:03 compute-1 systemd[1]: Reached target Network is Online.
Oct  9 09:32:03 compute-1 systemd[1]: Starting Cloud-init: Config Stage...
Oct  9 09:32:03 compute-1 systemd[1]: Starting EDPM Container Shutdown...
Oct  9 09:32:03 compute-1 systemd[1]: Starting Notify NFS peers of a restart...
Oct  9 09:32:03 compute-1 systemd[1]: Starting System Logging Service...
Oct  9 09:32:03 compute-1 sm-notify[1240]: Version 2.5.4 starting
Oct  9 09:32:03 compute-1 systemd[1]: Starting OpenSSH server daemon...
Oct  9 09:32:03 compute-1 systemd[1]: Starting Permit User Sessions...
Oct  9 09:32:03 compute-1 systemd[1]: Finished EDPM Container Shutdown.
Oct  9 09:32:03 compute-1 systemd[1]: Started Notify NFS peers of a restart.
Oct  9 09:32:03 compute-1 systemd[1]: Finished Permit User Sessions.
Oct  9 09:32:03 compute-1 systemd[1]: Started Command Scheduler.
Oct  9 09:32:03 compute-1 systemd[1]: Started Getty on tty1.
Oct  9 09:32:03 compute-1 rsyslogd[1241]: [origin software="rsyslogd" swVersion="8.2506.0-2.el9" x-pid="1241" x-info="https://www.rsyslog.com"] start
Oct  9 09:32:03 compute-1 systemd[1]: Started Serial Getty on ttyS0.
Oct  9 09:32:03 compute-1 systemd[1]: Reached target Login Prompts.
Oct  9 09:32:03 compute-1 systemd[1]: Started OpenSSH server daemon.
Oct  9 09:32:03 compute-1 systemd[1]: Started System Logging Service.
Oct  9 09:32:03 compute-1 systemd[1]: Reached target Multi-User System.
Oct  9 09:32:03 compute-1 systemd[1]: Starting Record Runlevel Change in UTMP...
Oct  9 09:32:03 compute-1 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully.
Oct  9 09:32:03 compute-1 systemd[1]: Finished Record Runlevel Change in UTMP.
Oct  9 09:32:03 compute-1 rsyslogd[1241]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct  9 09:32:03 compute-1 cloud-init[1254]: Cloud-init v. 24.4-7.el9 running 'modules:config' at Thu, 09 Oct 2025 09:32:03 +0000. Up 6.95 seconds.
Oct  9 09:32:03 compute-1 systemd[1]: Finished Cloud-init: Config Stage.
Oct  9 09:32:03 compute-1 systemd[1]: Starting Cloud-init: Final Stage...
Oct  9 09:32:03 compute-1 cloud-init[1258]: Cloud-init v. 24.4-7.el9 running 'modules:final' at Thu, 09 Oct 2025 09:32:03 +0000. Up 7.26 seconds.
Oct  9 09:32:03 compute-1 cloud-init[1258]: Cloud-init v. 24.4-7.el9 finished at Thu, 09 Oct 2025 09:32:03 +0000. Datasource DataSourceConfigDrive [net,ver=2][source=/dev/sr0].  Up 7.30 seconds
Oct  9 09:32:03 compute-1 systemd[1]: Finished Cloud-init: Final Stage.
Oct  9 09:32:03 compute-1 systemd[1]: Reached target Cloud-init target.
Oct  9 09:32:03 compute-1 systemd[1]: Startup finished in 1.378s (kernel) + 1.896s (initrd) + 4.074s (userspace) = 7.350s.
Oct  9 09:32:11 compute-1 irqbalance[794]: Cannot change IRQ 45 affinity: Operation not permitted
Oct  9 09:32:11 compute-1 irqbalance[794]: IRQ 45 affinity is now unmanaged
Oct  9 09:32:11 compute-1 irqbalance[794]: Cannot change IRQ 43 affinity: Operation not permitted
Oct  9 09:32:11 compute-1 irqbalance[794]: IRQ 43 affinity is now unmanaged
Oct  9 09:32:11 compute-1 irqbalance[794]: Cannot change IRQ 42 affinity: Operation not permitted
Oct  9 09:32:11 compute-1 irqbalance[794]: IRQ 42 affinity is now unmanaged
Oct  9 09:32:12 compute-1 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Oct  9 09:32:32 compute-1 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Oct  9 09:32:51 compute-1 systemd[1]: Created slice User Slice of UID 1000.
Oct  9 09:32:51 compute-1 systemd[1]: Starting User Runtime Directory /run/user/1000...
Oct  9 09:32:51 compute-1 systemd-logind[798]: New session 1 of user zuul.
Oct  9 09:32:51 compute-1 systemd[1]: Finished User Runtime Directory /run/user/1000.
Oct  9 09:32:51 compute-1 systemd[1]: Starting User Manager for UID 1000...
Oct  9 09:32:52 compute-1 systemd[1268]: Queued start job for default target Main User Target.
Oct  9 09:32:52 compute-1 rsyslogd[1241]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct  9 09:32:52 compute-1 systemd[1268]: Created slice User Application Slice.
Oct  9 09:32:52 compute-1 systemd[1268]: Started Mark boot as successful after the user session has run 2 minutes.
Oct  9 09:32:52 compute-1 systemd[1268]: Started Daily Cleanup of User's Temporary Directories.
Oct  9 09:32:52 compute-1 systemd[1268]: Reached target Paths.
Oct  9 09:32:52 compute-1 systemd[1268]: Reached target Timers.
Oct  9 09:32:52 compute-1 systemd[1268]: Starting D-Bus User Message Bus Socket...
Oct  9 09:32:52 compute-1 systemd[1268]: Starting Create User's Volatile Files and Directories...
Oct  9 09:32:52 compute-1 systemd[1268]: Listening on D-Bus User Message Bus Socket.
Oct  9 09:32:52 compute-1 systemd[1268]: Finished Create User's Volatile Files and Directories.
Oct  9 09:32:52 compute-1 systemd[1268]: Reached target Sockets.
Oct  9 09:32:52 compute-1 systemd[1268]: Reached target Basic System.
Oct  9 09:32:52 compute-1 systemd[1268]: Reached target Main User Target.
Oct  9 09:32:52 compute-1 systemd[1268]: Startup finished in 85ms.
Oct  9 09:32:52 compute-1 systemd[1]: Started User Manager for UID 1000.
Oct  9 09:32:52 compute-1 systemd[1]: Started Session 1 of User zuul.
Oct  9 09:32:52 compute-1 python3.9[1493]: ansible-ansible.builtin.file Invoked with path=/var/lib/openstack/reboot_required state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:32:53 compute-1 systemd-logind[798]: Session 1 logged out. Waiting for processes to exit.
Oct  9 09:32:53 compute-1 systemd[1]: session-1.scope: Deactivated successfully.
Oct  9 09:32:53 compute-1 systemd-logind[798]: Removed session 1.
Oct  9 09:32:59 compute-1 systemd-logind[798]: New session 3 of user zuul.
Oct  9 09:32:59 compute-1 systemd[1]: Started Session 3 of User zuul.
Oct  9 09:33:03 compute-1 python3[2259]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  9 09:33:05 compute-1 python3[2350]: ansible-ansible.legacy.dnf Invoked with name=['util-linux', 'lvm2', 'jq', 'podman'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Oct  9 09:33:06 compute-1 python3[2377]: ansible-ansible.builtin.stat Invoked with path=/dev/loop3 follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  9 09:33:06 compute-1 python3[2403]: ansible-ansible.legacy.command Invoked with _raw_params=dd if=/dev/zero of=/var/lib/ceph-osd-0.img bs=1 count=0 seek=20G#012losetup /dev/loop3 /var/lib/ceph-osd-0.img#012lsblk _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  9 09:33:06 compute-1 kernel: loop: module loaded
Oct  9 09:33:06 compute-1 kernel: loop3: detected capacity change from 0 to 41943040
Oct  9 09:33:07 compute-1 python3[2438]: ansible-ansible.legacy.command Invoked with _raw_params=pvcreate /dev/loop3#012vgcreate ceph_vg0 /dev/loop3#012lvcreate -n ceph_lv0 -l +100%FREE ceph_vg0#012lvs _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  9 09:33:07 compute-1 lvm[2441]: PV /dev/loop3 not used.
Oct  9 09:33:07 compute-1 lvm[2443]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Oct  9 09:33:07 compute-1 systemd[1]: Started /usr/sbin/lvm vgchange -aay --autoactivation event ceph_vg0.
Oct  9 09:33:07 compute-1 lvm[2452]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Oct  9 09:33:07 compute-1 lvm[2452]: VG ceph_vg0 finished
Oct  9 09:33:07 compute-1 lvm[2449]:  1 logical volume(s) in volume group "ceph_vg0" now active
Oct  9 09:33:07 compute-1 systemd[1]: lvm-activate-ceph_vg0.service: Deactivated successfully.
Oct  9 09:33:07 compute-1 python3[2530]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/ceph-osd-losetup-0.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct  9 09:33:07 compute-1 python3[2603]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1760002386.786316-33834-252251040756258/source dest=/etc/systemd/system/ceph-osd-losetup-0.service mode=0644 force=True follow=False _original_basename=ceph-osd-losetup.service.j2 checksum=427b1db064a970126b729b07acf99fa7d0eecb9c backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:33:08 compute-1 python3[2653]: ansible-ansible.builtin.systemd Invoked with state=started enabled=True name=ceph-osd-losetup-0.service daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  9 09:33:08 compute-1 systemd[1]: Reloading.
Oct  9 09:33:08 compute-1 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  9 09:33:08 compute-1 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  9 09:33:08 compute-1 systemd[1]: Starting Ceph OSD losetup...
Oct  9 09:33:08 compute-1 bash[2693]: /dev/loop3: [64513]:4194935 (/var/lib/ceph-osd-0.img)
Oct  9 09:33:08 compute-1 systemd[1]: Finished Ceph OSD losetup.
Oct  9 09:33:08 compute-1 lvm[2694]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Oct  9 09:33:08 compute-1 lvm[2694]: VG ceph_vg0 finished
Oct  9 09:33:10 compute-1 python3[2718]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'network'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  9 09:34:15 compute-1 systemd[1]: Created slice User Slice of UID 42477.
Oct  9 09:34:15 compute-1 systemd[1]: Starting User Runtime Directory /run/user/42477...
Oct  9 09:34:15 compute-1 systemd-logind[798]: New session 4 of user ceph-admin.
Oct  9 09:34:15 compute-1 systemd[1]: Finished User Runtime Directory /run/user/42477.
Oct  9 09:34:15 compute-1 systemd[1]: Starting User Manager for UID 42477...
Oct  9 09:34:15 compute-1 systemd[2766]: Queued start job for default target Main User Target.
Oct  9 09:34:15 compute-1 rsyslogd[1241]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct  9 09:34:15 compute-1 systemd[2766]: Created slice User Application Slice.
Oct  9 09:34:15 compute-1 systemd[2766]: Started Mark boot as successful after the user session has run 2 minutes.
Oct  9 09:34:15 compute-1 systemd[2766]: Started Daily Cleanup of User's Temporary Directories.
Oct  9 09:34:15 compute-1 systemd[2766]: Reached target Paths.
Oct  9 09:34:15 compute-1 systemd[2766]: Reached target Timers.
Oct  9 09:34:15 compute-1 systemd[2766]: Starting D-Bus User Message Bus Socket...
Oct  9 09:34:15 compute-1 systemd[2766]: Starting Create User's Volatile Files and Directories...
Oct  9 09:34:15 compute-1 systemd[2766]: Listening on D-Bus User Message Bus Socket.
Oct  9 09:34:15 compute-1 systemd[2766]: Reached target Sockets.
Oct  9 09:34:15 compute-1 systemd[2766]: Finished Create User's Volatile Files and Directories.
Oct  9 09:34:15 compute-1 systemd[2766]: Reached target Basic System.
Oct  9 09:34:15 compute-1 systemd[1]: Started User Manager for UID 42477.
Oct  9 09:34:15 compute-1 systemd[2766]: Reached target Main User Target.
Oct  9 09:34:15 compute-1 systemd[2766]: Startup finished in 75ms.
Oct  9 09:34:15 compute-1 systemd[1]: Started Session 4 of User ceph-admin.
Oct  9 09:34:15 compute-1 systemd-logind[798]: New session 6 of user ceph-admin.
Oct  9 09:34:15 compute-1 systemd[1]: Started Session 6 of User ceph-admin.
Oct  9 09:34:15 compute-1 systemd-logind[798]: New session 7 of user ceph-admin.
Oct  9 09:34:15 compute-1 systemd[1]: Started Session 7 of User ceph-admin.
Oct  9 09:34:16 compute-1 systemd-logind[798]: New session 8 of user ceph-admin.
Oct  9 09:34:16 compute-1 systemd[1]: Started Session 8 of User ceph-admin.
Oct  9 09:34:16 compute-1 systemd-logind[798]: New session 9 of user ceph-admin.
Oct  9 09:34:16 compute-1 systemd[1]: Started Session 9 of User ceph-admin.
Oct  9 09:34:16 compute-1 systemd-logind[798]: New session 10 of user ceph-admin.
Oct  9 09:34:16 compute-1 systemd[1]: Started Session 10 of User ceph-admin.
Oct  9 09:34:16 compute-1 systemd-logind[798]: New session 11 of user ceph-admin.
Oct  9 09:34:16 compute-1 systemd[1]: Started Session 11 of User ceph-admin.
Oct  9 09:34:17 compute-1 systemd-logind[798]: New session 12 of user ceph-admin.
Oct  9 09:34:17 compute-1 systemd[1]: Started Session 12 of User ceph-admin.
Oct  9 09:34:17 compute-1 systemd-logind[798]: New session 13 of user ceph-admin.
Oct  9 09:34:17 compute-1 systemd[1]: Started Session 13 of User ceph-admin.
Oct  9 09:34:17 compute-1 systemd-logind[798]: New session 14 of user ceph-admin.
Oct  9 09:34:17 compute-1 systemd[1]: Started Session 14 of User ceph-admin.
Oct  9 09:34:18 compute-1 systemd-logind[798]: New session 15 of user ceph-admin.
Oct  9 09:34:18 compute-1 systemd[1]: Started Session 15 of User ceph-admin.
Oct  9 09:34:18 compute-1 systemd-logind[798]: New session 16 of user ceph-admin.
Oct  9 09:34:18 compute-1 systemd[1]: Started Session 16 of User ceph-admin.
Oct  9 09:34:19 compute-1 systemd[1]: var-lib-containers-storage-overlay-compat3730103292-merged.mount: Deactivated successfully.
Oct  9 09:34:19 compute-1 kernel: evm: overlay not supported
Oct  9 09:34:19 compute-1 systemd[1]: var-lib-containers-storage-overlay-metacopy\x2dcheck2499023546-merged.mount: Deactivated successfully.
Oct  9 09:34:19 compute-1 podman[3099]: 2025-10-09 09:34:19.086844589 +0000 UTC m=+0.063795404 system refresh
Oct  9 09:34:20 compute-1 systemd[1]: proc-sys-fs-binfmt_misc.automount: Got automount request for /proc/sys/fs/binfmt_misc, triggered by 3327 (sysctl)
Oct  9 09:34:20 compute-1 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Oct  9 09:34:20 compute-1 systemd[1]: Mounting Arbitrary Executable File Formats File System...
Oct  9 09:34:20 compute-1 systemd[1]: Mounted Arbitrary Executable File Formats File System.
Oct  9 09:34:20 compute-1 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Oct  9 09:34:21 compute-1 chronyd[805]: Selected source 69.176.84.79 (pool.ntp.org)
Oct  9 09:34:22 compute-1 systemd[1]: var-lib-containers-storage-overlay-compat1831674394-merged.mount: Deactivated successfully.
Oct  9 09:34:22 compute-1 systemd[1]: var-lib-containers-storage-overlay-compat1831674394-lower\x2dmapped.mount: Deactivated successfully.
Oct  9 09:34:36 compute-1 podman[3494]: 2025-10-09 09:34:36.34041738 +0000 UTC m=+15.457803419 container create f560b0ef18eaa34626eb122cc72903ed8841d5299882c44bb506b69a73f03e50 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigilant_dhawan, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  9 09:34:36 compute-1 systemd[1]: Created slice Virtual Machine and Container Slice.
Oct  9 09:34:36 compute-1 systemd[1]: Started libpod-conmon-f560b0ef18eaa34626eb122cc72903ed8841d5299882c44bb506b69a73f03e50.scope.
Oct  9 09:34:36 compute-1 systemd[1]: Started libcrun container.
Oct  9 09:34:36 compute-1 podman[3494]: 2025-10-09 09:34:36.400912381 +0000 UTC m=+15.518298420 container init f560b0ef18eaa34626eb122cc72903ed8841d5299882c44bb506b69a73f03e50 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigilant_dhawan, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=squid)
Oct  9 09:34:36 compute-1 podman[3494]: 2025-10-09 09:34:36.405921004 +0000 UTC m=+15.523307032 container start f560b0ef18eaa34626eb122cc72903ed8841d5299882c44bb506b69a73f03e50 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigilant_dhawan, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  9 09:34:36 compute-1 podman[3494]: 2025-10-09 09:34:36.407478059 +0000 UTC m=+15.524864088 container attach f560b0ef18eaa34626eb122cc72903ed8841d5299882c44bb506b69a73f03e50 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigilant_dhawan, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.vendor=CentOS, ceph=True)
Oct  9 09:34:36 compute-1 vigilant_dhawan[3543]: 167 167
Oct  9 09:34:36 compute-1 systemd[1]: libpod-f560b0ef18eaa34626eb122cc72903ed8841d5299882c44bb506b69a73f03e50.scope: Deactivated successfully.
Oct  9 09:34:36 compute-1 conmon[3543]: conmon f560b0ef18eaa34626eb <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-f560b0ef18eaa34626eb122cc72903ed8841d5299882c44bb506b69a73f03e50.scope/container/memory.events
Oct  9 09:34:36 compute-1 podman[3494]: 2025-10-09 09:34:36.411413538 +0000 UTC m=+15.528799567 container died f560b0ef18eaa34626eb122cc72903ed8841d5299882c44bb506b69a73f03e50 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigilant_dhawan, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  9 09:34:36 compute-1 podman[3494]: 2025-10-09 09:34:36.329357149 +0000 UTC m=+15.446743178 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  9 09:34:36 compute-1 systemd[1]: var-lib-containers-storage-overlay-64f59aba5bdcf65f69169a604275c90d8c09628f437d754203486f296b26fb31-merged.mount: Deactivated successfully.
Oct  9 09:34:36 compute-1 podman[3494]: 2025-10-09 09:34:36.427175573 +0000 UTC m=+15.544561602 container remove f560b0ef18eaa34626eb122cc72903ed8841d5299882c44bb506b69a73f03e50 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigilant_dhawan, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, io.buildah.version=1.40.1)
Oct  9 09:34:36 compute-1 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Oct  9 09:34:36 compute-1 systemd[1]: libpod-conmon-f560b0ef18eaa34626eb122cc72903ed8841d5299882c44bb506b69a73f03e50.scope: Deactivated successfully.
Oct  9 09:34:36 compute-1 podman[3565]: 2025-10-09 09:34:36.535332029 +0000 UTC m=+0.025767050 container create 37c0c850b4355849c3786e47bf7914260048d541e21b38d91d025e235e9ab85d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=suspicious_goodall, CEPH_REF=squid, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1)
Oct  9 09:34:36 compute-1 systemd[1]: Started libpod-conmon-37c0c850b4355849c3786e47bf7914260048d541e21b38d91d025e235e9ab85d.scope.
Oct  9 09:34:36 compute-1 systemd[1]: Started libcrun container.
Oct  9 09:34:36 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/07b7db690092db157f48b25b619a56e1d25d854fd54a899790e353154bee6a36/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  9 09:34:36 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/07b7db690092db157f48b25b619a56e1d25d854fd54a899790e353154bee6a36/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  9 09:34:36 compute-1 podman[3565]: 2025-10-09 09:34:36.578165601 +0000 UTC m=+0.068600622 container init 37c0c850b4355849c3786e47bf7914260048d541e21b38d91d025e235e9ab85d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=suspicious_goodall, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=squid, ceph=True)
Oct  9 09:34:36 compute-1 podman[3565]: 2025-10-09 09:34:36.582851505 +0000 UTC m=+0.073286527 container start 37c0c850b4355849c3786e47bf7914260048d541e21b38d91d025e235e9ab85d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=suspicious_goodall, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid)
Oct  9 09:34:36 compute-1 podman[3565]: 2025-10-09 09:34:36.583857121 +0000 UTC m=+0.074292142 container attach 37c0c850b4355849c3786e47bf7914260048d541e21b38d91d025e235e9ab85d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=suspicious_goodall, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Oct  9 09:34:36 compute-1 podman[3565]: 2025-10-09 09:34:36.524774775 +0000 UTC m=+0.015209816 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  9 09:34:37 compute-1 suspicious_goodall[3578]: [
Oct  9 09:34:37 compute-1 suspicious_goodall[3578]:    {
Oct  9 09:34:37 compute-1 suspicious_goodall[3578]:        "available": false,
Oct  9 09:34:37 compute-1 suspicious_goodall[3578]:        "being_replaced": false,
Oct  9 09:34:37 compute-1 suspicious_goodall[3578]:        "ceph_device_lvm": false,
Oct  9 09:34:37 compute-1 suspicious_goodall[3578]:        "device_id": "QEMU_DVD-ROM_QM00001",
Oct  9 09:34:37 compute-1 suspicious_goodall[3578]:        "lsm_data": {},
Oct  9 09:34:37 compute-1 suspicious_goodall[3578]:        "lvs": [],
Oct  9 09:34:37 compute-1 suspicious_goodall[3578]:        "path": "/dev/sr0",
Oct  9 09:34:37 compute-1 suspicious_goodall[3578]:        "rejected_reasons": [
Oct  9 09:34:37 compute-1 suspicious_goodall[3578]:            "Insufficient space (<5GB)",
Oct  9 09:34:37 compute-1 suspicious_goodall[3578]:            "Has a FileSystem"
Oct  9 09:34:37 compute-1 suspicious_goodall[3578]:        ],
Oct  9 09:34:37 compute-1 suspicious_goodall[3578]:        "sys_api": {
Oct  9 09:34:37 compute-1 suspicious_goodall[3578]:            "actuators": null,
Oct  9 09:34:37 compute-1 suspicious_goodall[3578]:            "device_nodes": [
Oct  9 09:34:37 compute-1 suspicious_goodall[3578]:                "sr0"
Oct  9 09:34:37 compute-1 suspicious_goodall[3578]:            ],
Oct  9 09:34:37 compute-1 suspicious_goodall[3578]:            "devname": "sr0",
Oct  9 09:34:37 compute-1 suspicious_goodall[3578]:            "human_readable_size": "474.00 KB",
Oct  9 09:34:37 compute-1 suspicious_goodall[3578]:            "id_bus": "ata",
Oct  9 09:34:37 compute-1 suspicious_goodall[3578]:            "model": "QEMU DVD-ROM",
Oct  9 09:34:37 compute-1 suspicious_goodall[3578]:            "nr_requests": "64",
Oct  9 09:34:37 compute-1 suspicious_goodall[3578]:            "parent": "/dev/sr0",
Oct  9 09:34:37 compute-1 suspicious_goodall[3578]:            "partitions": {},
Oct  9 09:34:37 compute-1 suspicious_goodall[3578]:            "path": "/dev/sr0",
Oct  9 09:34:37 compute-1 suspicious_goodall[3578]:            "removable": "1",
Oct  9 09:34:37 compute-1 suspicious_goodall[3578]:            "rev": "2.5+",
Oct  9 09:34:37 compute-1 suspicious_goodall[3578]:            "ro": "0",
Oct  9 09:34:37 compute-1 suspicious_goodall[3578]:            "rotational": "0",
Oct  9 09:34:37 compute-1 suspicious_goodall[3578]:            "sas_address": "",
Oct  9 09:34:37 compute-1 suspicious_goodall[3578]:            "sas_device_handle": "",
Oct  9 09:34:37 compute-1 suspicious_goodall[3578]:            "scheduler_mode": "mq-deadline",
Oct  9 09:34:37 compute-1 suspicious_goodall[3578]:            "sectors": 0,
Oct  9 09:34:37 compute-1 suspicious_goodall[3578]:            "sectorsize": "2048",
Oct  9 09:34:37 compute-1 suspicious_goodall[3578]:            "size": 485376.0,
Oct  9 09:34:37 compute-1 suspicious_goodall[3578]:            "support_discard": "2048",
Oct  9 09:34:37 compute-1 suspicious_goodall[3578]:            "type": "disk",
Oct  9 09:34:37 compute-1 suspicious_goodall[3578]:            "vendor": "QEMU"
Oct  9 09:34:37 compute-1 suspicious_goodall[3578]:        }
Oct  9 09:34:37 compute-1 suspicious_goodall[3578]:    }
Oct  9 09:34:37 compute-1 suspicious_goodall[3578]: ]
Oct  9 09:34:37 compute-1 systemd[1]: libpod-37c0c850b4355849c3786e47bf7914260048d541e21b38d91d025e235e9ab85d.scope: Deactivated successfully.
Oct  9 09:34:37 compute-1 podman[4438]: 2025-10-09 09:34:37.118299463 +0000 UTC m=+0.018126044 container died 37c0c850b4355849c3786e47bf7914260048d541e21b38d91d025e235e9ab85d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=suspicious_goodall, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS)
Oct  9 09:34:37 compute-1 podman[4438]: 2025-10-09 09:34:37.135118732 +0000 UTC m=+0.034945302 container remove 37c0c850b4355849c3786e47bf7914260048d541e21b38d91d025e235e9ab85d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=suspicious_goodall, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_REF=squid)
Oct  9 09:34:37 compute-1 systemd[1]: libpod-conmon-37c0c850b4355849c3786e47bf7914260048d541e21b38d91d025e235e9ab85d.scope: Deactivated successfully.
Oct  9 09:34:37 compute-1 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Oct  9 09:34:38 compute-1 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Oct  9 09:34:38 compute-1 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Oct  9 09:34:38 compute-1 podman[5425]: 2025-10-09 09:34:38.938804981 +0000 UTC m=+0.026047859 container create e8cf1ff3116329a4f3790d54efadfbb8930e695620ddc858c414e684300a0c65 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=optimistic_lederberg, CEPH_REF=squid, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  9 09:34:38 compute-1 systemd[1]: Started libpod-conmon-e8cf1ff3116329a4f3790d54efadfbb8930e695620ddc858c414e684300a0c65.scope.
Oct  9 09:34:38 compute-1 systemd[1]: Started libcrun container.
Oct  9 09:34:38 compute-1 podman[5425]: 2025-10-09 09:34:38.980159545 +0000 UTC m=+0.067402423 container init e8cf1ff3116329a4f3790d54efadfbb8930e695620ddc858c414e684300a0c65 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=optimistic_lederberg, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_REF=squid, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  9 09:34:38 compute-1 podman[5425]: 2025-10-09 09:34:38.984664499 +0000 UTC m=+0.071907377 container start e8cf1ff3116329a4f3790d54efadfbb8930e695620ddc858c414e684300a0c65 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=optimistic_lederberg, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_REF=squid, ceph=True, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1)
Oct  9 09:34:38 compute-1 podman[5425]: 2025-10-09 09:34:38.9856778 +0000 UTC m=+0.072920677 container attach e8cf1ff3116329a4f3790d54efadfbb8930e695620ddc858c414e684300a0c65 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=optimistic_lederberg, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Oct  9 09:34:38 compute-1 optimistic_lederberg[5440]: 167 167
Oct  9 09:34:38 compute-1 systemd[1]: libpod-e8cf1ff3116329a4f3790d54efadfbb8930e695620ddc858c414e684300a0c65.scope: Deactivated successfully.
Oct  9 09:34:38 compute-1 conmon[5440]: conmon e8cf1ff3116329a4f379 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-e8cf1ff3116329a4f3790d54efadfbb8930e695620ddc858c414e684300a0c65.scope/container/memory.events
Oct  9 09:34:38 compute-1 podman[5425]: 2025-10-09 09:34:38.988637168 +0000 UTC m=+0.075880047 container died e8cf1ff3116329a4f3790d54efadfbb8930e695620ddc858c414e684300a0c65 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=optimistic_lederberg, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid)
Oct  9 09:34:39 compute-1 podman[5425]: 2025-10-09 09:34:39.005127388 +0000 UTC m=+0.092370265 container remove e8cf1ff3116329a4f3790d54efadfbb8930e695620ddc858c414e684300a0c65 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=optimistic_lederberg, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Oct  9 09:34:39 compute-1 podman[5425]: 2025-10-09 09:34:38.928554656 +0000 UTC m=+0.015797554 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  9 09:34:39 compute-1 systemd[1]: libpod-conmon-e8cf1ff3116329a4f3790d54efadfbb8930e695620ddc858c414e684300a0c65.scope: Deactivated successfully.
Oct  9 09:34:39 compute-1 systemd[1]: Reloading.
Oct  9 09:34:39 compute-1 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  9 09:34:39 compute-1 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  9 09:34:39 compute-1 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Oct  9 09:34:39 compute-1 systemd[1]: Reloading.
Oct  9 09:34:39 compute-1 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  9 09:34:39 compute-1 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  9 09:34:39 compute-1 systemd[1]: Reached target All Ceph clusters and services.
Oct  9 09:34:39 compute-1 systemd[1]: Reloading.
Oct  9 09:34:39 compute-1 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  9 09:34:39 compute-1 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  9 09:34:39 compute-1 systemd[1]: Reached target Ceph cluster 286f8bf0-da72-5823-9a4e-ac4457d9e609.
Oct  9 09:34:39 compute-1 systemd[1]: Reloading.
Oct  9 09:34:39 compute-1 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  9 09:34:39 compute-1 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  9 09:34:39 compute-1 systemd[1]: Reloading.
Oct  9 09:34:39 compute-1 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  9 09:34:39 compute-1 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  9 09:34:40 compute-1 systemd[1]: Created slice Slice /system/ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609.
Oct  9 09:34:40 compute-1 systemd[1]: Reached target System Time Set.
Oct  9 09:34:40 compute-1 systemd[1]: Reached target System Time Synchronized.
Oct  9 09:34:40 compute-1 systemd[1]: Starting Ceph crash.compute-1 for 286f8bf0-da72-5823-9a4e-ac4457d9e609...
Oct  9 09:34:40 compute-1 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Oct  9 09:34:40 compute-1 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Oct  9 09:34:40 compute-1 podman[5685]: 2025-10-09 09:34:40.185214568 +0000 UTC m=+0.027935948 container create cafaadfcff4fbda41fcfa94e93876b61b17048007232ab1b066948d6a6dac74a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-crash-compute-1, io.buildah.version=1.40.1, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  9 09:34:40 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2a34cb825b3e725a1b98572d527776d2dea4d19288afb174cbe8c9fc36d3db02/merged/etc/ceph/ceph.client.crash.compute-1.keyring supports timestamps until 2038 (0x7fffffff)
Oct  9 09:34:40 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2a34cb825b3e725a1b98572d527776d2dea4d19288afb174cbe8c9fc36d3db02/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  9 09:34:40 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2a34cb825b3e725a1b98572d527776d2dea4d19288afb174cbe8c9fc36d3db02/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  9 09:34:40 compute-1 podman[5685]: 2025-10-09 09:34:40.225106926 +0000 UTC m=+0.067828296 container init cafaadfcff4fbda41fcfa94e93876b61b17048007232ab1b066948d6a6dac74a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-crash-compute-1, io.buildah.version=1.40.1, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  9 09:34:40 compute-1 podman[5685]: 2025-10-09 09:34:40.228589261 +0000 UTC m=+0.071310631 container start cafaadfcff4fbda41fcfa94e93876b61b17048007232ab1b066948d6a6dac74a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-crash-compute-1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True)
Oct  9 09:34:40 compute-1 bash[5685]: cafaadfcff4fbda41fcfa94e93876b61b17048007232ab1b066948d6a6dac74a
Oct  9 09:34:40 compute-1 podman[5685]: 2025-10-09 09:34:40.173430472 +0000 UTC m=+0.016151863 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  9 09:34:40 compute-1 systemd[1]: Started Ceph crash.compute-1 for 286f8bf0-da72-5823-9a4e-ac4457d9e609.
Oct  9 09:34:40 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-crash-compute-1[5697]: INFO:ceph-crash:pinging cluster to exercise our key
Oct  9 09:34:40 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-crash-compute-1[5697]: 2025-10-09T09:34:40.341+0000 7fd3e181f640 -1 auth: unable to find a keyring on /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin: (2) No such file or directory
Oct  9 09:34:40 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-crash-compute-1[5697]: 2025-10-09T09:34:40.341+0000 7fd3e181f640 -1 AuthRegistry(0x7fd3dc0698f0) no keyring found at /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin, disabling cephx
Oct  9 09:34:40 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-crash-compute-1[5697]: 2025-10-09T09:34:40.342+0000 7fd3e181f640 -1 auth: unable to find a keyring on /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin: (2) No such file or directory
Oct  9 09:34:40 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-crash-compute-1[5697]: 2025-10-09T09:34:40.342+0000 7fd3e181f640 -1 AuthRegistry(0x7fd3e181dff0) no keyring found at /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin, disabling cephx
Oct  9 09:34:40 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-crash-compute-1[5697]: 2025-10-09T09:34:40.343+0000 7fd3daffd640 -1 monclient(hunting): handle_auth_bad_method server allowed_methods [2] but i only support [1]
Oct  9 09:34:40 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-crash-compute-1[5697]: 2025-10-09T09:34:40.343+0000 7fd3e181f640 -1 monclient: authenticate NOTE: no keyring found; disabled cephx authentication
Oct  9 09:34:40 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-crash-compute-1[5697]: [errno 13] RADOS permission denied (error connecting to the cluster)
Oct  9 09:34:40 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-crash-compute-1[5697]: INFO:ceph-crash:monitoring path /var/lib/ceph/crash, delay 600s
Oct  9 09:34:40 compute-1 podman[5795]: 2025-10-09 09:34:40.625006708 +0000 UTC m=+0.024383630 container create a54500fb36a152c683b7244e1332c5643b56b9b3099664e7bb7f510322679a35 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=busy_fermi, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  9 09:34:40 compute-1 systemd[1]: Started libpod-conmon-a54500fb36a152c683b7244e1332c5643b56b9b3099664e7bb7f510322679a35.scope.
Oct  9 09:34:40 compute-1 systemd[1]: Started libcrun container.
Oct  9 09:34:40 compute-1 podman[5795]: 2025-10-09 09:34:40.675454876 +0000 UTC m=+0.074831828 container init a54500fb36a152c683b7244e1332c5643b56b9b3099664e7bb7f510322679a35 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=busy_fermi, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2)
Oct  9 09:34:40 compute-1 podman[5795]: 2025-10-09 09:34:40.679735898 +0000 UTC m=+0.079112830 container start a54500fb36a152c683b7244e1332c5643b56b9b3099664e7bb7f510322679a35 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=busy_fermi, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid)
Oct  9 09:34:40 compute-1 podman[5795]: 2025-10-09 09:34:40.6807656 +0000 UTC m=+0.080142542 container attach a54500fb36a152c683b7244e1332c5643b56b9b3099664e7bb7f510322679a35 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=busy_fermi, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0)
Oct  9 09:34:40 compute-1 busy_fermi[5808]: 167 167
Oct  9 09:34:40 compute-1 systemd[1]: libpod-a54500fb36a152c683b7244e1332c5643b56b9b3099664e7bb7f510322679a35.scope: Deactivated successfully.
Oct  9 09:34:40 compute-1 podman[5795]: 2025-10-09 09:34:40.683582689 +0000 UTC m=+0.082959622 container died a54500fb36a152c683b7244e1332c5643b56b9b3099664e7bb7f510322679a35 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=busy_fermi, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct  9 09:34:40 compute-1 podman[5795]: 2025-10-09 09:34:40.698635298 +0000 UTC m=+0.098012231 container remove a54500fb36a152c683b7244e1332c5643b56b9b3099664e7bb7f510322679a35 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=busy_fermi, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, ceph=True, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  9 09:34:40 compute-1 podman[5795]: 2025-10-09 09:34:40.615245446 +0000 UTC m=+0.014622378 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  9 09:34:40 compute-1 systemd[1]: libpod-conmon-a54500fb36a152c683b7244e1332c5643b56b9b3099664e7bb7f510322679a35.scope: Deactivated successfully.
Oct  9 09:34:40 compute-1 podman[5830]: 2025-10-09 09:34:40.802430751 +0000 UTC m=+0.024331313 container create 519d10df08d5943fd62faef942a046b22256bef0ee3b0cf03b635ab1bbfee1f5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=exciting_torvalds, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  9 09:34:40 compute-1 systemd[1]: Started libpod-conmon-519d10df08d5943fd62faef942a046b22256bef0ee3b0cf03b635ab1bbfee1f5.scope.
Oct  9 09:34:40 compute-1 systemd[1]: Started libcrun container.
Oct  9 09:34:40 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5e8d451fd393b71d71166de3311d16999bd73f7b5d8f7d2a7ecf996590b8df0f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  9 09:34:40 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5e8d451fd393b71d71166de3311d16999bd73f7b5d8f7d2a7ecf996590b8df0f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  9 09:34:40 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5e8d451fd393b71d71166de3311d16999bd73f7b5d8f7d2a7ecf996590b8df0f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  9 09:34:40 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5e8d451fd393b71d71166de3311d16999bd73f7b5d8f7d2a7ecf996590b8df0f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  9 09:34:40 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5e8d451fd393b71d71166de3311d16999bd73f7b5d8f7d2a7ecf996590b8df0f/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct  9 09:34:40 compute-1 podman[5830]: 2025-10-09 09:34:40.853290897 +0000 UTC m=+0.075191458 container init 519d10df08d5943fd62faef942a046b22256bef0ee3b0cf03b635ab1bbfee1f5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=exciting_torvalds, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS)
Oct  9 09:34:40 compute-1 podman[5830]: 2025-10-09 09:34:40.858992126 +0000 UTC m=+0.080892687 container start 519d10df08d5943fd62faef942a046b22256bef0ee3b0cf03b635ab1bbfee1f5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=exciting_torvalds, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Oct  9 09:34:40 compute-1 podman[5830]: 2025-10-09 09:34:40.86015628 +0000 UTC m=+0.082056841 container attach 519d10df08d5943fd62faef942a046b22256bef0ee3b0cf03b635ab1bbfee1f5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=exciting_torvalds, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  9 09:34:40 compute-1 podman[5830]: 2025-10-09 09:34:40.792421973 +0000 UTC m=+0.014322554 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  9 09:34:41 compute-1 exciting_torvalds[5843]: --> passed data devices: 0 physical, 1 LVM
Oct  9 09:34:41 compute-1 exciting_torvalds[5843]: Running command: /usr/bin/ceph-authtool --gen-print-key
Oct  9 09:34:41 compute-1 exciting_torvalds[5843]: Running command: /usr/bin/ceph-authtool --gen-print-key
Oct  9 09:34:41 compute-1 exciting_torvalds[5843]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new 6a6825df-a8f3-41ad-b7ed-1604f01d2f74
Oct  9 09:34:41 compute-1 exciting_torvalds[5843]: Running command: /usr/bin/mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-0
Oct  9 09:34:41 compute-1 exciting_torvalds[5843]: Running command: /usr/bin/chown -h ceph:ceph /dev/ceph_vg0/ceph_lv0
Oct  9 09:34:41 compute-1 lvm[5904]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Oct  9 09:34:41 compute-1 lvm[5904]: VG ceph_vg0 finished
Oct  9 09:34:41 compute-1 exciting_torvalds[5843]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Oct  9 09:34:41 compute-1 exciting_torvalds[5843]: Running command: /usr/bin/ln -s /dev/ceph_vg0/ceph_lv0 /var/lib/ceph/osd/ceph-0/block
Oct  9 09:34:41 compute-1 exciting_torvalds[5843]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring mon getmap -o /var/lib/ceph/osd/ceph-0/activate.monmap
Oct  9 09:34:41 compute-1 exciting_torvalds[5843]: stderr: got monmap epoch 1
Oct  9 09:34:41 compute-1 exciting_torvalds[5843]: --> Creating keyring file for osd.0
Oct  9 09:34:41 compute-1 exciting_torvalds[5843]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0/keyring
Oct  9 09:34:41 compute-1 exciting_torvalds[5843]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0/
Oct  9 09:34:41 compute-1 exciting_torvalds[5843]: Running command: /usr/bin/ceph-osd --cluster ceph --osd-objectstore bluestore --mkfs -i 0 --monmap /var/lib/ceph/osd/ceph-0/activate.monmap --keyfile - --osdspec-affinity default_drive_group --osd-data /var/lib/ceph/osd/ceph-0/ --osd-uuid 6a6825df-a8f3-41ad-b7ed-1604f01d2f74 --setuser ceph --setgroup ceph
Oct  9 09:34:44 compute-1 exciting_torvalds[5843]: stderr: 2025-10-09T09:34:41.946+0000 7f38f1678740 -1 bluestore(/var/lib/ceph/osd/ceph-0//block) No valid bdev label found
Oct  9 09:34:44 compute-1 exciting_torvalds[5843]: stderr: 2025-10-09T09:34:42.208+0000 7f38f1678740 -1 bluestore(/var/lib/ceph/osd/ceph-0/) _read_fsid unparsable uuid
Oct  9 09:34:44 compute-1 exciting_torvalds[5843]: --> ceph-volume lvm prepare successful for: ceph_vg0/ceph_lv0
Oct  9 09:34:44 compute-1 exciting_torvalds[5843]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Oct  9 09:34:44 compute-1 exciting_torvalds[5843]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg0/ceph_lv0 --path /var/lib/ceph/osd/ceph-0 --no-mon-config
Oct  9 09:34:45 compute-1 exciting_torvalds[5843]: Running command: /usr/bin/ln -snf /dev/ceph_vg0/ceph_lv0 /var/lib/ceph/osd/ceph-0/block
Oct  9 09:34:45 compute-1 exciting_torvalds[5843]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-0/block
Oct  9 09:34:45 compute-1 exciting_torvalds[5843]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Oct  9 09:34:45 compute-1 exciting_torvalds[5843]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Oct  9 09:34:45 compute-1 exciting_torvalds[5843]: --> ceph-volume lvm activate successful for osd ID: 0
Oct  9 09:34:45 compute-1 exciting_torvalds[5843]: --> ceph-volume lvm create successful for: ceph_vg0/ceph_lv0
Oct  9 09:34:45 compute-1 systemd[1]: libpod-519d10df08d5943fd62faef942a046b22256bef0ee3b0cf03b635ab1bbfee1f5.scope: Deactivated successfully.
Oct  9 09:34:45 compute-1 systemd[1]: libpod-519d10df08d5943fd62faef942a046b22256bef0ee3b0cf03b635ab1bbfee1f5.scope: Consumed 1.367s CPU time.
Oct  9 09:34:45 compute-1 podman[5830]: 2025-10-09 09:34:45.223812921 +0000 UTC m=+4.445713492 container died 519d10df08d5943fd62faef942a046b22256bef0ee3b0cf03b635ab1bbfee1f5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=exciting_torvalds, ceph=True, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325)
Oct  9 09:34:45 compute-1 systemd[1]: var-lib-containers-storage-overlay-5e8d451fd393b71d71166de3311d16999bd73f7b5d8f7d2a7ecf996590b8df0f-merged.mount: Deactivated successfully.
Oct  9 09:34:45 compute-1 podman[5830]: 2025-10-09 09:34:45.244104055 +0000 UTC m=+4.466004616 container remove 519d10df08d5943fd62faef942a046b22256bef0ee3b0cf03b635ab1bbfee1f5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=exciting_torvalds, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250325)
Oct  9 09:34:45 compute-1 systemd[1]: libpod-conmon-519d10df08d5943fd62faef942a046b22256bef0ee3b0cf03b635ab1bbfee1f5.scope: Deactivated successfully.
Oct  9 09:34:45 compute-1 podman[6906]: 2025-10-09 09:34:45.59559335 +0000 UTC m=+0.023209077 container create ec9002e816dbe69d9699d0e9111cb0059eafa6627ec7399ac5ff0ea82878aa4c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=charming_brattain, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0)
Oct  9 09:34:45 compute-1 systemd[1]: Started libpod-conmon-ec9002e816dbe69d9699d0e9111cb0059eafa6627ec7399ac5ff0ea82878aa4c.scope.
Oct  9 09:34:45 compute-1 systemd[1]: Started libcrun container.
Oct  9 09:34:45 compute-1 podman[6906]: 2025-10-09 09:34:45.643143255 +0000 UTC m=+0.070758992 container init ec9002e816dbe69d9699d0e9111cb0059eafa6627ec7399ac5ff0ea82878aa4c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=charming_brattain, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325)
Oct  9 09:34:45 compute-1 podman[6906]: 2025-10-09 09:34:45.647471766 +0000 UTC m=+0.075087492 container start ec9002e816dbe69d9699d0e9111cb0059eafa6627ec7399ac5ff0ea82878aa4c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=charming_brattain, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325)
Oct  9 09:34:45 compute-1 podman[6906]: 2025-10-09 09:34:45.648399003 +0000 UTC m=+0.076014730 container attach ec9002e816dbe69d9699d0e9111cb0059eafa6627ec7399ac5ff0ea82878aa4c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=charming_brattain, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0)
Oct  9 09:34:45 compute-1 charming_brattain[6920]: 167 167
Oct  9 09:34:45 compute-1 systemd[1]: libpod-ec9002e816dbe69d9699d0e9111cb0059eafa6627ec7399ac5ff0ea82878aa4c.scope: Deactivated successfully.
Oct  9 09:34:45 compute-1 conmon[6920]: conmon ec9002e816dbe69d9699 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-ec9002e816dbe69d9699d0e9111cb0059eafa6627ec7399ac5ff0ea82878aa4c.scope/container/memory.events
Oct  9 09:34:45 compute-1 podman[6906]: 2025-10-09 09:34:45.650991992 +0000 UTC m=+0.078607719 container died ec9002e816dbe69d9699d0e9111cb0059eafa6627ec7399ac5ff0ea82878aa4c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=charming_brattain, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Oct  9 09:34:45 compute-1 systemd[1]: var-lib-containers-storage-overlay-1d06b7f5026899de72e7403e102d1005a0547192e29bfe6b088f0a45fe3b0e12-merged.mount: Deactivated successfully.
Oct  9 09:34:45 compute-1 podman[6906]: 2025-10-09 09:34:45.67097728 +0000 UTC m=+0.098593007 container remove ec9002e816dbe69d9699d0e9111cb0059eafa6627ec7399ac5ff0ea82878aa4c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=charming_brattain, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.build-date=20250325, ceph=True, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  9 09:34:45 compute-1 podman[6906]: 2025-10-09 09:34:45.585783606 +0000 UTC m=+0.013399353 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  9 09:34:45 compute-1 systemd[1]: libpod-conmon-ec9002e816dbe69d9699d0e9111cb0059eafa6627ec7399ac5ff0ea82878aa4c.scope: Deactivated successfully.
Oct  9 09:34:45 compute-1 podman[6941]: 2025-10-09 09:34:45.77432581 +0000 UTC m=+0.024441790 container create 58ac78c8ec9329281c727b4b8e195798ca2b5a62fb1b69a501d32a211c1fdc69 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=tender_joliot, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Oct  9 09:34:45 compute-1 systemd[1]: Started libpod-conmon-58ac78c8ec9329281c727b4b8e195798ca2b5a62fb1b69a501d32a211c1fdc69.scope.
Oct  9 09:34:45 compute-1 systemd[1]: Started libcrun container.
Oct  9 09:34:45 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/271809d494a473cbaceb2c66cfb7905e3265504b4defac15ab5dde1f2c2e94d8/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  9 09:34:45 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/271809d494a473cbaceb2c66cfb7905e3265504b4defac15ab5dde1f2c2e94d8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  9 09:34:45 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/271809d494a473cbaceb2c66cfb7905e3265504b4defac15ab5dde1f2c2e94d8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  9 09:34:45 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/271809d494a473cbaceb2c66cfb7905e3265504b4defac15ab5dde1f2c2e94d8/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  9 09:34:45 compute-1 podman[6941]: 2025-10-09 09:34:45.812195174 +0000 UTC m=+0.062311154 container init 58ac78c8ec9329281c727b4b8e195798ca2b5a62fb1b69a501d32a211c1fdc69 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=tender_joliot, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid)
Oct  9 09:34:45 compute-1 podman[6941]: 2025-10-09 09:34:45.817568945 +0000 UTC m=+0.067684926 container start 58ac78c8ec9329281c727b4b8e195798ca2b5a62fb1b69a501d32a211c1fdc69 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=tender_joliot, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Oct  9 09:34:45 compute-1 podman[6941]: 2025-10-09 09:34:45.818526671 +0000 UTC m=+0.068642652 container attach 58ac78c8ec9329281c727b4b8e195798ca2b5a62fb1b69a501d32a211c1fdc69 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=tender_joliot, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2)
Oct  9 09:34:45 compute-1 podman[6941]: 2025-10-09 09:34:45.764732574 +0000 UTC m=+0.014848564 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  9 09:34:46 compute-1 tender_joliot[6954]: {
Oct  9 09:34:46 compute-1 tender_joliot[6954]:    "0": [
Oct  9 09:34:46 compute-1 tender_joliot[6954]:        {
Oct  9 09:34:46 compute-1 tender_joliot[6954]:            "devices": [
Oct  9 09:34:46 compute-1 tender_joliot[6954]:                "/dev/loop3"
Oct  9 09:34:46 compute-1 tender_joliot[6954]:            ],
Oct  9 09:34:46 compute-1 tender_joliot[6954]:            "lv_name": "ceph_lv0",
Oct  9 09:34:46 compute-1 tender_joliot[6954]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct  9 09:34:46 compute-1 tender_joliot[6954]:            "lv_size": "21470642176",
Oct  9 09:34:46 compute-1 tender_joliot[6954]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=HIhrYm-2lBn-uQRn-0mXY-X1mD-O9Ex-kh1Jbh,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=286f8bf0-da72-5823-9a4e-ac4457d9e609,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=6a6825df-a8f3-41ad-b7ed-1604f01d2f74,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Oct  9 09:34:46 compute-1 tender_joliot[6954]:            "lv_uuid": "HIhrYm-2lBn-uQRn-0mXY-X1mD-O9Ex-kh1Jbh",
Oct  9 09:34:46 compute-1 tender_joliot[6954]:            "name": "ceph_lv0",
Oct  9 09:34:46 compute-1 tender_joliot[6954]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct  9 09:34:46 compute-1 tender_joliot[6954]:            "tags": {
Oct  9 09:34:46 compute-1 tender_joliot[6954]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct  9 09:34:46 compute-1 tender_joliot[6954]:                "ceph.block_uuid": "HIhrYm-2lBn-uQRn-0mXY-X1mD-O9Ex-kh1Jbh",
Oct  9 09:34:46 compute-1 tender_joliot[6954]:                "ceph.cephx_lockbox_secret": "",
Oct  9 09:34:46 compute-1 tender_joliot[6954]:                "ceph.cluster_fsid": "286f8bf0-da72-5823-9a4e-ac4457d9e609",
Oct  9 09:34:46 compute-1 tender_joliot[6954]:                "ceph.cluster_name": "ceph",
Oct  9 09:34:46 compute-1 tender_joliot[6954]:                "ceph.crush_device_class": "",
Oct  9 09:34:46 compute-1 tender_joliot[6954]:                "ceph.encrypted": "0",
Oct  9 09:34:46 compute-1 tender_joliot[6954]:                "ceph.osd_fsid": "6a6825df-a8f3-41ad-b7ed-1604f01d2f74",
Oct  9 09:34:46 compute-1 tender_joliot[6954]:                "ceph.osd_id": "0",
Oct  9 09:34:46 compute-1 tender_joliot[6954]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  9 09:34:46 compute-1 tender_joliot[6954]:                "ceph.type": "block",
Oct  9 09:34:46 compute-1 tender_joliot[6954]:                "ceph.vdo": "0",
Oct  9 09:34:46 compute-1 tender_joliot[6954]:                "ceph.with_tpm": "0"
Oct  9 09:34:46 compute-1 tender_joliot[6954]:            },
Oct  9 09:34:46 compute-1 tender_joliot[6954]:            "type": "block",
Oct  9 09:34:46 compute-1 tender_joliot[6954]:            "vg_name": "ceph_vg0"
Oct  9 09:34:46 compute-1 tender_joliot[6954]:        }
Oct  9 09:34:46 compute-1 tender_joliot[6954]:    ]
Oct  9 09:34:46 compute-1 tender_joliot[6954]: }
Oct  9 09:34:46 compute-1 systemd[1]: libpod-58ac78c8ec9329281c727b4b8e195798ca2b5a62fb1b69a501d32a211c1fdc69.scope: Deactivated successfully.
Oct  9 09:34:46 compute-1 podman[6963]: 2025-10-09 09:34:46.067975752 +0000 UTC m=+0.014807027 container died 58ac78c8ec9329281c727b4b8e195798ca2b5a62fb1b69a501d32a211c1fdc69 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=tender_joliot, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Oct  9 09:34:46 compute-1 systemd[1]: var-lib-containers-storage-overlay-271809d494a473cbaceb2c66cfb7905e3265504b4defac15ab5dde1f2c2e94d8-merged.mount: Deactivated successfully.
Oct  9 09:34:46 compute-1 podman[6963]: 2025-10-09 09:34:46.085440668 +0000 UTC m=+0.032271934 container remove 58ac78c8ec9329281c727b4b8e195798ca2b5a62fb1b69a501d32a211c1fdc69 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=tender_joliot, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  9 09:34:46 compute-1 systemd[1]: libpod-conmon-58ac78c8ec9329281c727b4b8e195798ca2b5a62fb1b69a501d32a211c1fdc69.scope: Deactivated successfully.
Oct  9 09:34:46 compute-1 podman[7055]: 2025-10-09 09:34:46.452353991 +0000 UTC m=+0.022414368 container create 370c84a2a15b2fe89020e094a5e900200877994bf5355d7cbfc6b6a428a86381 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stoic_elgamal, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  9 09:34:46 compute-1 systemd[1]: Started libpod-conmon-370c84a2a15b2fe89020e094a5e900200877994bf5355d7cbfc6b6a428a86381.scope.
Oct  9 09:34:46 compute-1 systemd[1]: Started libcrun container.
Oct  9 09:34:46 compute-1 podman[7055]: 2025-10-09 09:34:46.502657377 +0000 UTC m=+0.072717754 container init 370c84a2a15b2fe89020e094a5e900200877994bf5355d7cbfc6b6a428a86381 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stoic_elgamal, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  9 09:34:46 compute-1 podman[7055]: 2025-10-09 09:34:46.506814795 +0000 UTC m=+0.076875162 container start 370c84a2a15b2fe89020e094a5e900200877994bf5355d7cbfc6b6a428a86381 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stoic_elgamal, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  9 09:34:46 compute-1 podman[7055]: 2025-10-09 09:34:46.507852962 +0000 UTC m=+0.077913329 container attach 370c84a2a15b2fe89020e094a5e900200877994bf5355d7cbfc6b6a428a86381 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stoic_elgamal, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  9 09:34:46 compute-1 stoic_elgamal[7069]: 167 167
Oct  9 09:34:46 compute-1 systemd[1]: libpod-370c84a2a15b2fe89020e094a5e900200877994bf5355d7cbfc6b6a428a86381.scope: Deactivated successfully.
Oct  9 09:34:46 compute-1 podman[7055]: 2025-10-09 09:34:46.509796907 +0000 UTC m=+0.079857274 container died 370c84a2a15b2fe89020e094a5e900200877994bf5355d7cbfc6b6a428a86381 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stoic_elgamal, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct  9 09:34:46 compute-1 systemd[1]: var-lib-containers-storage-overlay-729e6b3081340c0c41de3bcc35b8f2593ae1744d26038fcb2bf5d984201db246-merged.mount: Deactivated successfully.
Oct  9 09:34:46 compute-1 podman[7055]: 2025-10-09 09:34:46.528197056 +0000 UTC m=+0.098257423 container remove 370c84a2a15b2fe89020e094a5e900200877994bf5355d7cbfc6b6a428a86381 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stoic_elgamal, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_REF=squid)
Oct  9 09:34:46 compute-1 podman[7055]: 2025-10-09 09:34:46.443188232 +0000 UTC m=+0.013248609 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  9 09:34:46 compute-1 systemd[1]: libpod-conmon-370c84a2a15b2fe89020e094a5e900200877994bf5355d7cbfc6b6a428a86381.scope: Deactivated successfully.
Oct  9 09:34:46 compute-1 podman[7097]: 2025-10-09 09:34:46.695571994 +0000 UTC m=+0.025933533 container create 3afbde2936d035bbb68e84053816dd9e066cc1f41adfa0c394df560dbaf50937 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-osd-0-activate-test, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid)
Oct  9 09:34:46 compute-1 systemd[1]: Started libpod-conmon-3afbde2936d035bbb68e84053816dd9e066cc1f41adfa0c394df560dbaf50937.scope.
Oct  9 09:34:46 compute-1 systemd[1]: Started libcrun container.
Oct  9 09:34:46 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/36fc6a6a30d5c8a113d0b2d356baf8d3e8af9207ac825af9943000f6fa6fc134/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  9 09:34:46 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/36fc6a6a30d5c8a113d0b2d356baf8d3e8af9207ac825af9943000f6fa6fc134/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  9 09:34:46 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/36fc6a6a30d5c8a113d0b2d356baf8d3e8af9207ac825af9943000f6fa6fc134/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  9 09:34:46 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/36fc6a6a30d5c8a113d0b2d356baf8d3e8af9207ac825af9943000f6fa6fc134/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  9 09:34:46 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/36fc6a6a30d5c8a113d0b2d356baf8d3e8af9207ac825af9943000f6fa6fc134/merged/var/lib/ceph/osd/ceph-0 supports timestamps until 2038 (0x7fffffff)
Oct  9 09:34:46 compute-1 podman[7097]: 2025-10-09 09:34:46.749082246 +0000 UTC m=+0.079443785 container init 3afbde2936d035bbb68e84053816dd9e066cc1f41adfa0c394df560dbaf50937 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-osd-0-activate-test, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  9 09:34:46 compute-1 podman[7097]: 2025-10-09 09:34:46.755259693 +0000 UTC m=+0.085621221 container start 3afbde2936d035bbb68e84053816dd9e066cc1f41adfa0c394df560dbaf50937 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-osd-0-activate-test, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Oct  9 09:34:46 compute-1 podman[7097]: 2025-10-09 09:34:46.756368532 +0000 UTC m=+0.086730061 container attach 3afbde2936d035bbb68e84053816dd9e066cc1f41adfa0c394df560dbaf50937 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-osd-0-activate-test, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  9 09:34:46 compute-1 podman[7097]: 2025-10-09 09:34:46.68523149 +0000 UTC m=+0.015593038 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  9 09:34:46 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-osd-0-activate-test[7110]: usage: ceph-volume activate [-h] [--osd-id OSD_ID] [--osd-uuid OSD_FSID]
Oct  9 09:34:46 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-osd-0-activate-test[7110]:                            [--no-systemd] [--no-tmpfs]
Oct  9 09:34:46 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-osd-0-activate-test[7110]: ceph-volume activate: error: unrecognized arguments: --bad-option
Oct  9 09:34:46 compute-1 systemd[1]: libpod-3afbde2936d035bbb68e84053816dd9e066cc1f41adfa0c394df560dbaf50937.scope: Deactivated successfully.
Oct  9 09:34:46 compute-1 podman[7097]: 2025-10-09 09:34:46.902796219 +0000 UTC m=+0.233157749 container died 3afbde2936d035bbb68e84053816dd9e066cc1f41adfa0c394df560dbaf50937 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-osd-0-activate-test, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Oct  9 09:34:46 compute-1 systemd[1]: var-lib-containers-storage-overlay-36fc6a6a30d5c8a113d0b2d356baf8d3e8af9207ac825af9943000f6fa6fc134-merged.mount: Deactivated successfully.
Oct  9 09:34:46 compute-1 podman[7097]: 2025-10-09 09:34:46.924130861 +0000 UTC m=+0.254492389 container remove 3afbde2936d035bbb68e84053816dd9e066cc1f41adfa0c394df560dbaf50937 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-osd-0-activate-test, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  9 09:34:46 compute-1 systemd[1]: libpod-conmon-3afbde2936d035bbb68e84053816dd9e066cc1f41adfa0c394df560dbaf50937.scope: Deactivated successfully.
Oct  9 09:34:47 compute-1 systemd[1]: Reloading.
Oct  9 09:34:47 compute-1 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  9 09:34:47 compute-1 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  9 09:34:47 compute-1 systemd[1]: Reloading.
Oct  9 09:34:47 compute-1 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  9 09:34:47 compute-1 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  9 09:34:47 compute-1 systemd[1]: Starting Ceph osd.0 for 286f8bf0-da72-5823-9a4e-ac4457d9e609...
Oct  9 09:34:47 compute-1 podman[7259]: 2025-10-09 09:34:47.623795002 +0000 UTC m=+0.025790483 container create a86682fb28e2a9707eeb2ff42eca9f102fa39ae6a9cee571bee3a85f70445d5c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-osd-0-activate, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1)
Oct  9 09:34:47 compute-1 systemd[1]: Started libcrun container.
Oct  9 09:34:47 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/152a94c57c5918b34c7a0ce1075b979e5983d3a9e3a27148082e03f5573b52c8/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  9 09:34:47 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/152a94c57c5918b34c7a0ce1075b979e5983d3a9e3a27148082e03f5573b52c8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  9 09:34:47 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/152a94c57c5918b34c7a0ce1075b979e5983d3a9e3a27148082e03f5573b52c8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  9 09:34:47 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/152a94c57c5918b34c7a0ce1075b979e5983d3a9e3a27148082e03f5573b52c8/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  9 09:34:47 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/152a94c57c5918b34c7a0ce1075b979e5983d3a9e3a27148082e03f5573b52c8/merged/var/lib/ceph/osd/ceph-0 supports timestamps until 2038 (0x7fffffff)
Oct  9 09:34:47 compute-1 podman[7259]: 2025-10-09 09:34:47.664835404 +0000 UTC m=+0.066830875 container init a86682fb28e2a9707eeb2ff42eca9f102fa39ae6a9cee571bee3a85f70445d5c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-osd-0-activate, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Oct  9 09:34:47 compute-1 podman[7259]: 2025-10-09 09:34:47.670682648 +0000 UTC m=+0.072678119 container start a86682fb28e2a9707eeb2ff42eca9f102fa39ae6a9cee571bee3a85f70445d5c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-osd-0-activate, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS)
Oct  9 09:34:47 compute-1 podman[7259]: 2025-10-09 09:34:47.672007856 +0000 UTC m=+0.074003327 container attach a86682fb28e2a9707eeb2ff42eca9f102fa39ae6a9cee571bee3a85f70445d5c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-osd-0-activate, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0)
Oct  9 09:34:47 compute-1 podman[7259]: 2025-10-09 09:34:47.613037491 +0000 UTC m=+0.015032982 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  9 09:34:47 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-osd-0-activate[7271]: Running command: /usr/bin/ceph-authtool --gen-print-key
Oct  9 09:34:47 compute-1 bash[7259]: Running command: /usr/bin/ceph-authtool --gen-print-key
Oct  9 09:34:47 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-osd-0-activate[7271]: Running command: /usr/bin/ceph-authtool --gen-print-key
Oct  9 09:34:47 compute-1 bash[7259]: Running command: /usr/bin/ceph-authtool --gen-print-key
Oct  9 09:34:48 compute-1 lvm[7353]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Oct  9 09:34:48 compute-1 lvm[7353]: VG ceph_vg0 finished
Oct  9 09:34:48 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-osd-0-activate[7271]: --> Failed to activate via raw: did not find any matching OSD to activate
Oct  9 09:34:48 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-osd-0-activate[7271]: Running command: /usr/bin/ceph-authtool --gen-print-key
Oct  9 09:34:48 compute-1 bash[7259]: --> Failed to activate via raw: did not find any matching OSD to activate
Oct  9 09:34:48 compute-1 bash[7259]: Running command: /usr/bin/ceph-authtool --gen-print-key
Oct  9 09:34:48 compute-1 lvm[7357]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Oct  9 09:34:48 compute-1 lvm[7357]: VG ceph_vg0 finished
Oct  9 09:34:48 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-osd-0-activate[7271]: Running command: /usr/bin/ceph-authtool --gen-print-key
Oct  9 09:34:48 compute-1 bash[7259]: Running command: /usr/bin/ceph-authtool --gen-print-key
Oct  9 09:34:48 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-osd-0-activate[7271]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Oct  9 09:34:48 compute-1 bash[7259]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Oct  9 09:34:48 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-osd-0-activate[7271]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg0/ceph_lv0 --path /var/lib/ceph/osd/ceph-0 --no-mon-config
Oct  9 09:34:48 compute-1 bash[7259]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg0/ceph_lv0 --path /var/lib/ceph/osd/ceph-0 --no-mon-config
Oct  9 09:34:48 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-osd-0-activate[7271]: Running command: /usr/bin/ln -snf /dev/ceph_vg0/ceph_lv0 /var/lib/ceph/osd/ceph-0/block
Oct  9 09:34:48 compute-1 bash[7259]: Running command: /usr/bin/ln -snf /dev/ceph_vg0/ceph_lv0 /var/lib/ceph/osd/ceph-0/block
Oct  9 09:34:48 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-osd-0-activate[7271]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-0/block
Oct  9 09:34:48 compute-1 bash[7259]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-0/block
Oct  9 09:34:48 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-osd-0-activate[7271]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Oct  9 09:34:48 compute-1 bash[7259]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Oct  9 09:34:48 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-osd-0-activate[7271]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Oct  9 09:34:48 compute-1 bash[7259]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Oct  9 09:34:48 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-osd-0-activate[7271]: --> ceph-volume lvm activate successful for osd ID: 0
Oct  9 09:34:48 compute-1 bash[7259]: --> ceph-volume lvm activate successful for osd ID: 0
Oct  9 09:34:48 compute-1 systemd[1]: libpod-a86682fb28e2a9707eeb2ff42eca9f102fa39ae6a9cee571bee3a85f70445d5c.scope: Deactivated successfully.
Oct  9 09:34:48 compute-1 podman[7259]: 2025-10-09 09:34:48.591464046 +0000 UTC m=+0.993459527 container died a86682fb28e2a9707eeb2ff42eca9f102fa39ae6a9cee571bee3a85f70445d5c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-osd-0-activate, CEPH_REF=squid, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  9 09:34:48 compute-1 systemd[1]: var-lib-containers-storage-overlay-152a94c57c5918b34c7a0ce1075b979e5983d3a9e3a27148082e03f5573b52c8-merged.mount: Deactivated successfully.
Oct  9 09:34:48 compute-1 podman[7259]: 2025-10-09 09:34:48.612910358 +0000 UTC m=+1.014905829 container remove a86682fb28e2a9707eeb2ff42eca9f102fa39ae6a9cee571bee3a85f70445d5c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-osd-0-activate, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Oct  9 09:34:48 compute-1 podman[7497]: 2025-10-09 09:34:48.746618244 +0000 UTC m=+0.025384889 container create de046c66ba96a3549bd259ccfa5eb6fa1b5cdd3b076566d5ad43e142eefce08a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-osd-0, ceph=True, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1)
Oct  9 09:34:48 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/649690acaaa59aac072f3cff89da8667629825fe669a07fc59cdcc20a1865138/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  9 09:34:48 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/649690acaaa59aac072f3cff89da8667629825fe669a07fc59cdcc20a1865138/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  9 09:34:48 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/649690acaaa59aac072f3cff89da8667629825fe669a07fc59cdcc20a1865138/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  9 09:34:48 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/649690acaaa59aac072f3cff89da8667629825fe669a07fc59cdcc20a1865138/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  9 09:34:48 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/649690acaaa59aac072f3cff89da8667629825fe669a07fc59cdcc20a1865138/merged/var/lib/ceph/osd/ceph-0 supports timestamps until 2038 (0x7fffffff)
Oct  9 09:34:48 compute-1 podman[7497]: 2025-10-09 09:34:48.788129364 +0000 UTC m=+0.066896028 container init de046c66ba96a3549bd259ccfa5eb6fa1b5cdd3b076566d5ad43e142eefce08a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-osd-0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0)
Oct  9 09:34:48 compute-1 podman[7497]: 2025-10-09 09:34:48.792006023 +0000 UTC m=+0.070772667 container start de046c66ba96a3549bd259ccfa5eb6fa1b5cdd3b076566d5ad43e142eefce08a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-osd-0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, OSD_FLAVOR=default)
Oct  9 09:34:48 compute-1 bash[7497]: de046c66ba96a3549bd259ccfa5eb6fa1b5cdd3b076566d5ad43e142eefce08a
Oct  9 09:34:48 compute-1 podman[7497]: 2025-10-09 09:34:48.736069497 +0000 UTC m=+0.014836162 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  9 09:34:48 compute-1 systemd[1]: Started Ceph osd.0 for 286f8bf0-da72-5823-9a4e-ac4457d9e609.
Oct  9 09:34:48 compute-1 ceph-osd[7514]: set uid:gid to 167:167 (ceph:ceph)
Oct  9 09:34:48 compute-1 ceph-osd[7514]: ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable), process ceph-osd, pid 2
Oct  9 09:34:48 compute-1 ceph-osd[7514]: pidfile_write: ignore empty --pid-file
Oct  9 09:34:48 compute-1 ceph-osd[7514]: bdev(0x560c991dd800 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Oct  9 09:34:48 compute-1 ceph-osd[7514]: bdev(0x560c991dd800 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Oct  9 09:34:48 compute-1 ceph-osd[7514]: bdev(0x560c991dd800 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Oct  9 09:34:48 compute-1 ceph-osd[7514]: bdev(0x560c991dd800 /var/lib/ceph/osd/ceph-0/block) close
Oct  9 09:34:49 compute-1 ceph-osd[7514]: bdev(0x560c991dd800 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Oct  9 09:34:49 compute-1 ceph-osd[7514]: bdev(0x560c991dd800 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Oct  9 09:34:49 compute-1 ceph-osd[7514]: bdev(0x560c991dd800 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Oct  9 09:34:49 compute-1 ceph-osd[7514]: bdev(0x560c991dd800 /var/lib/ceph/osd/ceph-0/block) close
Oct  9 09:34:49 compute-1 podman[7608]: 2025-10-09 09:34:49.187536097 +0000 UTC m=+0.028960078 container create cc32d3d3291f79b3fc0b1478050e3679655389bf5de8b60f1eeb401a4b4e6071 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=mystifying_jones, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  9 09:34:49 compute-1 systemd[1]: Started libpod-conmon-cc32d3d3291f79b3fc0b1478050e3679655389bf5de8b60f1eeb401a4b4e6071.scope.
Oct  9 09:34:49 compute-1 systemd[1]: Started libcrun container.
Oct  9 09:34:49 compute-1 podman[7608]: 2025-10-09 09:34:49.23921702 +0000 UTC m=+0.080641001 container init cc32d3d3291f79b3fc0b1478050e3679655389bf5de8b60f1eeb401a4b4e6071 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=mystifying_jones, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True)
Oct  9 09:34:49 compute-1 podman[7608]: 2025-10-09 09:34:49.243887063 +0000 UTC m=+0.085311045 container start cc32d3d3291f79b3fc0b1478050e3679655389bf5de8b60f1eeb401a4b4e6071 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=mystifying_jones, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  9 09:34:49 compute-1 podman[7608]: 2025-10-09 09:34:49.244932074 +0000 UTC m=+0.086356075 container attach cc32d3d3291f79b3fc0b1478050e3679655389bf5de8b60f1eeb401a4b4e6071 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=mystifying_jones, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct  9 09:34:49 compute-1 mystifying_jones[7621]: 167 167
Oct  9 09:34:49 compute-1 systemd[1]: libpod-cc32d3d3291f79b3fc0b1478050e3679655389bf5de8b60f1eeb401a4b4e6071.scope: Deactivated successfully.
Oct  9 09:34:49 compute-1 podman[7608]: 2025-10-09 09:34:49.247854805 +0000 UTC m=+0.089278785 container died cc32d3d3291f79b3fc0b1478050e3679655389bf5de8b60f1eeb401a4b4e6071 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=mystifying_jones, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Oct  9 09:34:49 compute-1 systemd[1]: var-lib-containers-storage-overlay-cb1bd102f2a270647ee9097340e3e70733f6d9f4c0afb09c0c3dac7d5e116e8f-merged.mount: Deactivated successfully.
Oct  9 09:34:49 compute-1 podman[7608]: 2025-10-09 09:34:49.264988525 +0000 UTC m=+0.106412507 container remove cc32d3d3291f79b3fc0b1478050e3679655389bf5de8b60f1eeb401a4b4e6071 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=mystifying_jones, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  9 09:34:49 compute-1 podman[7608]: 2025-10-09 09:34:49.173442586 +0000 UTC m=+0.014866588 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  9 09:34:49 compute-1 systemd[1]: libpod-conmon-cc32d3d3291f79b3fc0b1478050e3679655389bf5de8b60f1eeb401a4b4e6071.scope: Deactivated successfully.
Oct  9 09:34:49 compute-1 ceph-osd[7514]: bdev(0x560c991dd800 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Oct  9 09:34:49 compute-1 ceph-osd[7514]: bdev(0x560c991dd800 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Oct  9 09:34:49 compute-1 ceph-osd[7514]: bdev(0x560c991dd800 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Oct  9 09:34:49 compute-1 ceph-osd[7514]: bdev(0x560c991dd800 /var/lib/ceph/osd/ceph-0/block) close
Oct  9 09:34:49 compute-1 podman[7642]: 2025-10-09 09:34:49.377733181 +0000 UTC m=+0.030185810 container create 1c40ebf4c3ad928f697ce45ef57d72082a4f343e001fe5004130d95da316343d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=focused_chebyshev, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Oct  9 09:34:49 compute-1 systemd[1]: Started libpod-conmon-1c40ebf4c3ad928f697ce45ef57d72082a4f343e001fe5004130d95da316343d.scope.
Oct  9 09:34:49 compute-1 systemd[1]: Started libcrun container.
Oct  9 09:34:49 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0a7fa6941b84fe7290ca329d5608d3714a6831e3fb0c027ac0511962e67f88e7/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  9 09:34:49 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0a7fa6941b84fe7290ca329d5608d3714a6831e3fb0c027ac0511962e67f88e7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  9 09:34:49 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0a7fa6941b84fe7290ca329d5608d3714a6831e3fb0c027ac0511962e67f88e7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  9 09:34:49 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0a7fa6941b84fe7290ca329d5608d3714a6831e3fb0c027ac0511962e67f88e7/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  9 09:34:49 compute-1 podman[7642]: 2025-10-09 09:34:49.431953851 +0000 UTC m=+0.084406481 container init 1c40ebf4c3ad928f697ce45ef57d72082a4f343e001fe5004130d95da316343d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=focused_chebyshev, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  9 09:34:49 compute-1 podman[7642]: 2025-10-09 09:34:49.437839758 +0000 UTC m=+0.090292387 container start 1c40ebf4c3ad928f697ce45ef57d72082a4f343e001fe5004130d95da316343d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=focused_chebyshev, CEPH_REF=squid, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  9 09:34:49 compute-1 podman[7642]: 2025-10-09 09:34:49.438830487 +0000 UTC m=+0.091283115 container attach 1c40ebf4c3ad928f697ce45ef57d72082a4f343e001fe5004130d95da316343d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=focused_chebyshev, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  9 09:34:49 compute-1 podman[7642]: 2025-10-09 09:34:49.363583525 +0000 UTC m=+0.016036174 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  9 09:34:49 compute-1 ceph-osd[7514]: bdev(0x560c991dd800 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Oct  9 09:34:49 compute-1 ceph-osd[7514]: bdev(0x560c991dd800 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Oct  9 09:34:49 compute-1 ceph-osd[7514]: bdev(0x560c991dd800 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Oct  9 09:34:49 compute-1 ceph-osd[7514]: bdev(0x560c991dd800 /var/lib/ceph/osd/ceph-0/block) close
Oct  9 09:34:49 compute-1 ceph-osd[7514]: bdev(0x560c991dd800 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Oct  9 09:34:49 compute-1 ceph-osd[7514]: bdev(0x560c991dd800 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Oct  9 09:34:49 compute-1 ceph-osd[7514]: bdev(0x560c991dd800 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Oct  9 09:34:49 compute-1 ceph-osd[7514]: bdev(0x560c991dd800 /var/lib/ceph/osd/ceph-0/block) close
Oct  9 09:34:49 compute-1 lvm[7736]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Oct  9 09:34:49 compute-1 lvm[7736]: VG ceph_vg0 finished
Oct  9 09:34:49 compute-1 ceph-osd[7514]: bdev(0x560c991dd800 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Oct  9 09:34:49 compute-1 ceph-osd[7514]: bdev(0x560c991dd800 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Oct  9 09:34:49 compute-1 ceph-osd[7514]: bdev(0x560c991dd800 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Oct  9 09:34:49 compute-1 ceph-osd[7514]: bluestore(/var/lib/ceph/osd/ceph-0) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Oct  9 09:34:49 compute-1 ceph-osd[7514]: bdev(0x560c991ddc00 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Oct  9 09:34:49 compute-1 ceph-osd[7514]: bdev(0x560c991ddc00 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Oct  9 09:34:49 compute-1 ceph-osd[7514]: bdev(0x560c991ddc00 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Oct  9 09:34:49 compute-1 ceph-osd[7514]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-0/block size 20 GiB
Oct  9 09:34:49 compute-1 ceph-osd[7514]: bdev(0x560c991ddc00 /var/lib/ceph/osd/ceph-0/block) close
Oct  9 09:34:49 compute-1 focused_chebyshev[7658]: {}
Oct  9 09:34:49 compute-1 systemd[1]: libpod-1c40ebf4c3ad928f697ce45ef57d72082a4f343e001fe5004130d95da316343d.scope: Deactivated successfully.
Oct  9 09:34:49 compute-1 podman[7642]: 2025-10-09 09:34:49.94224911 +0000 UTC m=+0.594701739 container died 1c40ebf4c3ad928f697ce45ef57d72082a4f343e001fe5004130d95da316343d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=focused_chebyshev, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  9 09:34:49 compute-1 systemd[1]: var-lib-containers-storage-overlay-0a7fa6941b84fe7290ca329d5608d3714a6831e3fb0c027ac0511962e67f88e7-merged.mount: Deactivated successfully.
Oct  9 09:34:49 compute-1 podman[7642]: 2025-10-09 09:34:49.963175882 +0000 UTC m=+0.615628512 container remove 1c40ebf4c3ad928f697ce45ef57d72082a4f343e001fe5004130d95da316343d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=focused_chebyshev, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  9 09:34:49 compute-1 systemd[1]: libpod-conmon-1c40ebf4c3ad928f697ce45ef57d72082a4f343e001fe5004130d95da316343d.scope: Deactivated successfully.
Oct  9 09:34:50 compute-1 ceph-osd[7514]: bdev(0x560c991dd800 /var/lib/ceph/osd/ceph-0/block) close
Oct  9 09:34:50 compute-1 ceph-osd[7514]: starting osd.0 osd_data /var/lib/ceph/osd/ceph-0 /var/lib/ceph/osd/ceph-0/journal
Oct  9 09:34:50 compute-1 ceph-osd[7514]: load: jerasure load: lrc 
Oct  9 09:34:50 compute-1 ceph-osd[7514]: bdev(0x560c9a078c00 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Oct  9 09:34:50 compute-1 ceph-osd[7514]: bdev(0x560c9a078c00 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Oct  9 09:34:50 compute-1 ceph-osd[7514]: bdev(0x560c9a078c00 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Oct  9 09:34:50 compute-1 ceph-osd[7514]: bluestore(/var/lib/ceph/osd/ceph-0) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Oct  9 09:34:50 compute-1 ceph-osd[7514]: bdev(0x560c9a078c00 /var/lib/ceph/osd/ceph-0/block) close
Oct  9 09:34:50 compute-1 podman[7891]: 2025-10-09 09:34:50.606731452 +0000 UTC m=+0.039603475 container exec cafaadfcff4fbda41fcfa94e93876b61b17048007232ab1b066948d6a6dac74a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-crash-compute-1, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Oct  9 09:34:50 compute-1 podman[7891]: 2025-10-09 09:34:50.681931385 +0000 UTC m=+0.114803397 container exec_died cafaadfcff4fbda41fcfa94e93876b61b17048007232ab1b066948d6a6dac74a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-crash-compute-1, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  9 09:34:50 compute-1 ceph-osd[7514]: bdev(0x560c9a078c00 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Oct  9 09:34:50 compute-1 ceph-osd[7514]: bdev(0x560c9a078c00 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Oct  9 09:34:50 compute-1 ceph-osd[7514]: bdev(0x560c9a078c00 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Oct  9 09:34:50 compute-1 ceph-osd[7514]: bluestore(/var/lib/ceph/osd/ceph-0) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Oct  9 09:34:50 compute-1 ceph-osd[7514]: bdev(0x560c9a078c00 /var/lib/ceph/osd/ceph-0/block) close
Oct  9 09:34:50 compute-1 ceph-osd[7514]: mClockScheduler: set_osd_capacity_params_from_config: osd_bandwidth_cost_per_io: 499321.90 bytes/io, osd_bandwidth_capacity_per_shard 157286400.00 bytes/second
Oct  9 09:34:50 compute-1 ceph-osd[7514]: osd.0:0.OSDShard using op scheduler mclock_scheduler, cutoff=196
Oct  9 09:34:50 compute-1 ceph-osd[7514]: bdev(0x560c9a078c00 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Oct  9 09:34:50 compute-1 ceph-osd[7514]: bdev(0x560c9a078c00 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Oct  9 09:34:50 compute-1 ceph-osd[7514]: bdev(0x560c9a078c00 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Oct  9 09:34:50 compute-1 ceph-osd[7514]: bdev(0x560c9a078c00 /var/lib/ceph/osd/ceph-0/block) close
Oct  9 09:34:51 compute-1 podman[8028]: 2025-10-09 09:34:51.1322869 +0000 UTC m=+0.026035595 container create ae3b8848b5cafb38fd59435931046d6aa794b0b5f7f8e6a506a4325018a43b2d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=inspiring_mcnulty, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.schema-version=1.0)
Oct  9 09:34:51 compute-1 systemd[1]: Started libpod-conmon-ae3b8848b5cafb38fd59435931046d6aa794b0b5f7f8e6a506a4325018a43b2d.scope.
Oct  9 09:34:51 compute-1 systemd[1]: Started libcrun container.
Oct  9 09:34:51 compute-1 podman[8028]: 2025-10-09 09:34:51.183912769 +0000 UTC m=+0.077661464 container init ae3b8848b5cafb38fd59435931046d6aa794b0b5f7f8e6a506a4325018a43b2d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=inspiring_mcnulty, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  9 09:34:51 compute-1 podman[8028]: 2025-10-09 09:34:51.188732445 +0000 UTC m=+0.082481140 container start ae3b8848b5cafb38fd59435931046d6aa794b0b5f7f8e6a506a4325018a43b2d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=inspiring_mcnulty, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Oct  9 09:34:51 compute-1 podman[8028]: 2025-10-09 09:34:51.189752459 +0000 UTC m=+0.083501154 container attach ae3b8848b5cafb38fd59435931046d6aa794b0b5f7f8e6a506a4325018a43b2d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=inspiring_mcnulty, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS)
Oct  9 09:34:51 compute-1 inspiring_mcnulty[8040]: 167 167
Oct  9 09:34:51 compute-1 systemd[1]: libpod-ae3b8848b5cafb38fd59435931046d6aa794b0b5f7f8e6a506a4325018a43b2d.scope: Deactivated successfully.
Oct  9 09:34:51 compute-1 conmon[8040]: conmon ae3b8848b5cafb38fd59 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-ae3b8848b5cafb38fd59435931046d6aa794b0b5f7f8e6a506a4325018a43b2d.scope/container/memory.events
Oct  9 09:34:51 compute-1 podman[8028]: 2025-10-09 09:34:51.192199332 +0000 UTC m=+0.085948037 container died ae3b8848b5cafb38fd59435931046d6aa794b0b5f7f8e6a506a4325018a43b2d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=inspiring_mcnulty, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  9 09:34:51 compute-1 systemd[1]: var-lib-containers-storage-overlay-aa8e0e53c9586cba66bc90c8215558cf6fbd4dc0eaf9fd50967f425d020ddaaf-merged.mount: Deactivated successfully.
Oct  9 09:34:51 compute-1 podman[8028]: 2025-10-09 09:34:51.211460003 +0000 UTC m=+0.105208699 container remove ae3b8848b5cafb38fd59435931046d6aa794b0b5f7f8e6a506a4325018a43b2d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=inspiring_mcnulty, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Oct  9 09:34:51 compute-1 podman[8028]: 2025-10-09 09:34:51.121325896 +0000 UTC m=+0.015074601 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  9 09:34:51 compute-1 systemd[1]: libpod-conmon-ae3b8848b5cafb38fd59435931046d6aa794b0b5f7f8e6a506a4325018a43b2d.scope: Deactivated successfully.
Oct  9 09:34:51 compute-1 ceph-osd[7514]: bdev(0x560c9a078c00 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Oct  9 09:34:51 compute-1 ceph-osd[7514]: bdev(0x560c9a078c00 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Oct  9 09:34:51 compute-1 ceph-osd[7514]: bdev(0x560c9a078c00 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Oct  9 09:34:51 compute-1 ceph-osd[7514]: bdev(0x560c9a078c00 /var/lib/ceph/osd/ceph-0/block) close
Oct  9 09:34:51 compute-1 ceph-osd[7514]: bdev(0x560c9a078c00 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Oct  9 09:34:51 compute-1 ceph-osd[7514]: bdev(0x560c9a078c00 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Oct  9 09:34:51 compute-1 ceph-osd[7514]: bdev(0x560c9a078c00 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Oct  9 09:34:51 compute-1 ceph-osd[7514]: bdev(0x560c9a078c00 /var/lib/ceph/osd/ceph-0/block) close
Oct  9 09:34:51 compute-1 ceph-osd[7514]: bdev(0x560c9a078c00 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Oct  9 09:34:51 compute-1 ceph-osd[7514]: bdev(0x560c9a078c00 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Oct  9 09:34:51 compute-1 ceph-osd[7514]: bdev(0x560c9a078c00 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Oct  9 09:34:51 compute-1 ceph-osd[7514]: bluestore(/var/lib/ceph/osd/ceph-0) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Oct  9 09:34:51 compute-1 ceph-osd[7514]: bdev(0x560c9a079000 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Oct  9 09:34:51 compute-1 ceph-osd[7514]: bdev(0x560c9a079000 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Oct  9 09:34:51 compute-1 ceph-osd[7514]: bdev(0x560c9a079000 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Oct  9 09:34:51 compute-1 ceph-osd[7514]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-0/block size 20 GiB
Oct  9 09:34:51 compute-1 ceph-osd[7514]: bluefs mount
Oct  9 09:34:51 compute-1 ceph-osd[7514]: bluefs _init_alloc shared, id 1, capacity 0x4ffc00000, block size 0x10000
Oct  9 09:34:51 compute-1 ceph-osd[7514]: bluefs mount final locked allocations 0 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Oct  9 09:34:51 compute-1 ceph-osd[7514]: bluefs mount final locked allocations 1 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Oct  9 09:34:51 compute-1 ceph-osd[7514]: bluefs mount final locked allocations 2 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Oct  9 09:34:51 compute-1 ceph-osd[7514]: bluefs mount final locked allocations 3 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Oct  9 09:34:51 compute-1 ceph-osd[7514]: bluefs mount final locked allocations 4 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Oct  9 09:34:51 compute-1 ceph-osd[7514]: bluefs mount shared_bdev_used = 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: bluestore(/var/lib/ceph/osd/ceph-0) _prepare_db_environment set db_paths to db,20397110067 db.slow,20397110067
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb: RocksDB version: 7.9.2
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Git sha 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Compile date 2025-07-17 03:12:14
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb: DB SUMMARY
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb: DB Session ID:  UVT2D1S8UT2VTLEPFV4T
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb: CURRENT file:  CURRENT
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb: IDENTITY file:  IDENTITY
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5097 ; 
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                         Options.error_if_exists: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                       Options.create_if_missing: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                         Options.paranoid_checks: 1
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:             Options.flush_verify_memtable_count: 1
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                                     Options.env: 0x560c9a049dc0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                                      Options.fs: LegacyFileSystem
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                                Options.info_log: 0x560c9a04d7a0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                Options.max_file_opening_threads: 16
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                              Options.statistics: (nil)
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                               Options.use_fsync: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                       Options.max_log_file_size: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                   Options.log_file_time_to_roll: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                       Options.keep_log_file_num: 1000
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                    Options.recycle_log_file_num: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                         Options.allow_fallocate: 1
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                        Options.allow_mmap_reads: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                       Options.allow_mmap_writes: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                        Options.use_direct_reads: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:          Options.create_missing_column_families: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                              Options.db_log_dir: 
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                                 Options.wal_dir: db.wal
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                Options.table_cache_numshardbits: 6
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                         Options.WAL_ttl_seconds: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                       Options.WAL_size_limit_MB: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:             Options.manifest_preallocation_size: 4194304
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                     Options.is_fd_close_on_exec: 1
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                   Options.advise_random_on_open: 1
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                    Options.db_write_buffer_size: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                    Options.write_buffer_manager: 0x560c9a144a00
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:         Options.access_hint_on_compaction_start: 1
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                      Options.use_adaptive_mutex: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                            Options.rate_limiter: (nil)
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                       Options.wal_recovery_mode: 2
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                  Options.enable_thread_tracking: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                  Options.enable_pipelined_write: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                  Options.unordered_write: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:             Options.write_thread_max_yield_usec: 100
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                               Options.row_cache: None
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                              Options.wal_filter: None
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:             Options.avoid_flush_during_recovery: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:             Options.allow_ingest_behind: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:             Options.two_write_queues: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:             Options.manual_wal_flush: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:             Options.wal_compression: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:             Options.atomic_flush: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                 Options.persist_stats_to_disk: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                 Options.write_dbid_to_manifest: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                 Options.log_readahead_size: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                 Options.best_efforts_recovery: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:             Options.allow_data_in_errors: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:             Options.db_host_id: __hostname__
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:             Options.enforce_single_del_contracts: true
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:             Options.max_background_jobs: 4
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:             Options.max_background_compactions: -1
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:             Options.max_subcompactions: 1
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:           Options.writable_file_max_buffer_size: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:             Options.delayed_write_rate : 16777216
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:             Options.max_total_wal_size: 1073741824
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                   Options.stats_dump_period_sec: 600
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                 Options.stats_persist_period_sec: 600
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                          Options.max_open_files: -1
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                          Options.bytes_per_sync: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                      Options.wal_bytes_per_sync: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                   Options.strict_bytes_per_sync: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:       Options.compaction_readahead_size: 2097152
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                  Options.max_background_flushes: -1
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Compression algorithms supported:
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:         kZSTD supported: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:         kXpressCompression supported: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:         kBZip2Compression supported: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:         kZSTDNotFinalCompression supported: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:         kLZ4Compression supported: 1
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:         kZlibCompression supported: 1
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:         kLZ4HCCompression supported: 1
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:         kSnappyCompression supported: 1
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Fast CRC32 supported: Supported on x86
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb: DMutex implementation: pthread_mutex_t
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb: [db/db_impl/db_impl_readonly.cc:25] Opening the db in read only mode
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:        Options.compaction_filter: None
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:        Options.compaction_filter_factory: None
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:  Options.sst_partitioner_factory: None
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:            Options.table_factory: BlockBasedTable
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x560c9a04db60)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x560c99273350
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:        Options.write_buffer_size: 16777216
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:  Options.max_write_buffer_number: 64
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:          Options.compression: LZ4
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                  Options.bottommost_compression: Disabled
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:       Options.prefix_extractor: nullptr
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:             Options.num_levels: 7
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:            Options.compression_opts.window_bits: -14
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                  Options.compression_opts.level: 32767
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:               Options.compression_opts.strategy: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                  Options.compression_opts.enabled: false
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                   Options.target_file_size_base: 67108864
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:             Options.target_file_size_multiplier: 1
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                        Options.arena_block_size: 1048576
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                Options.disable_auto_compactions: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                   Options.inplace_update_support: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:   Options.memtable_huge_page_size: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                           Options.bloom_locality: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                    Options.max_successive_merges: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                Options.paranoid_file_checks: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                Options.force_consistency_checks: 1
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                Options.report_bg_io_stats: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                               Options.ttl: 2592000
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                       Options.enable_blob_files: false
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                           Options.min_blob_size: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                          Options.blob_file_size: 268435456
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                Options.blob_file_starting_level: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:           Options.merge_operator: None
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:        Options.compaction_filter: None
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:        Options.compaction_filter_factory: None
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:  Options.sst_partitioner_factory: None
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:            Options.table_factory: BlockBasedTable
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x560c9a04db60)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x560c99273350
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:        Options.write_buffer_size: 16777216
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:  Options.max_write_buffer_number: 64
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:          Options.compression: LZ4
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                  Options.bottommost_compression: Disabled
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:       Options.prefix_extractor: nullptr
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:             Options.num_levels: 7
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:            Options.compression_opts.window_bits: -14
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                  Options.compression_opts.level: 32767
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:               Options.compression_opts.strategy: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                  Options.compression_opts.enabled: false
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                   Options.target_file_size_base: 67108864
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:             Options.target_file_size_multiplier: 1
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                        Options.arena_block_size: 1048576
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                Options.disable_auto_compactions: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                   Options.inplace_update_support: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:   Options.memtable_huge_page_size: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                           Options.bloom_locality: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                    Options.max_successive_merges: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                Options.paranoid_file_checks: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                Options.force_consistency_checks: 1
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                Options.report_bg_io_stats: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                               Options.ttl: 2592000
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                       Options.enable_blob_files: false
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                           Options.min_blob_size: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                          Options.blob_file_size: 268435456
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                Options.blob_file_starting_level: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:           Options.merge_operator: None
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:        Options.compaction_filter: None
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:        Options.compaction_filter_factory: None
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:  Options.sst_partitioner_factory: None
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:            Options.table_factory: BlockBasedTable
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x560c9a04db60)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x560c99273350
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:        Options.write_buffer_size: 16777216
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:  Options.max_write_buffer_number: 64
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:          Options.compression: LZ4
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                  Options.bottommost_compression: Disabled
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:       Options.prefix_extractor: nullptr
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:             Options.num_levels: 7
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:            Options.compression_opts.window_bits: -14
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                  Options.compression_opts.level: 32767
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:               Options.compression_opts.strategy: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                  Options.compression_opts.enabled: false
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                   Options.target_file_size_base: 67108864
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:             Options.target_file_size_multiplier: 1
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                        Options.arena_block_size: 1048576
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                Options.disable_auto_compactions: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                   Options.inplace_update_support: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:   Options.memtable_huge_page_size: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                           Options.bloom_locality: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                    Options.max_successive_merges: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                Options.paranoid_file_checks: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                Options.force_consistency_checks: 1
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                Options.report_bg_io_stats: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                               Options.ttl: 2592000
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                       Options.enable_blob_files: false
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                           Options.min_blob_size: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                          Options.blob_file_size: 268435456
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                Options.blob_file_starting_level: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:           Options.merge_operator: None
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:        Options.compaction_filter: None
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:        Options.compaction_filter_factory: None
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:  Options.sst_partitioner_factory: None
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:            Options.table_factory: BlockBasedTable
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x560c9a04db60)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x560c99273350
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:        Options.write_buffer_size: 16777216
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:  Options.max_write_buffer_number: 64
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:          Options.compression: LZ4
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                  Options.bottommost_compression: Disabled
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:       Options.prefix_extractor: nullptr
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:             Options.num_levels: 7
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:            Options.compression_opts.window_bits: -14
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                  Options.compression_opts.level: 32767
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:               Options.compression_opts.strategy: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                  Options.compression_opts.enabled: false
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                   Options.target_file_size_base: 67108864
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:             Options.target_file_size_multiplier: 1
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                        Options.arena_block_size: 1048576
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                Options.disable_auto_compactions: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                   Options.inplace_update_support: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:   Options.memtable_huge_page_size: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                           Options.bloom_locality: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                    Options.max_successive_merges: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                Options.paranoid_file_checks: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                Options.force_consistency_checks: 1
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                Options.report_bg_io_stats: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                               Options.ttl: 2592000
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                       Options.enable_blob_files: false
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                           Options.min_blob_size: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                          Options.blob_file_size: 268435456
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                Options.blob_file_starting_level: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:           Options.merge_operator: None
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:        Options.compaction_filter: None
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:        Options.compaction_filter_factory: None
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:  Options.sst_partitioner_factory: None
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:            Options.table_factory: BlockBasedTable
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x560c9a04db60)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x560c99273350
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:        Options.write_buffer_size: 16777216
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:  Options.max_write_buffer_number: 64
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:          Options.compression: LZ4
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                  Options.bottommost_compression: Disabled
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:       Options.prefix_extractor: nullptr
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:             Options.num_levels: 7
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:            Options.compression_opts.window_bits: -14
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                  Options.compression_opts.level: 32767
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:               Options.compression_opts.strategy: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                  Options.compression_opts.enabled: false
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                   Options.target_file_size_base: 67108864
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:             Options.target_file_size_multiplier: 1
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                        Options.arena_block_size: 1048576
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                Options.disable_auto_compactions: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                   Options.inplace_update_support: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:   Options.memtable_huge_page_size: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                           Options.bloom_locality: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                    Options.max_successive_merges: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                Options.paranoid_file_checks: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                Options.force_consistency_checks: 1
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                Options.report_bg_io_stats: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                               Options.ttl: 2592000
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                       Options.enable_blob_files: false
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                           Options.min_blob_size: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                          Options.blob_file_size: 268435456
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                Options.blob_file_starting_level: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:           Options.merge_operator: None
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:        Options.compaction_filter: None
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:        Options.compaction_filter_factory: None
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:  Options.sst_partitioner_factory: None
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:            Options.table_factory: BlockBasedTable
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x560c9a04db60)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x560c99273350
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:        Options.write_buffer_size: 16777216
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:  Options.max_write_buffer_number: 64
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:          Options.compression: LZ4
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                  Options.bottommost_compression: Disabled
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:       Options.prefix_extractor: nullptr
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:             Options.num_levels: 7
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:            Options.compression_opts.window_bits: -14
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                  Options.compression_opts.level: 32767
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:               Options.compression_opts.strategy: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                  Options.compression_opts.enabled: false
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                   Options.target_file_size_base: 67108864
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:             Options.target_file_size_multiplier: 1
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                        Options.arena_block_size: 1048576
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                Options.disable_auto_compactions: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                   Options.inplace_update_support: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:   Options.memtable_huge_page_size: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                           Options.bloom_locality: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                    Options.max_successive_merges: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                Options.paranoid_file_checks: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                Options.force_consistency_checks: 1
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                Options.report_bg_io_stats: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                               Options.ttl: 2592000
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                       Options.enable_blob_files: false
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                           Options.min_blob_size: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                          Options.blob_file_size: 268435456
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                Options.blob_file_starting_level: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:           Options.merge_operator: None
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:        Options.compaction_filter: None
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:        Options.compaction_filter_factory: None
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:  Options.sst_partitioner_factory: None
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:            Options.table_factory: BlockBasedTable
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x560c9a04db60)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x560c99273350
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:        Options.write_buffer_size: 16777216
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:  Options.max_write_buffer_number: 64
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:          Options.compression: LZ4
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                  Options.bottommost_compression: Disabled
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:       Options.prefix_extractor: nullptr
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:             Options.num_levels: 7
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:            Options.compression_opts.window_bits: -14
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                  Options.compression_opts.level: 32767
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:               Options.compression_opts.strategy: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                  Options.compression_opts.enabled: false
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                   Options.target_file_size_base: 67108864
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:             Options.target_file_size_multiplier: 1
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                        Options.arena_block_size: 1048576
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                Options.disable_auto_compactions: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                   Options.inplace_update_support: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:   Options.memtable_huge_page_size: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                           Options.bloom_locality: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                    Options.max_successive_merges: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                Options.paranoid_file_checks: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                Options.force_consistency_checks: 1
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                Options.report_bg_io_stats: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                               Options.ttl: 2592000
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                       Options.enable_blob_files: false
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                           Options.min_blob_size: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                          Options.blob_file_size: 268435456
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                Options.blob_file_starting_level: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:           Options.merge_operator: None
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:        Options.compaction_filter: None
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:        Options.compaction_filter_factory: None
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:  Options.sst_partitioner_factory: None
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:            Options.table_factory: BlockBasedTable
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x560c9a04db80)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x560c992729b0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 536870912
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:        Options.write_buffer_size: 16777216
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:  Options.max_write_buffer_number: 64
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:          Options.compression: LZ4
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                  Options.bottommost_compression: Disabled
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:       Options.prefix_extractor: nullptr
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:             Options.num_levels: 7
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:            Options.compression_opts.window_bits: -14
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                  Options.compression_opts.level: 32767
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:               Options.compression_opts.strategy: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                  Options.compression_opts.enabled: false
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                   Options.target_file_size_base: 67108864
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:             Options.target_file_size_multiplier: 1
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                        Options.arena_block_size: 1048576
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                Options.disable_auto_compactions: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                   Options.inplace_update_support: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:   Options.memtable_huge_page_size: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                           Options.bloom_locality: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                    Options.max_successive_merges: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                Options.paranoid_file_checks: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                Options.force_consistency_checks: 1
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                Options.report_bg_io_stats: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                               Options.ttl: 2592000
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                       Options.enable_blob_files: false
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                           Options.min_blob_size: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                          Options.blob_file_size: 268435456
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                Options.blob_file_starting_level: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:           Options.merge_operator: None
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:        Options.compaction_filter: None
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:        Options.compaction_filter_factory: None
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:  Options.sst_partitioner_factory: None
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:            Options.table_factory: BlockBasedTable
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x560c9a04db80)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x560c992729b0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 536870912
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:        Options.write_buffer_size: 16777216
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:  Options.max_write_buffer_number: 64
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:          Options.compression: LZ4
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                  Options.bottommost_compression: Disabled
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:       Options.prefix_extractor: nullptr
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:             Options.num_levels: 7
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:            Options.compression_opts.window_bits: -14
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                  Options.compression_opts.level: 32767
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:               Options.compression_opts.strategy: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                  Options.compression_opts.enabled: false
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                   Options.target_file_size_base: 67108864
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:             Options.target_file_size_multiplier: 1
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                        Options.arena_block_size: 1048576
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                Options.disable_auto_compactions: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                   Options.inplace_update_support: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:   Options.memtable_huge_page_size: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                           Options.bloom_locality: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                    Options.max_successive_merges: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                Options.paranoid_file_checks: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                Options.force_consistency_checks: 1
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                Options.report_bg_io_stats: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                               Options.ttl: 2592000
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                       Options.enable_blob_files: false
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                           Options.min_blob_size: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                          Options.blob_file_size: 268435456
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                Options.blob_file_starting_level: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:           Options.merge_operator: None
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:        Options.compaction_filter: None
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:        Options.compaction_filter_factory: None
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:  Options.sst_partitioner_factory: None
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:            Options.table_factory: BlockBasedTable
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x560c9a04db80)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x560c992729b0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 536870912
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:        Options.write_buffer_size: 16777216
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:  Options.max_write_buffer_number: 64
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:          Options.compression: LZ4
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                  Options.bottommost_compression: Disabled
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:       Options.prefix_extractor: nullptr
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:             Options.num_levels: 7
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:            Options.compression_opts.window_bits: -14
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                  Options.compression_opts.level: 32767
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:               Options.compression_opts.strategy: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                  Options.compression_opts.enabled: false
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                   Options.target_file_size_base: 67108864
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:             Options.target_file_size_multiplier: 1
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                        Options.arena_block_size: 1048576
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                Options.disable_auto_compactions: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                   Options.inplace_update_support: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:   Options.memtable_huge_page_size: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                           Options.bloom_locality: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                    Options.max_successive_merges: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                Options.paranoid_file_checks: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                Options.force_consistency_checks: 1
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                Options.report_bg_io_stats: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                               Options.ttl: 2592000
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                       Options.enable_blob_files: false
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                           Options.min_blob_size: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                          Options.blob_file_size: 268435456
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                Options.blob_file_starting_level: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb: [db/column_family.cc:635] 	(skipping printing options)
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb: [db/column_family.cc:635] 	(skipping printing options)
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: ade99e4d-7871-44b8-bb7f-d40708f63a2b
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760002491315992, "job": 1, "event": "recovery_started", "wal_files": [31]}
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760002491316139, "job": 1, "event": "recovery_finished"}
Oct  9 09:34:51 compute-1 ceph-osd[7514]: bluestore(/var/lib/ceph/osd/ceph-0) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: bluestore(/var/lib/ceph/osd/ceph-0) _open_super_meta old nid_max 1025
Oct  9 09:34:51 compute-1 ceph-osd[7514]: bluestore(/var/lib/ceph/osd/ceph-0) _open_super_meta old blobid_max 10240
Oct  9 09:34:51 compute-1 ceph-osd[7514]: bluestore(/var/lib/ceph/osd/ceph-0) _open_super_meta ondisk_format 4 compat_ondisk_format 3
Oct  9 09:34:51 compute-1 ceph-osd[7514]: bluestore(/var/lib/ceph/osd/ceph-0) _open_super_meta min_alloc_size 0x1000
Oct  9 09:34:51 compute-1 ceph-osd[7514]: freelist init
Oct  9 09:34:51 compute-1 ceph-osd[7514]: freelist _read_cfg
Oct  9 09:34:51 compute-1 ceph-osd[7514]: bluestore(/var/lib/ceph/osd/ceph-0) _init_alloc loaded 20 GiB in 2 extents, allocator type hybrid, capacity 0x4ffc00000, block size 0x1000, free 0x4ffbfd000, fragmentation 1.9e-07
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb: [db/db_impl/db_impl.cc:496] Shutdown: canceling all background work
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb: [db/db_impl/db_impl.cc:704] Shutdown complete
Oct  9 09:34:51 compute-1 ceph-osd[7514]: bluefs umount
Oct  9 09:34:51 compute-1 ceph-osd[7514]: bdev(0x560c9a079000 /var/lib/ceph/osd/ceph-0/block) close
Oct  9 09:34:51 compute-1 podman[8074]: 2025-10-09 09:34:51.33540254 +0000 UTC m=+0.029316260 container create fdfcc004bc91a1ffeeff9c690caaad9faf18bbd9b2cc832e8cbfa470160026ed (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=youthful_kirch, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  9 09:34:51 compute-1 systemd[1]: Started libpod-conmon-fdfcc004bc91a1ffeeff9c690caaad9faf18bbd9b2cc832e8cbfa470160026ed.scope.
Oct  9 09:34:51 compute-1 systemd[1]: Started libcrun container.
Oct  9 09:34:51 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4ec0050036ae73e5b5c58f108c45f80369655d5abe1ad663e9569be2a03b6779/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  9 09:34:51 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4ec0050036ae73e5b5c58f108c45f80369655d5abe1ad663e9569be2a03b6779/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  9 09:34:51 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4ec0050036ae73e5b5c58f108c45f80369655d5abe1ad663e9569be2a03b6779/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  9 09:34:51 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4ec0050036ae73e5b5c58f108c45f80369655d5abe1ad663e9569be2a03b6779/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  9 09:34:51 compute-1 podman[8074]: 2025-10-09 09:34:51.392469897 +0000 UTC m=+0.086383627 container init fdfcc004bc91a1ffeeff9c690caaad9faf18bbd9b2cc832e8cbfa470160026ed (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=youthful_kirch, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Oct  9 09:34:51 compute-1 podman[8074]: 2025-10-09 09:34:51.397578829 +0000 UTC m=+0.091492548 container start fdfcc004bc91a1ffeeff9c690caaad9faf18bbd9b2cc832e8cbfa470160026ed (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=youthful_kirch, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Oct  9 09:34:51 compute-1 podman[8074]: 2025-10-09 09:34:51.398565489 +0000 UTC m=+0.092479208 container attach fdfcc004bc91a1ffeeff9c690caaad9faf18bbd9b2cc832e8cbfa470160026ed (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=youthful_kirch, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=squid)
Oct  9 09:34:51 compute-1 podman[8074]: 2025-10-09 09:34:51.324333742 +0000 UTC m=+0.018247482 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  9 09:34:51 compute-1 ceph-osd[7514]: bdev(0x560c9a079000 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Oct  9 09:34:51 compute-1 ceph-osd[7514]: bdev(0x560c9a079000 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Oct  9 09:34:51 compute-1 ceph-osd[7514]: bdev(0x560c9a079000 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Oct  9 09:34:51 compute-1 ceph-osd[7514]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-0/block size 20 GiB
Oct  9 09:34:51 compute-1 ceph-osd[7514]: bluefs mount
Oct  9 09:34:51 compute-1 ceph-osd[7514]: bluefs _init_alloc shared, id 1, capacity 0x4ffc00000, block size 0x10000
Oct  9 09:34:51 compute-1 ceph-osd[7514]: bluefs mount final locked allocations 0 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Oct  9 09:34:51 compute-1 ceph-osd[7514]: bluefs mount final locked allocations 1 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Oct  9 09:34:51 compute-1 ceph-osd[7514]: bluefs mount final locked allocations 2 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Oct  9 09:34:51 compute-1 ceph-osd[7514]: bluefs mount final locked allocations 3 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Oct  9 09:34:51 compute-1 ceph-osd[7514]: bluefs mount final locked allocations 4 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Oct  9 09:34:51 compute-1 ceph-osd[7514]: bluefs mount shared_bdev_used = 4718592
Oct  9 09:34:51 compute-1 ceph-osd[7514]: bluestore(/var/lib/ceph/osd/ceph-0) _prepare_db_environment set db_paths to db,20397110067 db.slow,20397110067
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb: RocksDB version: 7.9.2
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Git sha 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Compile date 2025-07-17 03:12:14
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb: DB SUMMARY
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb: DB Session ID:  UVT2D1S8UT2VTLEPFV4S
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb: CURRENT file:  CURRENT
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb: IDENTITY file:  IDENTITY
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5097 ; 
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                         Options.error_if_exists: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                       Options.create_if_missing: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                         Options.paranoid_checks: 1
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:             Options.flush_verify_memtable_count: 1
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                                     Options.env: 0x560c9a1e82a0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                                      Options.fs: LegacyFileSystem
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                                Options.info_log: 0x560c9a04d920
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                Options.max_file_opening_threads: 16
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                              Options.statistics: (nil)
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                               Options.use_fsync: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                       Options.max_log_file_size: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                   Options.log_file_time_to_roll: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                       Options.keep_log_file_num: 1000
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                    Options.recycle_log_file_num: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                         Options.allow_fallocate: 1
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                        Options.allow_mmap_reads: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                       Options.allow_mmap_writes: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                        Options.use_direct_reads: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:          Options.create_missing_column_families: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                              Options.db_log_dir: 
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                                 Options.wal_dir: db.wal
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                Options.table_cache_numshardbits: 6
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                         Options.WAL_ttl_seconds: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                       Options.WAL_size_limit_MB: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:             Options.manifest_preallocation_size: 4194304
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                     Options.is_fd_close_on_exec: 1
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                   Options.advise_random_on_open: 1
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                    Options.db_write_buffer_size: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                    Options.write_buffer_manager: 0x560c9a144a00
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:         Options.access_hint_on_compaction_start: 1
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                      Options.use_adaptive_mutex: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                            Options.rate_limiter: (nil)
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                       Options.wal_recovery_mode: 2
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                  Options.enable_thread_tracking: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                  Options.enable_pipelined_write: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                  Options.unordered_write: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:             Options.write_thread_max_yield_usec: 100
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                               Options.row_cache: None
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                              Options.wal_filter: None
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:             Options.avoid_flush_during_recovery: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:             Options.allow_ingest_behind: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:             Options.two_write_queues: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:             Options.manual_wal_flush: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:             Options.wal_compression: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:             Options.atomic_flush: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                 Options.persist_stats_to_disk: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                 Options.write_dbid_to_manifest: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                 Options.log_readahead_size: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                 Options.best_efforts_recovery: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:             Options.allow_data_in_errors: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:             Options.db_host_id: __hostname__
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:             Options.enforce_single_del_contracts: true
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:             Options.max_background_jobs: 4
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:             Options.max_background_compactions: -1
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:             Options.max_subcompactions: 1
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:           Options.writable_file_max_buffer_size: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:             Options.delayed_write_rate : 16777216
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:             Options.max_total_wal_size: 1073741824
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                   Options.stats_dump_period_sec: 600
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                 Options.stats_persist_period_sec: 600
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                          Options.max_open_files: -1
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                          Options.bytes_per_sync: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                      Options.wal_bytes_per_sync: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                   Options.strict_bytes_per_sync: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:       Options.compaction_readahead_size: 2097152
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                  Options.max_background_flushes: -1
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Compression algorithms supported:
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb: #011kZSTD supported: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb: #011kXpressCompression supported: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb: #011kBZip2Compression supported: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb: #011kZSTDNotFinalCompression supported: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb: #011kLZ4Compression supported: 1
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb: #011kZlibCompression supported: 1
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb: #011kLZ4HCCompression supported: 1
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb: #011kSnappyCompression supported: 1
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Fast CRC32 supported: Supported on x86
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb: DMutex implementation: pthread_mutex_t
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:        Options.compaction_filter: None
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:        Options.compaction_filter_factory: None
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:  Options.sst_partitioner_factory: None
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:            Options.table_factory: BlockBasedTable
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x560c9a04d680)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x560c99273350#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:        Options.write_buffer_size: 16777216
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:  Options.max_write_buffer_number: 64
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:          Options.compression: LZ4
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                  Options.bottommost_compression: Disabled
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:       Options.prefix_extractor: nullptr
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:             Options.num_levels: 7
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:            Options.compression_opts.window_bits: -14
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                  Options.compression_opts.level: 32767
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:               Options.compression_opts.strategy: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                  Options.compression_opts.enabled: false
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                   Options.target_file_size_base: 67108864
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:             Options.target_file_size_multiplier: 1
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                        Options.arena_block_size: 1048576
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                Options.disable_auto_compactions: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                   Options.inplace_update_support: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:   Options.memtable_huge_page_size: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                           Options.bloom_locality: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                    Options.max_successive_merges: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                Options.paranoid_file_checks: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                Options.force_consistency_checks: 1
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                Options.report_bg_io_stats: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                               Options.ttl: 2592000
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                       Options.enable_blob_files: false
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                           Options.min_blob_size: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                          Options.blob_file_size: 268435456
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                Options.blob_file_starting_level: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:           Options.merge_operator: None
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:        Options.compaction_filter: None
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:        Options.compaction_filter_factory: None
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:  Options.sst_partitioner_factory: None
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:            Options.table_factory: BlockBasedTable
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x560c9a04d680)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x560c99273350#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:        Options.write_buffer_size: 16777216
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:  Options.max_write_buffer_number: 64
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:          Options.compression: LZ4
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                  Options.bottommost_compression: Disabled
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:       Options.prefix_extractor: nullptr
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:             Options.num_levels: 7
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:            Options.compression_opts.window_bits: -14
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                  Options.compression_opts.level: 32767
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:               Options.compression_opts.strategy: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                  Options.compression_opts.enabled: false
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                   Options.target_file_size_base: 67108864
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:             Options.target_file_size_multiplier: 1
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                        Options.arena_block_size: 1048576
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                Options.disable_auto_compactions: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                   Options.inplace_update_support: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:   Options.memtable_huge_page_size: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                           Options.bloom_locality: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                    Options.max_successive_merges: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                Options.paranoid_file_checks: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                Options.force_consistency_checks: 1
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                Options.report_bg_io_stats: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                               Options.ttl: 2592000
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                       Options.enable_blob_files: false
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                           Options.min_blob_size: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                          Options.blob_file_size: 268435456
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                Options.blob_file_starting_level: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:           Options.merge_operator: None
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:        Options.compaction_filter: None
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:        Options.compaction_filter_factory: None
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:  Options.sst_partitioner_factory: None
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:            Options.table_factory: BlockBasedTable
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x560c9a04d680)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x560c99273350#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:        Options.write_buffer_size: 16777216
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:  Options.max_write_buffer_number: 64
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:          Options.compression: LZ4
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                  Options.bottommost_compression: Disabled
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:       Options.prefix_extractor: nullptr
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:             Options.num_levels: 7
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:            Options.compression_opts.window_bits: -14
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                  Options.compression_opts.level: 32767
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:               Options.compression_opts.strategy: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                  Options.compression_opts.enabled: false
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                   Options.target_file_size_base: 67108864
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:             Options.target_file_size_multiplier: 1
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                        Options.arena_block_size: 1048576
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                Options.disable_auto_compactions: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                   Options.inplace_update_support: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:   Options.memtable_huge_page_size: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                           Options.bloom_locality: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                    Options.max_successive_merges: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                Options.paranoid_file_checks: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                Options.force_consistency_checks: 1
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                Options.report_bg_io_stats: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                               Options.ttl: 2592000
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                       Options.enable_blob_files: false
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                           Options.min_blob_size: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                          Options.blob_file_size: 268435456
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                Options.blob_file_starting_level: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:           Options.merge_operator: None
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:        Options.compaction_filter: None
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:        Options.compaction_filter_factory: None
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:  Options.sst_partitioner_factory: None
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:            Options.table_factory: BlockBasedTable
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x560c9a04d680)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x560c99273350#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:        Options.write_buffer_size: 16777216
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:  Options.max_write_buffer_number: 64
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:          Options.compression: LZ4
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                  Options.bottommost_compression: Disabled
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:       Options.prefix_extractor: nullptr
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:             Options.num_levels: 7
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:            Options.compression_opts.window_bits: -14
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                  Options.compression_opts.level: 32767
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:               Options.compression_opts.strategy: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                  Options.compression_opts.enabled: false
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                   Options.target_file_size_base: 67108864
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:             Options.target_file_size_multiplier: 1
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                        Options.arena_block_size: 1048576
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                Options.disable_auto_compactions: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                   Options.inplace_update_support: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:   Options.memtable_huge_page_size: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                           Options.bloom_locality: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                    Options.max_successive_merges: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                Options.paranoid_file_checks: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                Options.force_consistency_checks: 1
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                Options.report_bg_io_stats: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                               Options.ttl: 2592000
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                       Options.enable_blob_files: false
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                           Options.min_blob_size: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                          Options.blob_file_size: 268435456
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                Options.blob_file_starting_level: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:           Options.merge_operator: None
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:        Options.compaction_filter: None
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:        Options.compaction_filter_factory: None
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:  Options.sst_partitioner_factory: None
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:            Options.table_factory: BlockBasedTable
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x560c9a04d680)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x560c99273350#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:        Options.write_buffer_size: 16777216
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:  Options.max_write_buffer_number: 64
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:          Options.compression: LZ4
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                  Options.bottommost_compression: Disabled
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:       Options.prefix_extractor: nullptr
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:             Options.num_levels: 7
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:            Options.compression_opts.window_bits: -14
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                  Options.compression_opts.level: 32767
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:               Options.compression_opts.strategy: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                  Options.compression_opts.enabled: false
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                   Options.target_file_size_base: 67108864
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:             Options.target_file_size_multiplier: 1
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                        Options.arena_block_size: 1048576
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                Options.disable_auto_compactions: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                   Options.inplace_update_support: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:   Options.memtable_huge_page_size: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                           Options.bloom_locality: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                    Options.max_successive_merges: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                Options.paranoid_file_checks: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                Options.force_consistency_checks: 1
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                Options.report_bg_io_stats: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                               Options.ttl: 2592000
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                       Options.enable_blob_files: false
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                           Options.min_blob_size: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                          Options.blob_file_size: 268435456
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                Options.blob_file_starting_level: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:           Options.merge_operator: None
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:        Options.compaction_filter: None
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:        Options.compaction_filter_factory: None
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:  Options.sst_partitioner_factory: None
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:            Options.table_factory: BlockBasedTable
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x560c9a04d680)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x560c99273350#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:        Options.write_buffer_size: 16777216
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:  Options.max_write_buffer_number: 64
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:          Options.compression: LZ4
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                  Options.bottommost_compression: Disabled
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:       Options.prefix_extractor: nullptr
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:             Options.num_levels: 7
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:            Options.compression_opts.window_bits: -14
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                  Options.compression_opts.level: 32767
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:               Options.compression_opts.strategy: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                  Options.compression_opts.enabled: false
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                   Options.target_file_size_base: 67108864
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:             Options.target_file_size_multiplier: 1
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                        Options.arena_block_size: 1048576
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                Options.disable_auto_compactions: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                   Options.inplace_update_support: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:   Options.memtable_huge_page_size: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                           Options.bloom_locality: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                    Options.max_successive_merges: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                Options.paranoid_file_checks: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                Options.force_consistency_checks: 1
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                Options.report_bg_io_stats: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                               Options.ttl: 2592000
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                       Options.enable_blob_files: false
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                           Options.min_blob_size: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                          Options.blob_file_size: 268435456
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                Options.blob_file_starting_level: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:           Options.merge_operator: None
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:        Options.compaction_filter: None
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:        Options.compaction_filter_factory: None
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:  Options.sst_partitioner_factory: None
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:            Options.table_factory: BlockBasedTable
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x560c9a04d680)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x560c99273350#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:        Options.write_buffer_size: 16777216
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:  Options.max_write_buffer_number: 64
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:          Options.compression: LZ4
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                  Options.bottommost_compression: Disabled
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:       Options.prefix_extractor: nullptr
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:             Options.num_levels: 7
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:            Options.compression_opts.window_bits: -14
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                  Options.compression_opts.level: 32767
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:               Options.compression_opts.strategy: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                  Options.compression_opts.enabled: false
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                   Options.target_file_size_base: 67108864
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:             Options.target_file_size_multiplier: 1
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                        Options.arena_block_size: 1048576
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                Options.disable_auto_compactions: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                   Options.inplace_update_support: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:   Options.memtable_huge_page_size: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                           Options.bloom_locality: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                    Options.max_successive_merges: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                Options.paranoid_file_checks: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                Options.force_consistency_checks: 1
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                Options.report_bg_io_stats: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                               Options.ttl: 2592000
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                       Options.enable_blob_files: false
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                           Options.min_blob_size: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                          Options.blob_file_size: 268435456
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                Options.blob_file_starting_level: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:           Options.merge_operator: None
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:        Options.compaction_filter: None
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:        Options.compaction_filter_factory: None
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:  Options.sst_partitioner_factory: None
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:            Options.table_factory: BlockBasedTable
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x560c9a04dac0)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x560c992729b0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:        Options.write_buffer_size: 16777216
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:  Options.max_write_buffer_number: 64
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:          Options.compression: LZ4
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                  Options.bottommost_compression: Disabled
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:       Options.prefix_extractor: nullptr
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:             Options.num_levels: 7
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:            Options.compression_opts.window_bits: -14
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                  Options.compression_opts.level: 32767
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:               Options.compression_opts.strategy: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                  Options.compression_opts.enabled: false
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                   Options.target_file_size_base: 67108864
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:             Options.target_file_size_multiplier: 1
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                        Options.arena_block_size: 1048576
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                Options.disable_auto_compactions: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                   Options.inplace_update_support: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:   Options.memtable_huge_page_size: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                           Options.bloom_locality: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                    Options.max_successive_merges: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                Options.paranoid_file_checks: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                Options.force_consistency_checks: 1
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                Options.report_bg_io_stats: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                               Options.ttl: 2592000
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                       Options.enable_blob_files: false
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                           Options.min_blob_size: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                          Options.blob_file_size: 268435456
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                Options.blob_file_starting_level: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:           Options.merge_operator: None
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:        Options.compaction_filter: None
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:        Options.compaction_filter_factory: None
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:  Options.sst_partitioner_factory: None
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:            Options.table_factory: BlockBasedTable
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x560c9a04dac0)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x560c992729b0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:        Options.write_buffer_size: 16777216
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:  Options.max_write_buffer_number: 64
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:          Options.compression: LZ4
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                  Options.bottommost_compression: Disabled
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:       Options.prefix_extractor: nullptr
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:             Options.num_levels: 7
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:            Options.compression_opts.window_bits: -14
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                  Options.compression_opts.level: 32767
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:               Options.compression_opts.strategy: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                  Options.compression_opts.enabled: false
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                   Options.target_file_size_base: 67108864
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:             Options.target_file_size_multiplier: 1
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                        Options.arena_block_size: 1048576
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                Options.disable_auto_compactions: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                   Options.inplace_update_support: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:   Options.memtable_huge_page_size: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                           Options.bloom_locality: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                    Options.max_successive_merges: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                Options.paranoid_file_checks: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                Options.force_consistency_checks: 1
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                Options.report_bg_io_stats: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                               Options.ttl: 2592000
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                       Options.enable_blob_files: false
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                           Options.min_blob_size: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                          Options.blob_file_size: 268435456
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                Options.blob_file_starting_level: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:           Options.merge_operator: None
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:        Options.compaction_filter: None
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:        Options.compaction_filter_factory: None
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:  Options.sst_partitioner_factory: None
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:            Options.table_factory: BlockBasedTable
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x560c9a04dac0)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x560c992729b0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:        Options.write_buffer_size: 16777216
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:  Options.max_write_buffer_number: 64
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:          Options.compression: LZ4
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                  Options.bottommost_compression: Disabled
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:       Options.prefix_extractor: nullptr
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:             Options.num_levels: 7
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:            Options.compression_opts.window_bits: -14
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                  Options.compression_opts.level: 32767
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:               Options.compression_opts.strategy: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                  Options.compression_opts.enabled: false
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                   Options.target_file_size_base: 67108864
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:             Options.target_file_size_multiplier: 1
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                        Options.arena_block_size: 1048576
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                Options.disable_auto_compactions: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                   Options.inplace_update_support: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:   Options.memtable_huge_page_size: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                           Options.bloom_locality: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                    Options.max_successive_merges: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                Options.paranoid_file_checks: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                Options.force_consistency_checks: 1
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                Options.report_bg_io_stats: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                               Options.ttl: 2592000
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                       Options.enable_blob_files: false
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                           Options.min_blob_size: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                          Options.blob_file_size: 268435456
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb:                Options.blob_file_starting_level: 0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb: [db/column_family.cc:635] (skipping printing options)
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb: [db/column_family.cc:635] (skipping printing options)
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: ade99e4d-7871-44b8-bb7f-d40708f63a2b
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760002491585542, "job": 1, "event": "recovery_started", "wal_files": [31]}
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760002491591229, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 35, "file_size": 1272, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 13, "largest_seqno": 21, "table_properties": {"data_size": 128, "index_size": 27, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 87, "raw_average_key_size": 17, "raw_value_size": 82, "raw_average_value_size": 16, "num_data_blocks": 1, "num_entries": 5, "num_filter_entries": 5, "num_deletions": 0, "num_merge_operands": 2, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": ".T:int64_array.b:bitwise_xor", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760002491, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "ade99e4d-7871-44b8-bb7f-d40708f63a2b", "db_session_id": "UVT2D1S8UT2VTLEPFV4S", "orig_file_number": 35, "seqno_to_time_mapping": "N/A"}}
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760002491592329, "cf_name": "p-0", "job": 1, "event": "table_file_creation", "file_number": 36, "file_size": 1599, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 14, "largest_seqno": 15, "table_properties": {"data_size": 473, "index_size": 39, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 72, "raw_average_key_size": 36, "raw_value_size": 571, "raw_average_value_size": 285, "num_data_blocks": 1, "num_entries": 2, "num_filter_entries": 2, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "p-0", "column_family_id": 4, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760002491, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "ade99e4d-7871-44b8-bb7f-d40708f63a2b", "db_session_id": "UVT2D1S8UT2VTLEPFV4S", "orig_file_number": 36, "seqno_to_time_mapping": "N/A"}}
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760002491593379, "cf_name": "O-2", "job": 1, "event": "table_file_creation", "file_number": 37, "file_size": 1275, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 16, "largest_seqno": 16, "table_properties": {"data_size": 121, "index_size": 64, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 55, "raw_average_key_size": 55, "raw_value_size": 50, "raw_average_value_size": 50, "num_data_blocks": 1, "num_entries": 1, "num_filter_entries": 1, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "O-2", "column_family_id": 9, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760002491, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "ade99e4d-7871-44b8-bb7f-d40708f63a2b", "db_session_id": "UVT2D1S8UT2VTLEPFV4S", "orig_file_number": 37, "seqno_to_time_mapping": "N/A"}}
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760002491593827, "job": 1, "event": "recovery_finished"}
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb: [db/version_set.cc:5047] Creating manifest 40
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x560c9a214000
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb: DB pointer 0x560c9a1f4000
Oct  9 09:34:51 compute-1 ceph-osd[7514]: bluestore(/var/lib/ceph/osd/ceph-0) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
Oct  9 09:34:51 compute-1 ceph-osd[7514]: bluestore(/var/lib/ceph/osd/ceph-0) _upgrade_super from 4, latest 4
Oct  9 09:34:51 compute-1 ceph-osd[7514]: bluestore(/var/lib/ceph/osd/ceph-0) _upgrade_super done
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct  9 09:34:51 compute-1 ceph-osd[7514]: rocksdb: [db/db_impl/db_impl.cc:1111]
** DB Stats **
Uptime(secs): 0.0 total, 0.0 interval
Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s
Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s
Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent

** Compaction Stats [default] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0

** Compaction Stats [default] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 0.0 total, 0.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.03 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.03 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x560c99273350#2 capacity: 460.80 MB usage: 0.94 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 2.5e-05 secs_since: 0
Block cache entry stats(count,size,portion): DataBlock(2,0.72 KB,0.000152323%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [default] **

** Compaction Stats [m-0] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [m-0] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 0.0 total, 0.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x560c99273350#2 capacity: 460.80 MB usage: 0.94 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 2.5e-05 secs_since: 0
Block cache entry stats(count,size,portion): DataBlock(2,0.72 KB,0.000152323%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [m-0] **

** Compaction Stats [m-1] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [m-1] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 0.0 total, 0.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Oct  9 09:34:51 compute-1 ceph-osd[7514]: <cls> /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos9/DIST/centos9/MACHINE_SIZE/gigantic/release/19.2.3/rpm/el9/BUILD/ceph-19.2.3/src/cls/cephfs/cls_cephfs.cc:201: loading cephfs
Oct  9 09:34:51 compute-1 ceph-osd[7514]: <cls> /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos9/DIST/centos9/MACHINE_SIZE/gigantic/release/19.2.3/rpm/el9/BUILD/ceph-19.2.3/src/cls/hello/cls_hello.cc:316: loading cls_hello
Oct  9 09:34:51 compute-1 ceph-osd[7514]: _get_class not permitted to load lua
Oct  9 09:34:51 compute-1 ceph-osd[7514]: _get_class not permitted to load sdk
Oct  9 09:34:51 compute-1 ceph-osd[7514]: osd.0 0 crush map has features 288232575208783872, adjusting msgr requires for clients
Oct  9 09:34:51 compute-1 ceph-osd[7514]: osd.0 0 crush map has features 288232575208783872 was 8705, adjusting msgr requires for mons
Oct  9 09:34:51 compute-1 ceph-osd[7514]: osd.0 0 crush map has features 288232575208783872, adjusting msgr requires for osds
Oct  9 09:34:51 compute-1 ceph-osd[7514]: osd.0 0 check_osdmap_features enabling on-disk ERASURE CODES compat feature
Oct  9 09:34:51 compute-1 ceph-osd[7514]: osd.0 0 load_pgs
Oct  9 09:34:51 compute-1 ceph-osd[7514]: osd.0 0 load_pgs opened 0 pgs
Oct  9 09:34:51 compute-1 ceph-osd[7514]: osd.0 0 log_to_monitors true
Oct  9 09:34:51 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-osd-0[7510]: 2025-10-09T09:34:51.608+0000 7f027f093740 -1 osd.0 0 log_to_monitors true
Oct  9 09:34:51 compute-1 youthful_kirch[8267]: [
Oct  9 09:34:51 compute-1 youthful_kirch[8267]:    {
Oct  9 09:34:51 compute-1 youthful_kirch[8267]:        "available": false,
Oct  9 09:34:51 compute-1 youthful_kirch[8267]:        "being_replaced": false,
Oct  9 09:34:51 compute-1 youthful_kirch[8267]:        "ceph_device_lvm": false,
Oct  9 09:34:51 compute-1 youthful_kirch[8267]:        "device_id": "QEMU_DVD-ROM_QM00001",
Oct  9 09:34:51 compute-1 youthful_kirch[8267]:        "lsm_data": {},
Oct  9 09:34:51 compute-1 youthful_kirch[8267]:        "lvs": [],
Oct  9 09:34:51 compute-1 youthful_kirch[8267]:        "path": "/dev/sr0",
Oct  9 09:34:51 compute-1 youthful_kirch[8267]:        "rejected_reasons": [
Oct  9 09:34:51 compute-1 youthful_kirch[8267]:            "Insufficient space (<5GB)",
Oct  9 09:34:51 compute-1 youthful_kirch[8267]:            "Has a FileSystem"
Oct  9 09:34:51 compute-1 youthful_kirch[8267]:        ],
Oct  9 09:34:51 compute-1 youthful_kirch[8267]:        "sys_api": {
Oct  9 09:34:51 compute-1 youthful_kirch[8267]:            "actuators": null,
Oct  9 09:34:51 compute-1 youthful_kirch[8267]:            "device_nodes": [
Oct  9 09:34:51 compute-1 youthful_kirch[8267]:                "sr0"
Oct  9 09:34:51 compute-1 youthful_kirch[8267]:            ],
Oct  9 09:34:51 compute-1 youthful_kirch[8267]:            "devname": "sr0",
Oct  9 09:34:51 compute-1 youthful_kirch[8267]:            "human_readable_size": "474.00 KB",
Oct  9 09:34:51 compute-1 youthful_kirch[8267]:            "id_bus": "ata",
Oct  9 09:34:51 compute-1 youthful_kirch[8267]:            "model": "QEMU DVD-ROM",
Oct  9 09:34:51 compute-1 youthful_kirch[8267]:            "nr_requests": "64",
Oct  9 09:34:51 compute-1 youthful_kirch[8267]:            "parent": "/dev/sr0",
Oct  9 09:34:51 compute-1 youthful_kirch[8267]:            "partitions": {},
Oct  9 09:34:51 compute-1 youthful_kirch[8267]:            "path": "/dev/sr0",
Oct  9 09:34:51 compute-1 youthful_kirch[8267]:            "removable": "1",
Oct  9 09:34:51 compute-1 youthful_kirch[8267]:            "rev": "2.5+",
Oct  9 09:34:51 compute-1 youthful_kirch[8267]:            "ro": "0",
Oct  9 09:34:51 compute-1 youthful_kirch[8267]:            "rotational": "0",
Oct  9 09:34:51 compute-1 youthful_kirch[8267]:            "sas_address": "",
Oct  9 09:34:51 compute-1 youthful_kirch[8267]:            "sas_device_handle": "",
Oct  9 09:34:51 compute-1 youthful_kirch[8267]:            "scheduler_mode": "mq-deadline",
Oct  9 09:34:51 compute-1 youthful_kirch[8267]:            "sectors": 0,
Oct  9 09:34:51 compute-1 youthful_kirch[8267]:            "sectorsize": "2048",
Oct  9 09:34:51 compute-1 youthful_kirch[8267]:            "size": 485376.0,
Oct  9 09:34:51 compute-1 youthful_kirch[8267]:            "support_discard": "2048",
Oct  9 09:34:51 compute-1 youthful_kirch[8267]:            "type": "disk",
Oct  9 09:34:51 compute-1 youthful_kirch[8267]:            "vendor": "QEMU"
Oct  9 09:34:51 compute-1 youthful_kirch[8267]:        }
Oct  9 09:34:51 compute-1 youthful_kirch[8267]:    }
Oct  9 09:34:51 compute-1 youthful_kirch[8267]: ]
Oct  9 09:34:51 compute-1 systemd[1]: libpod-fdfcc004bc91a1ffeeff9c690caaad9faf18bbd9b2cc832e8cbfa470160026ed.scope: Deactivated successfully.
Oct  9 09:34:51 compute-1 conmon[8267]: conmon fdfcc004bc91a1ffeeff <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-fdfcc004bc91a1ffeeff9c690caaad9faf18bbd9b2cc832e8cbfa470160026ed.scope/container/memory.events
Oct  9 09:34:51 compute-1 podman[8074]: 2025-10-09 09:34:51.827497454 +0000 UTC m=+0.521411175 container died fdfcc004bc91a1ffeeff9c690caaad9faf18bbd9b2cc832e8cbfa470160026ed (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=youthful_kirch, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  9 09:34:51 compute-1 systemd[1]: var-lib-containers-storage-overlay-4ec0050036ae73e5b5c58f108c45f80369655d5abe1ad663e9569be2a03b6779-merged.mount: Deactivated successfully.
Oct  9 09:34:51 compute-1 podman[8074]: 2025-10-09 09:34:51.850446399 +0000 UTC m=+0.544360120 container remove fdfcc004bc91a1ffeeff9c690caaad9faf18bbd9b2cc832e8cbfa470160026ed (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=youthful_kirch, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=squid, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  9 09:34:51 compute-1 systemd[1]: libpod-conmon-fdfcc004bc91a1ffeeff9c690caaad9faf18bbd9b2cc832e8cbfa470160026ed.scope: Deactivated successfully.
Oct  9 09:34:52 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : purged_snaps scrub starts
Oct  9 09:34:52 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : purged_snaps scrub ok
Oct  9 09:34:53 compute-1 ceph-osd[7514]: osd.0 0 done with init, starting boot process
Oct  9 09:34:53 compute-1 ceph-osd[7514]: osd.0 0 start_boot
Oct  9 09:34:53 compute-1 ceph-osd[7514]: osd.0 0 maybe_override_options_for_qos osd_max_backfills set to 1
Oct  9 09:34:53 compute-1 ceph-osd[7514]: osd.0 0 maybe_override_options_for_qos osd_recovery_max_active set to 0
Oct  9 09:34:53 compute-1 ceph-osd[7514]: osd.0 0 maybe_override_options_for_qos osd_recovery_max_active_hdd set to 3
Oct  9 09:34:53 compute-1 ceph-osd[7514]: osd.0 0 maybe_override_options_for_qos osd_recovery_max_active_ssd set to 10
Oct  9 09:34:53 compute-1 ceph-osd[7514]: osd.0 0  bench count 12288000 bsize 4 KiB
Oct  9 09:34:54 compute-1 ceph-osd[7514]: osd.0 0 maybe_override_max_osd_capacity_for_qos osd bench result - bandwidth (MiB/sec): 44.690 iops: 11440.698 elapsed_sec: 0.262
Oct  9 09:34:54 compute-1 ceph-osd[7514]: log_channel(cluster) log [WRN] : OSD bench result of 11440.697696 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.0. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Oct  9 09:34:54 compute-1 ceph-osd[7514]: osd.0 0 waiting for initial osdmap
Oct  9 09:34:54 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-osd-0[7510]: 2025-10-09T09:34:54.308+0000 7f027b016640 -1 osd.0 0 waiting for initial osdmap
Oct  9 09:34:54 compute-1 ceph-osd[7514]: osd.0 8 crush map has features 288514050185494528, adjusting msgr requires for clients
Oct  9 09:34:54 compute-1 ceph-osd[7514]: osd.0 8 crush map has features 288514050185494528 was 288232575208792577, adjusting msgr requires for mons
Oct  9 09:34:54 compute-1 ceph-osd[7514]: osd.0 8 crush map has features 3314932999778484224, adjusting msgr requires for osds
Oct  9 09:34:54 compute-1 ceph-osd[7514]: osd.0 8 check_osdmap_features require_osd_release unknown -> squid
Oct  9 09:34:54 compute-1 ceph-osd[7514]: osd.0 8 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Oct  9 09:34:54 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-osd-0[7510]: 2025-10-09T09:34:54.334+0000 7f027663e640 -1 osd.0 8 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Oct  9 09:34:54 compute-1 ceph-osd[7514]: osd.0 8 set_numa_affinity not setting numa affinity
Oct  9 09:34:54 compute-1 ceph-osd[7514]: osd.0 8 _collect_metadata loop3:  no unique device id for loop3: fallback method has no model nor serial no unique device path for loop3: no symlink to loop3 in /dev/disk/by-path
Oct  9 09:34:54 compute-1 ceph-osd[7514]: osd.0 9 state: booting -> active
Oct  9 09:34:54 compute-1 ceph-osd[7514]: osd.0 9 crush map has features 288514051259236352, adjusting msgr requires for clients
Oct  9 09:34:54 compute-1 ceph-osd[7514]: osd.0 9 crush map has features 288514051259236352 was 288514050185503233, adjusting msgr requires for mons
Oct  9 09:34:54 compute-1 ceph-osd[7514]: osd.0 9 crush map has features 3314933000852226048, adjusting msgr requires for osds
Oct  9 09:34:54 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 9 pg[1.0( empty local-lis/les=0/0 n=0 ec=9/9 lis/c=0/0 les/c/f=0/0/0 sis=9) [0] r=0 lpr=9 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:34:55 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 10 pg[2.0( empty local-lis/les=0/0 n=0 ec=10/10 lis/c=0/0 les/c/f=0/0/0 sis=10) [0] r=0 lpr=10 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:34:55 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 10 pg[1.0( empty local-lis/les=9/10 n=0 ec=9/9 lis/c=0/0 les/c/f=0/0/0 sis=9) [0] r=0 lpr=9 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:34:56 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 11 pg[2.0( empty local-lis/les=10/11 n=0 ec=10/10 lis/c=0/0 les/c/f=0/0/0 sis=10) [0] r=0 lpr=10 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:35:00 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 15 pg[7.0( empty local-lis/les=0/0 n=0 ec=15/15 lis/c=0/0 les/c/f=0/0/0 sis=15) [0] r=0 lpr=15 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:35:01 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 16 pg[2.0( empty local-lis/les=10/11 n=0 ec=10/10 lis/c=10/10 les/c/f=11/11/0 sis=16 pruub=10.971442223s) [0] r=0 lpr=16 pi=[10,16)/1 crt=0'0 mlcod 0'0 active pruub 20.894191742s@ mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [0], acting_primary 0 -> 0, up_primary 0 -> 0, role 0 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 09:35:01 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 16 pg[7.0( empty local-lis/les=15/16 n=0 ec=15/15 lis/c=0/0 les/c/f=0/0/0 sis=15) [0] r=0 lpr=15 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:35:01 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 16 pg[2.0( empty local-lis/les=10/11 n=0 ec=10/10 lis/c=10/10 les/c/f=11/11/0 sis=16 pruub=10.971442223s) [0] r=0 lpr=16 pi=[10,16)/1 crt=0'0 mlcod 0'0 unknown pruub 20.894191742s@ mbc={}] state<Start>: transitioning to Primary
Oct  9 09:35:02 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 17 pg[2.1f( empty local-lis/les=10/11 n=0 ec=16/10 lis/c=10/10 les/c/f=11/11/0 sis=16) [0] r=0 lpr=16 pi=[10,16)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:35:02 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 17 pg[2.1d( empty local-lis/les=10/11 n=0 ec=16/10 lis/c=10/10 les/c/f=11/11/0 sis=16) [0] r=0 lpr=16 pi=[10,16)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:35:02 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 17 pg[2.1c( empty local-lis/les=10/11 n=0 ec=16/10 lis/c=10/10 les/c/f=11/11/0 sis=16) [0] r=0 lpr=16 pi=[10,16)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:35:02 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 17 pg[2.1e( empty local-lis/les=10/11 n=0 ec=16/10 lis/c=10/10 les/c/f=11/11/0 sis=16) [0] r=0 lpr=16 pi=[10,16)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:35:02 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 17 pg[2.1b( empty local-lis/les=10/11 n=0 ec=16/10 lis/c=10/10 les/c/f=11/11/0 sis=16) [0] r=0 lpr=16 pi=[10,16)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:35:02 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 17 pg[2.a( empty local-lis/les=10/11 n=0 ec=16/10 lis/c=10/10 les/c/f=11/11/0 sis=16) [0] r=0 lpr=16 pi=[10,16)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:35:02 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 17 pg[2.9( empty local-lis/les=10/11 n=0 ec=16/10 lis/c=10/10 les/c/f=11/11/0 sis=16) [0] r=0 lpr=16 pi=[10,16)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:35:02 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 17 pg[2.8( empty local-lis/les=10/11 n=0 ec=16/10 lis/c=10/10 les/c/f=11/11/0 sis=16) [0] r=0 lpr=16 pi=[10,16)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:35:02 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 17 pg[2.7( empty local-lis/les=10/11 n=0 ec=16/10 lis/c=10/10 les/c/f=11/11/0 sis=16) [0] r=0 lpr=16 pi=[10,16)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:35:02 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 17 pg[2.6( empty local-lis/les=10/11 n=0 ec=16/10 lis/c=10/10 les/c/f=11/11/0 sis=16) [0] r=0 lpr=16 pi=[10,16)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:35:02 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 17 pg[2.4( empty local-lis/les=10/11 n=0 ec=16/10 lis/c=10/10 les/c/f=11/11/0 sis=16) [0] r=0 lpr=16 pi=[10,16)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:35:02 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 17 pg[2.2( empty local-lis/les=10/11 n=0 ec=16/10 lis/c=10/10 les/c/f=11/11/0 sis=16) [0] r=0 lpr=16 pi=[10,16)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:35:02 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 17 pg[2.1( empty local-lis/les=10/11 n=0 ec=16/10 lis/c=10/10 les/c/f=11/11/0 sis=16) [0] r=0 lpr=16 pi=[10,16)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:35:02 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 17 pg[2.5( empty local-lis/les=10/11 n=0 ec=16/10 lis/c=10/10 les/c/f=11/11/0 sis=16) [0] r=0 lpr=16 pi=[10,16)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:35:02 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 17 pg[2.3( empty local-lis/les=10/11 n=0 ec=16/10 lis/c=10/10 les/c/f=11/11/0 sis=16) [0] r=0 lpr=16 pi=[10,16)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:35:02 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 17 pg[2.b( empty local-lis/les=10/11 n=0 ec=16/10 lis/c=10/10 les/c/f=11/11/0 sis=16) [0] r=0 lpr=16 pi=[10,16)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:35:02 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 17 pg[2.c( empty local-lis/les=10/11 n=0 ec=16/10 lis/c=10/10 les/c/f=11/11/0 sis=16) [0] r=0 lpr=16 pi=[10,16)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:35:02 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 17 pg[2.d( empty local-lis/les=10/11 n=0 ec=16/10 lis/c=10/10 les/c/f=11/11/0 sis=16) [0] r=0 lpr=16 pi=[10,16)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:35:02 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 17 pg[2.e( empty local-lis/les=10/11 n=0 ec=16/10 lis/c=10/10 les/c/f=11/11/0 sis=16) [0] r=0 lpr=16 pi=[10,16)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:35:02 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 17 pg[2.f( empty local-lis/les=10/11 n=0 ec=16/10 lis/c=10/10 les/c/f=11/11/0 sis=16) [0] r=0 lpr=16 pi=[10,16)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:35:02 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 17 pg[2.10( empty local-lis/les=10/11 n=0 ec=16/10 lis/c=10/10 les/c/f=11/11/0 sis=16) [0] r=0 lpr=16 pi=[10,16)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:35:02 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 17 pg[2.11( empty local-lis/les=10/11 n=0 ec=16/10 lis/c=10/10 les/c/f=11/11/0 sis=16) [0] r=0 lpr=16 pi=[10,16)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:35:02 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 17 pg[2.12( empty local-lis/les=10/11 n=0 ec=16/10 lis/c=10/10 les/c/f=11/11/0 sis=16) [0] r=0 lpr=16 pi=[10,16)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:35:02 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 17 pg[2.13( empty local-lis/les=10/11 n=0 ec=16/10 lis/c=10/10 les/c/f=11/11/0 sis=16) [0] r=0 lpr=16 pi=[10,16)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:35:02 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 17 pg[2.14( empty local-lis/les=10/11 n=0 ec=16/10 lis/c=10/10 les/c/f=11/11/0 sis=16) [0] r=0 lpr=16 pi=[10,16)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:35:02 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 17 pg[2.15( empty local-lis/les=10/11 n=0 ec=16/10 lis/c=10/10 les/c/f=11/11/0 sis=16) [0] r=0 lpr=16 pi=[10,16)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:35:02 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 17 pg[2.16( empty local-lis/les=10/11 n=0 ec=16/10 lis/c=10/10 les/c/f=11/11/0 sis=16) [0] r=0 lpr=16 pi=[10,16)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:35:02 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 17 pg[2.17( empty local-lis/les=10/11 n=0 ec=16/10 lis/c=10/10 les/c/f=11/11/0 sis=16) [0] r=0 lpr=16 pi=[10,16)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:35:02 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 17 pg[2.18( empty local-lis/les=10/11 n=0 ec=16/10 lis/c=10/10 les/c/f=11/11/0 sis=16) [0] r=0 lpr=16 pi=[10,16)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:35:02 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 17 pg[2.19( empty local-lis/les=10/11 n=0 ec=16/10 lis/c=10/10 les/c/f=11/11/0 sis=16) [0] r=0 lpr=16 pi=[10,16)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:35:02 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 17 pg[2.1a( empty local-lis/les=10/11 n=0 ec=16/10 lis/c=10/10 les/c/f=11/11/0 sis=16) [0] r=0 lpr=16 pi=[10,16)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:35:02 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 17 pg[2.1d( empty local-lis/les=16/17 n=0 ec=16/10 lis/c=10/10 les/c/f=11/11/0 sis=16) [0] r=0 lpr=16 pi=[10,16)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:35:02 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 17 pg[2.1f( empty local-lis/les=16/17 n=0 ec=16/10 lis/c=10/10 les/c/f=11/11/0 sis=16) [0] r=0 lpr=16 pi=[10,16)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:35:02 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 17 pg[2.1c( empty local-lis/les=16/17 n=0 ec=16/10 lis/c=10/10 les/c/f=11/11/0 sis=16) [0] r=0 lpr=16 pi=[10,16)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:35:02 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 17 pg[2.a( empty local-lis/les=16/17 n=0 ec=16/10 lis/c=10/10 les/c/f=11/11/0 sis=16) [0] r=0 lpr=16 pi=[10,16)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:35:02 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 17 pg[2.1b( empty local-lis/les=16/17 n=0 ec=16/10 lis/c=10/10 les/c/f=11/11/0 sis=16) [0] r=0 lpr=16 pi=[10,16)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:35:02 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 17 pg[2.7( empty local-lis/les=16/17 n=0 ec=16/10 lis/c=10/10 les/c/f=11/11/0 sis=16) [0] r=0 lpr=16 pi=[10,16)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:35:02 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 17 pg[2.9( empty local-lis/les=16/17 n=0 ec=16/10 lis/c=10/10 les/c/f=11/11/0 sis=16) [0] r=0 lpr=16 pi=[10,16)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:35:02 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 17 pg[2.6( empty local-lis/les=16/17 n=0 ec=16/10 lis/c=10/10 les/c/f=11/11/0 sis=16) [0] r=0 lpr=16 pi=[10,16)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:35:02 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 17 pg[2.4( empty local-lis/les=16/17 n=0 ec=16/10 lis/c=10/10 les/c/f=11/11/0 sis=16) [0] r=0 lpr=16 pi=[10,16)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:35:02 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 17 pg[2.1e( empty local-lis/les=16/17 n=0 ec=16/10 lis/c=10/10 les/c/f=11/11/0 sis=16) [0] r=0 lpr=16 pi=[10,16)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:35:02 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 17 pg[2.5( empty local-lis/les=16/17 n=0 ec=16/10 lis/c=10/10 les/c/f=11/11/0 sis=16) [0] r=0 lpr=16 pi=[10,16)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:35:02 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 17 pg[2.0( empty local-lis/les=16/17 n=0 ec=10/10 lis/c=10/10 les/c/f=11/11/0 sis=16) [0] r=0 lpr=16 pi=[10,16)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:35:02 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 17 pg[2.3( empty local-lis/les=16/17 n=0 ec=16/10 lis/c=10/10 les/c/f=11/11/0 sis=16) [0] r=0 lpr=16 pi=[10,16)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:35:02 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 17 pg[2.2( empty local-lis/les=16/17 n=0 ec=16/10 lis/c=10/10 les/c/f=11/11/0 sis=16) [0] r=0 lpr=16 pi=[10,16)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:35:02 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 17 pg[2.1( empty local-lis/les=16/17 n=0 ec=16/10 lis/c=10/10 les/c/f=11/11/0 sis=16) [0] r=0 lpr=16 pi=[10,16)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:35:02 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 17 pg[2.b( empty local-lis/les=16/17 n=0 ec=16/10 lis/c=10/10 les/c/f=11/11/0 sis=16) [0] r=0 lpr=16 pi=[10,16)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:35:02 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 17 pg[2.8( empty local-lis/les=16/17 n=0 ec=16/10 lis/c=10/10 les/c/f=11/11/0 sis=16) [0] r=0 lpr=16 pi=[10,16)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:35:02 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 17 pg[2.e( empty local-lis/les=16/17 n=0 ec=16/10 lis/c=10/10 les/c/f=11/11/0 sis=16) [0] r=0 lpr=16 pi=[10,16)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:35:02 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 17 pg[2.10( empty local-lis/les=16/17 n=0 ec=16/10 lis/c=10/10 les/c/f=11/11/0 sis=16) [0] r=0 lpr=16 pi=[10,16)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:35:02 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 17 pg[2.11( empty local-lis/les=16/17 n=0 ec=16/10 lis/c=10/10 les/c/f=11/11/0 sis=16) [0] r=0 lpr=16 pi=[10,16)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:35:02 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 17 pg[2.12( empty local-lis/les=16/17 n=0 ec=16/10 lis/c=10/10 les/c/f=11/11/0 sis=16) [0] r=0 lpr=16 pi=[10,16)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:35:02 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 17 pg[2.14( empty local-lis/les=16/17 n=0 ec=16/10 lis/c=10/10 les/c/f=11/11/0 sis=16) [0] r=0 lpr=16 pi=[10,16)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:35:02 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 17 pg[2.13( empty local-lis/les=16/17 n=0 ec=16/10 lis/c=10/10 les/c/f=11/11/0 sis=16) [0] r=0 lpr=16 pi=[10,16)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:35:02 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 17 pg[2.15( empty local-lis/les=16/17 n=0 ec=16/10 lis/c=10/10 les/c/f=11/11/0 sis=16) [0] r=0 lpr=16 pi=[10,16)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:35:02 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 17 pg[2.16( empty local-lis/les=16/17 n=0 ec=16/10 lis/c=10/10 les/c/f=11/11/0 sis=16) [0] r=0 lpr=16 pi=[10,16)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:35:02 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 17 pg[2.17( empty local-lis/les=16/17 n=0 ec=16/10 lis/c=10/10 les/c/f=11/11/0 sis=16) [0] r=0 lpr=16 pi=[10,16)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:35:02 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 17 pg[2.18( empty local-lis/les=16/17 n=0 ec=16/10 lis/c=10/10 les/c/f=11/11/0 sis=16) [0] r=0 lpr=16 pi=[10,16)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:35:02 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 17 pg[2.19( empty local-lis/les=16/17 n=0 ec=16/10 lis/c=10/10 les/c/f=11/11/0 sis=16) [0] r=0 lpr=16 pi=[10,16)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:35:02 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 17 pg[2.d( empty local-lis/les=16/17 n=0 ec=16/10 lis/c=10/10 les/c/f=11/11/0 sis=16) [0] r=0 lpr=16 pi=[10,16)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:35:02 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 17 pg[2.1a( empty local-lis/les=16/17 n=0 ec=16/10 lis/c=10/10 les/c/f=11/11/0 sis=16) [0] r=0 lpr=16 pi=[10,16)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:35:02 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 17 pg[2.c( empty local-lis/les=16/17 n=0 ec=16/10 lis/c=10/10 les/c/f=11/11/0 sis=16) [0] r=0 lpr=16 pi=[10,16)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:35:02 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 17 pg[2.f( empty local-lis/les=16/17 n=0 ec=16/10 lis/c=10/10 les/c/f=11/11/0 sis=16) [0] r=0 lpr=16 pi=[10,16)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:35:02 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 2.1d scrub starts
Oct  9 09:35:02 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 2.1d scrub ok
Oct  9 09:35:03 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 2.1f scrub starts
Oct  9 09:35:03 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 2.1f scrub ok
Oct  9 09:35:04 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 2.1b scrub starts
Oct  9 09:35:04 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 2.1b scrub ok
Oct  9 09:35:05 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 2.9 scrub starts
Oct  9 09:35:05 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 2.9 scrub ok
Oct  9 09:35:06 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 2.6 deep-scrub starts
Oct  9 09:35:06 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 2.6 deep-scrub ok
Oct  9 09:35:07 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 2.1c scrub starts
Oct  9 09:35:07 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 2.1c scrub ok
Oct  9 09:35:08 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 22 pg[2.1f( empty local-lis/les=16/17 n=0 ec=16/10 lis/c=16/16 les/c/f=17/17/0 sis=22 pruub=9.940311432s) [1] r=-1 lpr=22 pi=[16,22)/1 crt=0'0 mlcod 0'0 active pruub 26.937086105s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 09:35:08 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 22 pg[2.1b( empty local-lis/les=16/17 n=0 ec=16/10 lis/c=16/16 les/c/f=17/17/0 sis=22 pruub=9.940385818s) [1] r=-1 lpr=22 pi=[16,22)/1 crt=0'0 mlcod 0'0 active pruub 26.937189102s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 09:35:08 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 22 pg[2.1f( empty local-lis/les=16/17 n=0 ec=16/10 lis/c=16/16 les/c/f=17/17/0 sis=22 pruub=9.940285683s) [1] r=-1 lpr=22 pi=[16,22)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 26.937086105s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 09:35:08 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 22 pg[2.1e( empty local-lis/les=16/17 n=0 ec=16/10 lis/c=16/16 les/c/f=17/17/0 sis=22 pruub=9.944421768s) [1] r=-1 lpr=22 pi=[16,22)/1 crt=0'0 mlcod 0'0 active pruub 26.941232681s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 09:35:08 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 22 pg[2.1b( empty local-lis/les=16/17 n=0 ec=16/10 lis/c=16/16 les/c/f=17/17/0 sis=22 pruub=9.940368652s) [1] r=-1 lpr=22 pi=[16,22)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 26.937189102s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 09:35:08 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 22 pg[2.1e( empty local-lis/les=16/17 n=0 ec=16/10 lis/c=16/16 les/c/f=17/17/0 sis=22 pruub=9.944404602s) [1] r=-1 lpr=22 pi=[16,22)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 26.941232681s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 09:35:08 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 22 pg[2.a( empty local-lis/les=16/17 n=0 ec=16/10 lis/c=16/16 les/c/f=17/17/0 sis=22 pruub=9.940256119s) [1] r=-1 lpr=22 pi=[16,22)/1 crt=0'0 mlcod 0'0 active pruub 26.937124252s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 09:35:08 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 22 pg[2.a( empty local-lis/les=16/17 n=0 ec=16/10 lis/c=16/16 les/c/f=17/17/0 sis=22 pruub=9.940241814s) [1] r=-1 lpr=22 pi=[16,22)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 26.937124252s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 09:35:08 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 22 pg[2.9( empty local-lis/les=16/17 n=0 ec=16/10 lis/c=16/16 les/c/f=17/17/0 sis=22 pruub=9.940279007s) [1] r=-1 lpr=22 pi=[16,22)/1 crt=0'0 mlcod 0'0 active pruub 26.937196732s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 09:35:08 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 22 pg[2.9( empty local-lis/les=16/17 n=0 ec=16/10 lis/c=16/16 les/c/f=17/17/0 sis=22 pruub=9.940266609s) [1] r=-1 lpr=22 pi=[16,22)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 26.937196732s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 09:35:08 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 22 pg[2.6( empty local-lis/les=16/17 n=0 ec=16/10 lis/c=16/16 les/c/f=17/17/0 sis=22 pruub=9.940261841s) [1] r=-1 lpr=22 pi=[16,22)/1 crt=0'0 mlcod 0'0 active pruub 26.937210083s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 09:35:08 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 22 pg[2.6( empty local-lis/les=16/17 n=0 ec=16/10 lis/c=16/16 les/c/f=17/17/0 sis=22 pruub=9.940230370s) [1] r=-1 lpr=22 pi=[16,22)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 26.937210083s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 09:35:08 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 22 pg[2.4( empty local-lis/les=16/17 n=0 ec=16/10 lis/c=16/16 les/c/f=17/17/0 sis=22 pruub=9.940224648s) [1] r=-1 lpr=22 pi=[16,22)/1 crt=0'0 mlcod 0'0 active pruub 26.937221527s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 09:35:08 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 22 pg[2.4( empty local-lis/les=16/17 n=0 ec=16/10 lis/c=16/16 les/c/f=17/17/0 sis=22 pruub=9.940214157s) [1] r=-1 lpr=22 pi=[16,22)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 26.937221527s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 09:35:08 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 22 pg[2.c( empty local-lis/les=16/17 n=0 ec=16/10 lis/c=16/16 les/c/f=17/17/0 sis=22 pruub=9.944583893s) [1] r=-1 lpr=22 pi=[16,22)/1 crt=0'0 mlcod 0'0 active pruub 26.941667557s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 09:35:08 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 22 pg[2.1( empty local-lis/les=16/17 n=0 ec=16/10 lis/c=16/16 les/c/f=17/17/0 sis=22 pruub=9.944221497s) [1] r=-1 lpr=22 pi=[16,22)/1 crt=0'0 mlcod 0'0 active pruub 26.941305161s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 09:35:08 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 22 pg[2.c( empty local-lis/les=16/17 n=0 ec=16/10 lis/c=16/16 les/c/f=17/17/0 sis=22 pruub=9.944576263s) [1] r=-1 lpr=22 pi=[16,22)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 26.941667557s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 09:35:08 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 22 pg[2.d( empty local-lis/les=16/17 n=0 ec=16/10 lis/c=16/16 les/c/f=17/17/0 sis=22 pruub=9.944540977s) [1] r=-1 lpr=22 pi=[16,22)/1 crt=0'0 mlcod 0'0 active pruub 26.941642761s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 09:35:08 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 22 pg[2.d( empty local-lis/les=16/17 n=0 ec=16/10 lis/c=16/16 les/c/f=17/17/0 sis=22 pruub=9.944530487s) [1] r=-1 lpr=22 pi=[16,22)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 26.941642761s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 09:35:08 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 22 pg[2.e( empty local-lis/les=16/17 n=0 ec=16/10 lis/c=16/16 les/c/f=17/17/0 sis=22 pruub=9.944205284s) [1] r=-1 lpr=22 pi=[16,22)/1 crt=0'0 mlcod 0'0 active pruub 26.941354752s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 09:35:08 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 22 pg[2.e( empty local-lis/les=16/17 n=0 ec=16/10 lis/c=16/16 les/c/f=17/17/0 sis=22 pruub=9.944196701s) [1] r=-1 lpr=22 pi=[16,22)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 26.941354752s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 09:35:08 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 22 pg[2.10( empty local-lis/les=16/17 n=0 ec=16/10 lis/c=16/16 les/c/f=17/17/0 sis=22 pruub=9.944177628s) [1] r=-1 lpr=22 pi=[16,22)/1 crt=0'0 mlcod 0'0 active pruub 26.941366196s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 09:35:08 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 22 pg[2.10( empty local-lis/les=16/17 n=0 ec=16/10 lis/c=16/16 les/c/f=17/17/0 sis=22 pruub=9.944170952s) [1] r=-1 lpr=22 pi=[16,22)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 26.941366196s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 09:35:08 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 22 pg[2.13( empty local-lis/les=16/17 n=0 ec=16/10 lis/c=16/16 les/c/f=17/17/0 sis=22 pruub=9.944331169s) [1] r=-1 lpr=22 pi=[16,22)/1 crt=0'0 mlcod 0'0 active pruub 26.941562653s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 09:35:08 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 22 pg[2.13( empty local-lis/les=16/17 n=0 ec=16/10 lis/c=16/16 les/c/f=17/17/0 sis=22 pruub=9.944323540s) [1] r=-1 lpr=22 pi=[16,22)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 26.941562653s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 09:35:08 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 22 pg[2.15( empty local-lis/les=16/17 n=0 ec=16/10 lis/c=16/16 les/c/f=17/17/0 sis=22 pruub=9.944312096s) [1] r=-1 lpr=22 pi=[16,22)/1 crt=0'0 mlcod 0'0 active pruub 26.941577911s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 09:35:08 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 22 pg[2.15( empty local-lis/les=16/17 n=0 ec=16/10 lis/c=16/16 les/c/f=17/17/0 sis=22 pruub=9.944303513s) [1] r=-1 lpr=22 pi=[16,22)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 26.941577911s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 09:35:08 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 22 pg[2.19( empty local-lis/les=16/17 n=0 ec=16/10 lis/c=16/16 les/c/f=17/17/0 sis=22 pruub=9.944275856s) [1] r=-1 lpr=22 pi=[16,22)/1 crt=0'0 mlcod 0'0 active pruub 26.941610336s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 09:35:08 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 22 pg[2.19( empty local-lis/les=16/17 n=0 ec=16/10 lis/c=16/16 les/c/f=17/17/0 sis=22 pruub=9.944268227s) [1] r=-1 lpr=22 pi=[16,22)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 26.941610336s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 09:35:08 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 22 pg[2.1( empty local-lis/les=16/17 n=0 ec=16/10 lis/c=16/16 les/c/f=17/17/0 sis=22 pruub=9.944208145s) [1] r=-1 lpr=22 pi=[16,22)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 26.941305161s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 09:35:08 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 2.8 scrub starts
Oct  9 09:35:08 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 2.8 scrub ok
Oct  9 09:35:09 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 2.7 scrub starts
Oct  9 09:35:09 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 2.7 scrub ok
Oct  9 09:35:10 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 2.2 scrub starts
Oct  9 09:35:10 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 2.2 scrub ok
Oct  9 09:35:11 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 2.5 deep-scrub starts
Oct  9 09:35:11 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 2.5 deep-scrub ok
Oct  9 09:35:12 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 2.0 scrub starts
Oct  9 09:35:12 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 2.0 scrub ok
Oct  9 09:35:13 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 2.3 scrub starts
Oct  9 09:35:13 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 2.3 scrub ok
Oct  9 09:35:14 compute-1 systemd[1268]: Starting Mark boot as successful...
Oct  9 09:35:14 compute-1 systemd[1268]: Finished Mark boot as successful.
Oct  9 09:35:14 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 2.b scrub starts
Oct  9 09:35:14 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 2.b scrub ok
Oct  9 09:35:14 compute-1 podman[9586]: 2025-10-09 09:35:14.722855854 +0000 UTC m=+0.023787858 container create 8999a076b53a3caca664e8202b8f229033a1cd26fd4e8a5ffeec34422525ea26 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=keen_gauss, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.schema-version=1.0)
Oct  9 09:35:14 compute-1 systemd[1]: Started libpod-conmon-8999a076b53a3caca664e8202b8f229033a1cd26fd4e8a5ffeec34422525ea26.scope.
Oct  9 09:35:14 compute-1 systemd[1]: Started libcrun container.
Oct  9 09:35:14 compute-1 podman[9586]: 2025-10-09 09:35:14.779591416 +0000 UTC m=+0.080523441 container init 8999a076b53a3caca664e8202b8f229033a1cd26fd4e8a5ffeec34422525ea26 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=keen_gauss, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  9 09:35:14 compute-1 podman[9586]: 2025-10-09 09:35:14.783745228 +0000 UTC m=+0.084677222 container start 8999a076b53a3caca664e8202b8f229033a1cd26fd4e8a5ffeec34422525ea26 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=keen_gauss, CEPH_REF=squid, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  9 09:35:14 compute-1 keen_gauss[9599]: 167 167
Oct  9 09:35:14 compute-1 systemd[1]: libpod-8999a076b53a3caca664e8202b8f229033a1cd26fd4e8a5ffeec34422525ea26.scope: Deactivated successfully.
Oct  9 09:35:14 compute-1 podman[9586]: 2025-10-09 09:35:14.787191224 +0000 UTC m=+0.088123238 container attach 8999a076b53a3caca664e8202b8f229033a1cd26fd4e8a5ffeec34422525ea26 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=keen_gauss, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  9 09:35:14 compute-1 podman[9586]: 2025-10-09 09:35:14.788083196 +0000 UTC m=+0.089015190 container died 8999a076b53a3caca664e8202b8f229033a1cd26fd4e8a5ffeec34422525ea26 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=keen_gauss, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Oct  9 09:35:14 compute-1 systemd[1]: var-lib-containers-storage-overlay-b4fc8d1e1f4e46a0ccfa57819bb46e04067713be6e09287591b3805bb535613e-merged.mount: Deactivated successfully.
Oct  9 09:35:14 compute-1 podman[9586]: 2025-10-09 09:35:14.80888345 +0000 UTC m=+0.109815434 container remove 8999a076b53a3caca664e8202b8f229033a1cd26fd4e8a5ffeec34422525ea26 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=keen_gauss, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  9 09:35:14 compute-1 podman[9586]: 2025-10-09 09:35:14.712726699 +0000 UTC m=+0.013658713 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  9 09:35:14 compute-1 systemd[1]: libpod-conmon-8999a076b53a3caca664e8202b8f229033a1cd26fd4e8a5ffeec34422525ea26.scope: Deactivated successfully.
Oct  9 09:35:14 compute-1 podman[9613]: 2025-10-09 09:35:14.853003388 +0000 UTC m=+0.031096847 container create 76ef7e4b8d2031715adc9a3cc91a4374e9e2fc489b315e90fc772a0bc4251a9d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=amazing_kilby, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  9 09:35:14 compute-1 systemd[1]: Started libpod-conmon-76ef7e4b8d2031715adc9a3cc91a4374e9e2fc489b315e90fc772a0bc4251a9d.scope.
Oct  9 09:35:14 compute-1 systemd[1]: Started libcrun container.
Oct  9 09:35:14 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6e211ccb6c49ec1e466084a10ec21dfa556a017e8f7a4c43c6ebfc877477f711/merged/tmp/config supports timestamps until 2038 (0x7fffffff)
Oct  9 09:35:14 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6e211ccb6c49ec1e466084a10ec21dfa556a017e8f7a4c43c6ebfc877477f711/merged/tmp/keyring supports timestamps until 2038 (0x7fffffff)
Oct  9 09:35:14 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6e211ccb6c49ec1e466084a10ec21dfa556a017e8f7a4c43c6ebfc877477f711/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  9 09:35:14 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6e211ccb6c49ec1e466084a10ec21dfa556a017e8f7a4c43c6ebfc877477f711/merged/var/lib/ceph/mon/ceph-compute-1 supports timestamps until 2038 (0x7fffffff)
Oct  9 09:35:14 compute-1 podman[9613]: 2025-10-09 09:35:14.907820343 +0000 UTC m=+0.085913801 container init 76ef7e4b8d2031715adc9a3cc91a4374e9e2fc489b315e90fc772a0bc4251a9d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=amazing_kilby, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1)
Oct  9 09:35:14 compute-1 podman[9613]: 2025-10-09 09:35:14.911634484 +0000 UTC m=+0.089727942 container start 76ef7e4b8d2031715adc9a3cc91a4374e9e2fc489b315e90fc772a0bc4251a9d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=amazing_kilby, io.buildah.version=1.40.1, ceph=True, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default)
Oct  9 09:35:14 compute-1 podman[9613]: 2025-10-09 09:35:14.912615343 +0000 UTC m=+0.090708801 container attach 76ef7e4b8d2031715adc9a3cc91a4374e9e2fc489b315e90fc772a0bc4251a9d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=amazing_kilby, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default)
Oct  9 09:35:14 compute-1 podman[9613]: 2025-10-09 09:35:14.8366405 +0000 UTC m=+0.014733978 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  9 09:35:14 compute-1 systemd[1]: libpod-76ef7e4b8d2031715adc9a3cc91a4374e9e2fc489b315e90fc772a0bc4251a9d.scope: Deactivated successfully.
Oct  9 09:35:14 compute-1 podman[9613]: 2025-10-09 09:35:14.95088407 +0000 UTC m=+0.128977528 container died 76ef7e4b8d2031715adc9a3cc91a4374e9e2fc489b315e90fc772a0bc4251a9d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=amazing_kilby, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  9 09:35:14 compute-1 systemd[1]: var-lib-containers-storage-overlay-6e211ccb6c49ec1e466084a10ec21dfa556a017e8f7a4c43c6ebfc877477f711-merged.mount: Deactivated successfully.
Oct  9 09:35:14 compute-1 podman[9613]: 2025-10-09 09:35:14.967116624 +0000 UTC m=+0.145210082 container remove 76ef7e4b8d2031715adc9a3cc91a4374e9e2fc489b315e90fc772a0bc4251a9d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=amazing_kilby, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  9 09:35:14 compute-1 systemd[1]: libpod-conmon-76ef7e4b8d2031715adc9a3cc91a4374e9e2fc489b315e90fc772a0bc4251a9d.scope: Deactivated successfully.
Oct  9 09:35:14 compute-1 systemd[1]: Reloading.
Oct  9 09:35:15 compute-1 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  9 09:35:15 compute-1 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  9 09:35:15 compute-1 systemd[1]: Reloading.
Oct  9 09:35:15 compute-1 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  9 09:35:15 compute-1 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  9 09:35:15 compute-1 systemd[1]: Starting Ceph mon.compute-1 for 286f8bf0-da72-5823-9a4e-ac4457d9e609...
Oct  9 09:35:15 compute-1 podman[9779]: 2025-10-09 09:35:15.504576114 +0000 UTC m=+0.025394608 container create e3c4abd37c3ede7431f896d3dc6226c8674cda33134f769dc780272f31a2cc63 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mon-compute-1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  9 09:35:15 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/afb6b7968448a1334a01368ec30e24351dca9a9e498f66aa3977a5fed5ff6cb6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  9 09:35:15 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/afb6b7968448a1334a01368ec30e24351dca9a9e498f66aa3977a5fed5ff6cb6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  9 09:35:15 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/afb6b7968448a1334a01368ec30e24351dca9a9e498f66aa3977a5fed5ff6cb6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  9 09:35:15 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/afb6b7968448a1334a01368ec30e24351dca9a9e498f66aa3977a5fed5ff6cb6/merged/var/lib/ceph/mon/ceph-compute-1 supports timestamps until 2038 (0x7fffffff)
Oct  9 09:35:15 compute-1 podman[9779]: 2025-10-09 09:35:15.549099773 +0000 UTC m=+0.069918267 container init e3c4abd37c3ede7431f896d3dc6226c8674cda33134f769dc780272f31a2cc63 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mon-compute-1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  9 09:35:15 compute-1 podman[9779]: 2025-10-09 09:35:15.553065839 +0000 UTC m=+0.073884324 container start e3c4abd37c3ede7431f896d3dc6226c8674cda33134f769dc780272f31a2cc63 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mon-compute-1, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  9 09:35:15 compute-1 bash[9779]: e3c4abd37c3ede7431f896d3dc6226c8674cda33134f769dc780272f31a2cc63
Oct  9 09:35:15 compute-1 podman[9779]: 2025-10-09 09:35:15.494476773 +0000 UTC m=+0.015295267 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  9 09:35:15 compute-1 systemd[1]: Started Ceph mon.compute-1 for 286f8bf0-da72-5823-9a4e-ac4457d9e609.
Oct  9 09:35:15 compute-1 ceph-mon[9795]: set uid:gid to 167:167 (ceph:ceph)
Oct  9 09:35:15 compute-1 ceph-mon[9795]: ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable), process ceph-mon, pid 2
Oct  9 09:35:15 compute-1 ceph-mon[9795]: pidfile_write: ignore empty --pid-file
Oct  9 09:35:15 compute-1 ceph-mon[9795]: load: jerasure load: lrc 
Oct  9 09:35:15 compute-1 ceph-mon[9795]: rocksdb: RocksDB version: 7.9.2
Oct  9 09:35:15 compute-1 ceph-mon[9795]: rocksdb: Git sha 0
Oct  9 09:35:15 compute-1 ceph-mon[9795]: rocksdb: Compile date 2025-07-17 03:12:14
Oct  9 09:35:15 compute-1 ceph-mon[9795]: rocksdb: DB SUMMARY
Oct  9 09:35:15 compute-1 ceph-mon[9795]: rocksdb: DB Session ID:  M9CZJU0HKVV71NP1SGV8
Oct  9 09:35:15 compute-1 ceph-mon[9795]: rocksdb: CURRENT file:  CURRENT
Oct  9 09:35:15 compute-1 ceph-mon[9795]: rocksdb: IDENTITY file:  IDENTITY
Oct  9 09:35:15 compute-1 ceph-mon[9795]: rocksdb: MANIFEST file:  MANIFEST-000005 size: 59 Bytes
Oct  9 09:35:15 compute-1 ceph-mon[9795]: rocksdb: SST files in /var/lib/ceph/mon/ceph-compute-1/store.db dir, Total Num: 0, files: 
Oct  9 09:35:15 compute-1 ceph-mon[9795]: rocksdb: Write Ahead Log file in /var/lib/ceph/mon/ceph-compute-1/store.db: 000004.log size: 511 ; 
Oct  9 09:35:15 compute-1 ceph-mon[9795]: rocksdb:                         Options.error_if_exists: 0
Oct  9 09:35:15 compute-1 ceph-mon[9795]: rocksdb:                       Options.create_if_missing: 0
Oct  9 09:35:15 compute-1 ceph-mon[9795]: rocksdb:                         Options.paranoid_checks: 1
Oct  9 09:35:15 compute-1 ceph-mon[9795]: rocksdb:             Options.flush_verify_memtable_count: 1
Oct  9 09:35:15 compute-1 ceph-mon[9795]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Oct  9 09:35:15 compute-1 ceph-mon[9795]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Oct  9 09:35:15 compute-1 ceph-mon[9795]: rocksdb:                                     Options.env: 0x55e4b3b9dc20
Oct  9 09:35:15 compute-1 ceph-mon[9795]: rocksdb:                                      Options.fs: PosixFileSystem
Oct  9 09:35:15 compute-1 ceph-mon[9795]: rocksdb:                                Options.info_log: 0x55e4b559fa20
Oct  9 09:35:15 compute-1 ceph-mon[9795]: rocksdb:                Options.max_file_opening_threads: 16
Oct  9 09:35:15 compute-1 ceph-mon[9795]: rocksdb:                              Options.statistics: (nil)
Oct  9 09:35:15 compute-1 ceph-mon[9795]: rocksdb:                               Options.use_fsync: 0
Oct  9 09:35:15 compute-1 ceph-mon[9795]: rocksdb:                       Options.max_log_file_size: 0
Oct  9 09:35:15 compute-1 ceph-mon[9795]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Oct  9 09:35:15 compute-1 ceph-mon[9795]: rocksdb:                   Options.log_file_time_to_roll: 0
Oct  9 09:35:15 compute-1 ceph-mon[9795]: rocksdb:                       Options.keep_log_file_num: 1000
Oct  9 09:35:15 compute-1 ceph-mon[9795]: rocksdb:                    Options.recycle_log_file_num: 0
Oct  9 09:35:15 compute-1 ceph-mon[9795]: rocksdb:                         Options.allow_fallocate: 1
Oct  9 09:35:15 compute-1 ceph-mon[9795]: rocksdb:                        Options.allow_mmap_reads: 0
Oct  9 09:35:15 compute-1 ceph-mon[9795]: rocksdb:                       Options.allow_mmap_writes: 0
Oct  9 09:35:15 compute-1 ceph-mon[9795]: rocksdb:                        Options.use_direct_reads: 0
Oct  9 09:35:15 compute-1 ceph-mon[9795]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Oct  9 09:35:15 compute-1 ceph-mon[9795]: rocksdb:          Options.create_missing_column_families: 0
Oct  9 09:35:15 compute-1 ceph-mon[9795]: rocksdb:                              Options.db_log_dir: 
Oct  9 09:35:15 compute-1 ceph-mon[9795]: rocksdb:                                 Options.wal_dir: 
Oct  9 09:35:15 compute-1 ceph-mon[9795]: rocksdb:                Options.table_cache_numshardbits: 6
Oct  9 09:35:15 compute-1 ceph-mon[9795]: rocksdb:                         Options.WAL_ttl_seconds: 0
Oct  9 09:35:15 compute-1 ceph-mon[9795]: rocksdb:                       Options.WAL_size_limit_MB: 0
Oct  9 09:35:15 compute-1 ceph-mon[9795]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Oct  9 09:35:15 compute-1 ceph-mon[9795]: rocksdb:             Options.manifest_preallocation_size: 4194304
Oct  9 09:35:15 compute-1 ceph-mon[9795]: rocksdb:                     Options.is_fd_close_on_exec: 1
Oct  9 09:35:15 compute-1 ceph-mon[9795]: rocksdb:                   Options.advise_random_on_open: 1
Oct  9 09:35:15 compute-1 ceph-mon[9795]: rocksdb:                    Options.db_write_buffer_size: 0
Oct  9 09:35:15 compute-1 ceph-mon[9795]: rocksdb:                    Options.write_buffer_manager: 0x55e4b55a3900
Oct  9 09:35:15 compute-1 ceph-mon[9795]: rocksdb:         Options.access_hint_on_compaction_start: 1
Oct  9 09:35:15 compute-1 ceph-mon[9795]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Oct  9 09:35:15 compute-1 ceph-mon[9795]: rocksdb:                      Options.use_adaptive_mutex: 0
Oct  9 09:35:15 compute-1 ceph-mon[9795]: rocksdb:                            Options.rate_limiter: (nil)
Oct  9 09:35:15 compute-1 ceph-mon[9795]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Oct  9 09:35:15 compute-1 ceph-mon[9795]: rocksdb:                       Options.wal_recovery_mode: 2
Oct  9 09:35:15 compute-1 ceph-mon[9795]: rocksdb:                  Options.enable_thread_tracking: 0
Oct  9 09:35:15 compute-1 ceph-mon[9795]: rocksdb:                  Options.enable_pipelined_write: 0
Oct  9 09:35:15 compute-1 ceph-mon[9795]: rocksdb:                  Options.unordered_write: 0
Oct  9 09:35:15 compute-1 ceph-mon[9795]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Oct  9 09:35:15 compute-1 ceph-mon[9795]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Oct  9 09:35:15 compute-1 ceph-mon[9795]: rocksdb:             Options.write_thread_max_yield_usec: 100
Oct  9 09:35:15 compute-1 ceph-mon[9795]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Oct  9 09:35:15 compute-1 ceph-mon[9795]: rocksdb:                               Options.row_cache: None
Oct  9 09:35:15 compute-1 ceph-mon[9795]: rocksdb:                              Options.wal_filter: None
Oct  9 09:35:15 compute-1 ceph-mon[9795]: rocksdb:             Options.avoid_flush_during_recovery: 0
Oct  9 09:35:15 compute-1 ceph-mon[9795]: rocksdb:             Options.allow_ingest_behind: 0
Oct  9 09:35:15 compute-1 ceph-mon[9795]: rocksdb:             Options.two_write_queues: 0
Oct  9 09:35:15 compute-1 ceph-mon[9795]: rocksdb:             Options.manual_wal_flush: 0
Oct  9 09:35:15 compute-1 ceph-mon[9795]: rocksdb:             Options.wal_compression: 0
Oct  9 09:35:15 compute-1 ceph-mon[9795]: rocksdb:             Options.atomic_flush: 0
Oct  9 09:35:15 compute-1 ceph-mon[9795]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Oct  9 09:35:15 compute-1 ceph-mon[9795]: rocksdb:                 Options.persist_stats_to_disk: 0
Oct  9 09:35:15 compute-1 ceph-mon[9795]: rocksdb:                 Options.write_dbid_to_manifest: 0
Oct  9 09:35:15 compute-1 ceph-mon[9795]: rocksdb:                 Options.log_readahead_size: 0
Oct  9 09:35:15 compute-1 ceph-mon[9795]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Oct  9 09:35:15 compute-1 ceph-mon[9795]: rocksdb:                 Options.best_efforts_recovery: 0
Oct  9 09:35:15 compute-1 ceph-mon[9795]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Oct  9 09:35:15 compute-1 ceph-mon[9795]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Oct  9 09:35:15 compute-1 ceph-mon[9795]: rocksdb:             Options.allow_data_in_errors: 0
Oct  9 09:35:15 compute-1 ceph-mon[9795]: rocksdb:             Options.db_host_id: __hostname__
Oct  9 09:35:15 compute-1 ceph-mon[9795]: rocksdb:             Options.enforce_single_del_contracts: true
Oct  9 09:35:15 compute-1 ceph-mon[9795]: rocksdb:             Options.max_background_jobs: 2
Oct  9 09:35:15 compute-1 ceph-mon[9795]: rocksdb:             Options.max_background_compactions: -1
Oct  9 09:35:15 compute-1 ceph-mon[9795]: rocksdb:             Options.max_subcompactions: 1
Oct  9 09:35:15 compute-1 ceph-mon[9795]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Oct  9 09:35:15 compute-1 ceph-mon[9795]: rocksdb:           Options.writable_file_max_buffer_size: 1048576
Oct  9 09:35:15 compute-1 ceph-mon[9795]: rocksdb:             Options.delayed_write_rate : 16777216
Oct  9 09:35:15 compute-1 ceph-mon[9795]: rocksdb:             Options.max_total_wal_size: 0
Oct  9 09:35:15 compute-1 ceph-mon[9795]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Oct  9 09:35:15 compute-1 ceph-mon[9795]: rocksdb:                   Options.stats_dump_period_sec: 600
Oct  9 09:35:15 compute-1 ceph-mon[9795]: rocksdb:                 Options.stats_persist_period_sec: 600
Oct  9 09:35:15 compute-1 ceph-mon[9795]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Oct  9 09:35:15 compute-1 ceph-mon[9795]: rocksdb:                          Options.max_open_files: -1
Oct  9 09:35:15 compute-1 ceph-mon[9795]: rocksdb:                          Options.bytes_per_sync: 0
Oct  9 09:35:15 compute-1 ceph-mon[9795]: rocksdb:                      Options.wal_bytes_per_sync: 0
Oct  9 09:35:15 compute-1 ceph-mon[9795]: rocksdb:                   Options.strict_bytes_per_sync: 0
Oct  9 09:35:15 compute-1 ceph-mon[9795]: rocksdb:       Options.compaction_readahead_size: 0
Oct  9 09:35:15 compute-1 ceph-mon[9795]: rocksdb:                  Options.max_background_flushes: -1
Oct  9 09:35:15 compute-1 ceph-mon[9795]: rocksdb: Compression algorithms supported:
Oct  9 09:35:15 compute-1 ceph-mon[9795]: rocksdb: #011kZSTD supported: 0
Oct  9 09:35:15 compute-1 ceph-mon[9795]: rocksdb: #011kXpressCompression supported: 0
Oct  9 09:35:15 compute-1 ceph-mon[9795]: rocksdb: #011kBZip2Compression supported: 0
Oct  9 09:35:15 compute-1 ceph-mon[9795]: rocksdb: #011kZSTDNotFinalCompression supported: 0
Oct  9 09:35:15 compute-1 ceph-mon[9795]: rocksdb: #011kLZ4Compression supported: 1
Oct  9 09:35:15 compute-1 ceph-mon[9795]: rocksdb: #011kZlibCompression supported: 1
Oct  9 09:35:15 compute-1 ceph-mon[9795]: rocksdb: #011kLZ4HCCompression supported: 1
Oct  9 09:35:15 compute-1 ceph-mon[9795]: rocksdb: #011kSnappyCompression supported: 1
Oct  9 09:35:15 compute-1 ceph-mon[9795]: rocksdb: Fast CRC32 supported: Supported on x86
Oct  9 09:35:15 compute-1 ceph-mon[9795]: rocksdb: DMutex implementation: pthread_mutex_t
Oct  9 09:35:15 compute-1 ceph-mon[9795]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: /var/lib/ceph/mon/ceph-compute-1/store.db/MANIFEST-000005
Oct  9 09:35:15 compute-1 ceph-mon[9795]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Oct  9 09:35:15 compute-1 ceph-mon[9795]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct  9 09:35:15 compute-1 ceph-mon[9795]: rocksdb:           Options.merge_operator: 
Oct  9 09:35:15 compute-1 ceph-mon[9795]: rocksdb:        Options.compaction_filter: None
Oct  9 09:35:15 compute-1 ceph-mon[9795]: rocksdb:        Options.compaction_filter_factory: None
Oct  9 09:35:15 compute-1 ceph-mon[9795]: rocksdb:  Options.sst_partitioner_factory: None
Oct  9 09:35:15 compute-1 ceph-mon[9795]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct  9 09:35:15 compute-1 ceph-mon[9795]: rocksdb:            Options.table_factory: BlockBasedTable
Oct  9 09:35:15 compute-1 ceph-mon[9795]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55e4b559f6a0)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55e4b55c29b0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Oct  9 09:35:15 compute-1 ceph-mon[9795]: rocksdb:        Options.write_buffer_size: 33554432
Oct  9 09:35:15 compute-1 ceph-mon[9795]: rocksdb:  Options.max_write_buffer_number: 2
Oct  9 09:35:15 compute-1 ceph-mon[9795]: rocksdb:          Options.compression: NoCompression
Oct  9 09:35:15 compute-1 ceph-mon[9795]: rocksdb:                  Options.bottommost_compression: Disabled
Oct  9 09:35:15 compute-1 ceph-mon[9795]: rocksdb:       Options.prefix_extractor: nullptr
Oct  9 09:35:15 compute-1 ceph-mon[9795]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct  9 09:35:15 compute-1 ceph-mon[9795]: rocksdb:             Options.num_levels: 7
Oct  9 09:35:15 compute-1 ceph-mon[9795]: rocksdb:        Options.min_write_buffer_number_to_merge: 1
Oct  9 09:35:15 compute-1 ceph-mon[9795]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct  9 09:35:15 compute-1 ceph-mon[9795]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct  9 09:35:15 compute-1 ceph-mon[9795]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct  9 09:35:15 compute-1 ceph-mon[9795]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct  9 09:35:15 compute-1 ceph-mon[9795]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct  9 09:35:15 compute-1 ceph-mon[9795]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct  9 09:35:15 compute-1 ceph-mon[9795]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct  9 09:35:15 compute-1 ceph-mon[9795]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct  9 09:35:15 compute-1 ceph-mon[9795]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct  9 09:35:15 compute-1 ceph-mon[9795]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct  9 09:35:15 compute-1 ceph-mon[9795]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct  9 09:35:15 compute-1 ceph-mon[9795]: rocksdb:            Options.compression_opts.window_bits: -14
Oct  9 09:35:15 compute-1 ceph-mon[9795]: rocksdb:                  Options.compression_opts.level: 32767
Oct  9 09:35:15 compute-1 ceph-mon[9795]: rocksdb:               Options.compression_opts.strategy: 0
Oct  9 09:35:15 compute-1 ceph-mon[9795]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct  9 09:35:15 compute-1 ceph-mon[9795]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct  9 09:35:15 compute-1 ceph-mon[9795]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct  9 09:35:15 compute-1 ceph-mon[9795]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct  9 09:35:15 compute-1 ceph-mon[9795]: rocksdb:                  Options.compression_opts.enabled: false
Oct  9 09:35:15 compute-1 ceph-mon[9795]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct  9 09:35:15 compute-1 ceph-mon[9795]: rocksdb:      Options.level0_file_num_compaction_trigger: 4
Oct  9 09:35:15 compute-1 ceph-mon[9795]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct  9 09:35:15 compute-1 ceph-mon[9795]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct  9 09:35:15 compute-1 ceph-mon[9795]: rocksdb:                   Options.target_file_size_base: 67108864
Oct  9 09:35:15 compute-1 ceph-mon[9795]: rocksdb:             Options.target_file_size_multiplier: 1
Oct  9 09:35:15 compute-1 ceph-mon[9795]: rocksdb:                Options.max_bytes_for_level_base: 268435456
Oct  9 09:35:15 compute-1 ceph-mon[9795]: rocksdb: Options.level_compaction_dynamic_level_bytes: 1
Oct  9 09:35:15 compute-1 ceph-mon[9795]: rocksdb:          Options.max_bytes_for_level_multiplier: 10.000000
Oct  9 09:35:15 compute-1 ceph-mon[9795]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct  9 09:35:15 compute-1 ceph-mon[9795]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct  9 09:35:15 compute-1 ceph-mon[9795]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct  9 09:35:15 compute-1 ceph-mon[9795]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct  9 09:35:15 compute-1 ceph-mon[9795]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct  9 09:35:15 compute-1 ceph-mon[9795]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct  9 09:35:15 compute-1 ceph-mon[9795]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct  9 09:35:15 compute-1 ceph-mon[9795]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct  9 09:35:15 compute-1 ceph-mon[9795]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct  9 09:35:15 compute-1 ceph-mon[9795]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct  9 09:35:15 compute-1 ceph-mon[9795]: rocksdb:                        Options.arena_block_size: 1048576
Oct  9 09:35:15 compute-1 ceph-mon[9795]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct  9 09:35:15 compute-1 ceph-mon[9795]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct  9 09:35:15 compute-1 ceph-mon[9795]: rocksdb:                Options.disable_auto_compactions: 0
Oct  9 09:35:15 compute-1 ceph-mon[9795]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct  9 09:35:15 compute-1 ceph-mon[9795]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct  9 09:35:15 compute-1 ceph-mon[9795]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct  9 09:35:15 compute-1 ceph-mon[9795]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct  9 09:35:15 compute-1 ceph-mon[9795]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct  9 09:35:15 compute-1 ceph-mon[9795]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct  9 09:35:15 compute-1 ceph-mon[9795]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct  9 09:35:15 compute-1 ceph-mon[9795]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct  9 09:35:15 compute-1 ceph-mon[9795]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct  9 09:35:15 compute-1 ceph-mon[9795]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct  9 09:35:15 compute-1 ceph-mon[9795]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct  9 09:35:15 compute-1 ceph-mon[9795]: rocksdb:                   Options.inplace_update_support: 0
Oct  9 09:35:15 compute-1 ceph-mon[9795]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct  9 09:35:15 compute-1 ceph-mon[9795]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct  9 09:35:15 compute-1 ceph-mon[9795]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct  9 09:35:15 compute-1 ceph-mon[9795]: rocksdb:   Options.memtable_huge_page_size: 0
Oct  9 09:35:15 compute-1 ceph-mon[9795]: rocksdb:                           Options.bloom_locality: 0
Oct  9 09:35:15 compute-1 ceph-mon[9795]: rocksdb:                    Options.max_successive_merges: 0
Oct  9 09:35:15 compute-1 ceph-mon[9795]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct  9 09:35:15 compute-1 ceph-mon[9795]: rocksdb:                Options.paranoid_file_checks: 0
Oct  9 09:35:15 compute-1 ceph-mon[9795]: rocksdb:                Options.force_consistency_checks: 1
Oct  9 09:35:15 compute-1 ceph-mon[9795]: rocksdb:                Options.report_bg_io_stats: 0
Oct  9 09:35:15 compute-1 ceph-mon[9795]: rocksdb:                               Options.ttl: 2592000
Oct  9 09:35:15 compute-1 ceph-mon[9795]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct  9 09:35:15 compute-1 ceph-mon[9795]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct  9 09:35:15 compute-1 ceph-mon[9795]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct  9 09:35:15 compute-1 ceph-mon[9795]: rocksdb:                       Options.enable_blob_files: false
Oct  9 09:35:15 compute-1 ceph-mon[9795]: rocksdb:                           Options.min_blob_size: 0
Oct  9 09:35:15 compute-1 ceph-mon[9795]: rocksdb:                          Options.blob_file_size: 268435456
Oct  9 09:35:15 compute-1 ceph-mon[9795]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct  9 09:35:15 compute-1 ceph-mon[9795]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct  9 09:35:15 compute-1 ceph-mon[9795]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct  9 09:35:15 compute-1 ceph-mon[9795]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct  9 09:35:15 compute-1 ceph-mon[9795]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct  9 09:35:15 compute-1 ceph-mon[9795]: rocksdb:                Options.blob_file_starting_level: 0
Oct  9 09:35:15 compute-1 ceph-mon[9795]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct  9 09:35:15 compute-1 ceph-mon[9795]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:/var/lib/ceph/mon/ceph-compute-1/store.db/MANIFEST-000005 succeeded,manifest_file_number is 5, next_file_number is 7, last_sequence is 0, log_number is 0,prev_log_number is 0,max_column_family is 0,min_log_number_to_keep is 0
Oct  9 09:35:15 compute-1 ceph-mon[9795]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 0
Oct  9 09:35:15 compute-1 ceph-mon[9795]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 94a5d839-0858-4e7b-94a4-0a54b15338db
Oct  9 09:35:15 compute-1 ceph-mon[9795]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760002515584213, "job": 1, "event": "recovery_started", "wal_files": [4]}
Oct  9 09:35:15 compute-1 ceph-mon[9795]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #4 mode 2
Oct  9 09:35:15 compute-1 ceph-mon[9795]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760002515585012, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 8, "file_size": 1648, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 1, "largest_seqno": 5, "table_properties": {"data_size": 523, "index_size": 31, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 115, "raw_average_key_size": 23, "raw_value_size": 401, "raw_average_value_size": 80, "num_data_blocks": 1, "num_entries": 5, "num_filter_entries": 5, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760002515, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "94a5d839-0858-4e7b-94a4-0a54b15338db", "db_session_id": "M9CZJU0HKVV71NP1SGV8", "orig_file_number": 8, "seqno_to_time_mapping": "N/A"}}
Oct  9 09:35:15 compute-1 ceph-mon[9795]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760002515585086, "job": 1, "event": "recovery_finished"}
Oct  9 09:35:15 compute-1 ceph-mon[9795]: rocksdb: [db/version_set.cc:5047] Creating manifest 10
Oct  9 09:35:15 compute-1 ceph-mon[9795]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000004.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  9 09:35:15 compute-1 ceph-mon[9795]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x55e4b55c4e00
Oct  9 09:35:15 compute-1 ceph-mon[9795]: rocksdb: DB pointer 0x55e4b55d4000
Oct  9 09:35:15 compute-1 ceph-mon[9795]: mon.compute-1 does not exist in monmap, will attempt to join an existing cluster
Oct  9 09:35:15 compute-1 ceph-mon[9795]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct  9 09:35:15 compute-1 ceph-mon[9795]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 0.0 total, 0.0 interval#012Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s#012Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s#012Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      1/0    1.61 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      2.0      0.00              0.00         1    0.001       0      0       0.0       0.0#012 Sum      1/0    1.61 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      2.0      0.00              0.00         1    0.001       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      2.0      0.00              0.00         1    0.001       0      0       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      2.0      0.00              0.00         1    0.001       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 0.0 total, 0.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.28 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.28 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55e4b55c29b0#2 capacity: 512.00 MB usage: 0.86 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 0 last_secs: 1.9e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(1,0.64 KB,0.00012219%) FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.11 KB,2.08616e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **
Oct  9 09:35:15 compute-1 ceph-mon[9795]: using public_addr v2:192.168.122.101:0/0 -> [v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0]
Oct  9 09:35:15 compute-1 ceph-mon[9795]: starting mon.compute-1 rank -1 at public addrs [v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0] at bind addrs [v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0] mon_data /var/lib/ceph/mon/ceph-compute-1 fsid 286f8bf0-da72-5823-9a4e-ac4457d9e609
Oct  9 09:35:15 compute-1 ceph-mon[9795]: mon.compute-1@-1(???) e0 preinit fsid 286f8bf0-da72-5823-9a4e-ac4457d9e609
Oct  9 09:35:15 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 2.f deep-scrub starts
Oct  9 09:35:15 compute-1 ceph-mon[9795]: mon.compute-1@-1(synchronizing).mds e1 new map
Oct  9 09:35:15 compute-1 ceph-mon[9795]: mon.compute-1@-1(synchronizing).mds e1 print_map#012e1#012btime 2025-10-09T09:33:39.705322+0000#012enable_multiple, ever_enabled_multiple: 1,1#012default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}#012legacy client fscid: -1#012 #012No filesystems configured
Oct  9 09:35:15 compute-1 ceph-mon[9795]: mon.compute-1@-1(synchronizing).osd e0 _set_cache_ratios kv ratio 0.25 inc ratio 0.375 full ratio 0.375
Oct  9 09:35:15 compute-1 ceph-mon[9795]: mon.compute-1@-1(synchronizing).osd e0 register_cache_with_pcm pcm target: 2147483648 pcm max: 1020054732 pcm min: 134217728 inc_osd_cache size: 1
Oct  9 09:35:15 compute-1 ceph-mon[9795]: mon.compute-1@-1(synchronizing).osd e1 e1: 0 total, 0 up, 0 in
Oct  9 09:35:15 compute-1 ceph-mon[9795]: mon.compute-1@-1(synchronizing).osd e2 e2: 0 total, 0 up, 0 in
Oct  9 09:35:15 compute-1 ceph-mon[9795]: mon.compute-1@-1(synchronizing).osd e3 e3: 0 total, 0 up, 0 in
Oct  9 09:35:15 compute-1 ceph-mon[9795]: mon.compute-1@-1(synchronizing).osd e4 e4: 1 total, 0 up, 1 in
Oct  9 09:35:15 compute-1 ceph-mon[9795]: mon.compute-1@-1(synchronizing).osd e5 e5: 2 total, 0 up, 2 in
Oct  9 09:35:15 compute-1 ceph-mon[9795]: mon.compute-1@-1(synchronizing).osd e6 e6: 2 total, 0 up, 2 in
Oct  9 09:35:15 compute-1 ceph-mon[9795]: mon.compute-1@-1(synchronizing).osd e7 e7: 2 total, 0 up, 2 in
Oct  9 09:35:15 compute-1 ceph-mon[9795]: mon.compute-1@-1(synchronizing).osd e8 e8: 2 total, 1 up, 2 in
Oct  9 09:35:15 compute-1 ceph-mon[9795]: mon.compute-1@-1(synchronizing).osd e9 e9: 2 total, 2 up, 2 in
Oct  9 09:35:15 compute-1 ceph-mon[9795]: mon.compute-1@-1(synchronizing).osd e10 e10: 2 total, 2 up, 2 in
Oct  9 09:35:15 compute-1 ceph-mon[9795]: mon.compute-1@-1(synchronizing).osd e11 e11: 2 total, 2 up, 2 in
Oct  9 09:35:15 compute-1 ceph-mon[9795]: mon.compute-1@-1(synchronizing).osd e12 e12: 2 total, 2 up, 2 in
Oct  9 09:35:15 compute-1 ceph-mon[9795]: mon.compute-1@-1(synchronizing).osd e13 e13: 2 total, 2 up, 2 in
Oct  9 09:35:15 compute-1 ceph-mon[9795]: mon.compute-1@-1(synchronizing).osd e14 e14: 2 total, 2 up, 2 in
Oct  9 09:35:15 compute-1 ceph-mon[9795]: mon.compute-1@-1(synchronizing).osd e15 e15: 2 total, 2 up, 2 in
Oct  9 09:35:15 compute-1 ceph-mon[9795]: mon.compute-1@-1(synchronizing).osd e16 e16: 2 total, 2 up, 2 in
Oct  9 09:35:15 compute-1 ceph-mon[9795]: mon.compute-1@-1(synchronizing).osd e17 e17: 2 total, 2 up, 2 in
Oct  9 09:35:15 compute-1 ceph-mon[9795]: mon.compute-1@-1(synchronizing).osd e18 e18: 2 total, 2 up, 2 in
Oct  9 09:35:15 compute-1 ceph-mon[9795]: mon.compute-1@-1(synchronizing).osd e19 e19: 2 total, 2 up, 2 in
Oct  9 09:35:15 compute-1 ceph-mon[9795]: mon.compute-1@-1(synchronizing).osd e20 e20: 2 total, 2 up, 2 in
Oct  9 09:35:15 compute-1 ceph-mon[9795]: mon.compute-1@-1(synchronizing).osd e21 e21: 2 total, 2 up, 2 in
Oct  9 09:35:15 compute-1 ceph-mon[9795]: mon.compute-1@-1(synchronizing).osd e22 e22: 2 total, 2 up, 2 in
Oct  9 09:35:15 compute-1 ceph-mon[9795]: mon.compute-1@-1(synchronizing).osd e23 e23: 2 total, 2 up, 2 in
Oct  9 09:35:15 compute-1 ceph-mon[9795]: mon.compute-1@-1(synchronizing).osd e23 crush map has features 3314933000852226048, adjusting msgr requires
Oct  9 09:35:15 compute-1 ceph-mon[9795]: mon.compute-1@-1(synchronizing).osd e23 crush map has features 288514051259236352, adjusting msgr requires
Oct  9 09:35:15 compute-1 ceph-mon[9795]: mon.compute-1@-1(synchronizing).osd e23 crush map has features 288514051259236352, adjusting msgr requires
Oct  9 09:35:15 compute-1 ceph-mon[9795]: mon.compute-1@-1(synchronizing).osd e23 crush map has features 288514051259236352, adjusting msgr requires
Oct  9 09:35:15 compute-1 ceph-mon[9795]: from='osd.1 [v2:192.168.122.100:6802/3144091891,v1:192.168.122.100:6803/3144091891]' entity='osd.1' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]': finished
Oct  9 09:35:15 compute-1 ceph-mon[9795]: from='osd.1 [v2:192.168.122.100:6802/3144091891,v1:192.168.122.100:6803/3144091891]' entity='osd.1' cmd=[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]: dispatch
Oct  9 09:35:15 compute-1 ceph-mon[9795]: from='osd.0 [v2:192.168.122.101:6800/3679111284,v1:192.168.122.101:6801/3679111284]' entity='osd.0' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]: dispatch
Oct  9 09:35:15 compute-1 ceph-mon[9795]: from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:35:15 compute-1 ceph-mon[9795]: from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:35:15 compute-1 ceph-mon[9795]: from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:35:15 compute-1 ceph-mon[9795]: from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:35:15 compute-1 ceph-mon[9795]: from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"}]: dispatch
Oct  9 09:35:15 compute-1 ceph-mon[9795]: from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:35:15 compute-1 ceph-mon[9795]: from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:35:15 compute-1 ceph-mon[9795]: from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:35:15 compute-1 ceph-mon[9795]: from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:35:15 compute-1 ceph-mon[9795]: from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:35:15 compute-1 ceph-mon[9795]: from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:35:15 compute-1 ceph-mon[9795]: from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"}]: dispatch
Oct  9 09:35:15 compute-1 ceph-mon[9795]: Adjusting osd_memory_target on compute-1 to  5248M
Oct  9 09:35:15 compute-1 ceph-mon[9795]: Adjusting osd_memory_target on compute-0 to 128.5M
Oct  9 09:35:15 compute-1 ceph-mon[9795]: Unable to set osd_memory_target on compute-0 to 134814105: error parsing value: Value '134814105' is below minimum 939524096
Oct  9 09:35:15 compute-1 ceph-mon[9795]: from='osd.1 [v2:192.168.122.100:6802/3144091891,v1:192.168.122.100:6803/3144091891]' entity='osd.1' cmd='[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]': finished
Oct  9 09:35:15 compute-1 ceph-mon[9795]: from='osd.0 [v2:192.168.122.101:6800/3679111284,v1:192.168.122.101:6801/3679111284]' entity='osd.0' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]': finished
Oct  9 09:35:15 compute-1 ceph-mon[9795]: from='osd.0 [v2:192.168.122.101:6800/3679111284,v1:192.168.122.101:6801/3679111284]' entity='osd.0' cmd=[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=compute-1", "root=default"]}]: dispatch
Oct  9 09:35:15 compute-1 ceph-mon[9795]: from='osd.0 [v2:192.168.122.101:6800/3679111284,v1:192.168.122.101:6801/3679111284]' entity='osd.0' cmd='[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=compute-1", "root=default"]}]': finished
Oct  9 09:35:15 compute-1 ceph-mon[9795]: osd.1 [v2:192.168.122.100:6802/3144091891,v1:192.168.122.100:6803/3144091891] boot
Oct  9 09:35:15 compute-1 ceph-mon[9795]: OSD bench result of 25996.309425 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.1. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Oct  9 09:35:15 compute-1 ceph-mon[9795]: from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]: dispatch
Oct  9 09:35:15 compute-1 ceph-mon[9795]: OSD bench result of 11440.697696 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.0. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Oct  9 09:35:15 compute-1 ceph-mon[9795]: from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]': finished
Oct  9 09:35:15 compute-1 ceph-mon[9795]: osd.0 [v2:192.168.122.101:6800/3679111284,v1:192.168.122.101:6801/3679111284] boot
Oct  9 09:35:15 compute-1 ceph-mon[9795]: from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]: dispatch
Oct  9 09:35:15 compute-1 ceph-mon[9795]: from='client.? 192.168.122.100:0/3807816729' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Oct  9 09:35:15 compute-1 ceph-mon[9795]: Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Oct  9 09:35:15 compute-1 ceph-mon[9795]: from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]': finished
Oct  9 09:35:15 compute-1 ceph-mon[9795]: from='client.? 192.168.122.100:0/3807816729' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Oct  9 09:35:15 compute-1 ceph-mon[9795]: from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch
Oct  9 09:35:15 compute-1 ceph-mon[9795]: from='admin socket' entity='admin socket' cmd=smart args=[json]: finished
Oct  9 09:35:15 compute-1 ceph-mon[9795]: from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"}]: dispatch
Oct  9 09:35:15 compute-1 ceph-mon[9795]: from='client.? 192.168.122.100:0/1972273422' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Oct  9 09:35:15 compute-1 ceph-mon[9795]: from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"}]': finished
Oct  9 09:35:15 compute-1 ceph-mon[9795]: from='client.? 192.168.122.100:0/1972273422' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Oct  9 09:35:15 compute-1 ceph-mon[9795]: from='client.? 192.168.122.100:0/4109488378' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Oct  9 09:35:15 compute-1 ceph-mon[9795]: from='client.? 192.168.122.100:0/4109488378' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Oct  9 09:35:15 compute-1 ceph-mon[9795]: from='client.? 192.168.122.100:0/2120229509' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Oct  9 09:35:15 compute-1 ceph-mon[9795]: from='client.? 192.168.122.100:0/2120229509' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Oct  9 09:35:15 compute-1 ceph-mon[9795]: from='client.? 192.168.122.100:0/1793952825' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Oct  9 09:35:15 compute-1 ceph-mon[9795]: from='client.? 192.168.122.100:0/1793952825' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Oct  9 09:35:15 compute-1 ceph-mon[9795]: from='client.? 192.168.122.100:0/395083493' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Oct  9 09:35:15 compute-1 ceph-mon[9795]: from='client.? 192.168.122.100:0/395083493' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Oct  9 09:35:15 compute-1 ceph-mon[9795]: from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"}]: dispatch
Oct  9 09:35:15 compute-1 ceph-mon[9795]: from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:35:15 compute-1 ceph-mon[9795]: Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Oct  9 09:35:15 compute-1 ceph-mon[9795]: from='client.? 192.168.122.100:0/2631429048' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"}]: dispatch
Oct  9 09:35:15 compute-1 ceph-mon[9795]: from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"}]': finished
Oct  9 09:35:15 compute-1 ceph-mon[9795]: from='client.? 192.168.122.100:0/2631429048' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"}]': finished
Oct  9 09:35:15 compute-1 ceph-mon[9795]: from='client.? 192.168.122.100:0/992561200' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"}]: dispatch
Oct  9 09:35:15 compute-1 ceph-mon[9795]: from='client.? 192.168.122.100:0/992561200' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"}]': finished
Oct  9 09:35:15 compute-1 ceph-mon[9795]: from='client.? 192.168.122.100:0/1830712947' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"}]: dispatch
Oct  9 09:35:15 compute-1 ceph-mon[9795]: from='client.? 192.168.122.100:0/1830712947' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"}]': finished
Oct  9 09:35:15 compute-1 ceph-mon[9795]: from='client.? 192.168.122.100:0/3454543203' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "images", "app": "rbd"}]: dispatch
Oct  9 09:35:15 compute-1 ceph-mon[9795]: from='client.? 192.168.122.100:0/3454543203' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "images", "app": "rbd"}]': finished
Oct  9 09:35:15 compute-1 ceph-mon[9795]: from='client.? 192.168.122.100:0/602017510' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"}]: dispatch
Oct  9 09:35:15 compute-1 ceph-mon[9795]: from='client.? 192.168.122.100:0/602017510' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"}]': finished
Oct  9 09:35:15 compute-1 ceph-mon[9795]: Health check update: 2 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Oct  9 09:35:15 compute-1 ceph-mon[9795]: from='client.? 192.168.122.100:0/2594759833' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"}]: dispatch
Oct  9 09:35:15 compute-1 ceph-mon[9795]: from='client.? 192.168.122.100:0/2594759833' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"}]': finished
Oct  9 09:35:15 compute-1 ceph-mon[9795]: from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"}]: dispatch
Oct  9 09:35:15 compute-1 ceph-mon[9795]: Health check cleared: POOL_APP_NOT_ENABLED (was: 2 pool(s) do not have an application enabled)
Oct  9 09:35:15 compute-1 ceph-mon[9795]: from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"}]': finished
Oct  9 09:35:15 compute-1 ceph-mon[9795]: from='client.? 192.168.122.100:0/3549201441' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch
Oct  9 09:35:15 compute-1 ceph-mon[9795]: from='client.? 192.168.122.100:0/3549201441' entity='client.admin' cmd='[{"prefix": "config assimilate-conf"}]': finished
Oct  9 09:35:15 compute-1 ceph-mon[9795]: from='client.? 192.168.122.100:0/3070980083' entity='client.admin' 
Oct  9 09:35:15 compute-1 ceph-mon[9795]: Saving service rgw.rgw spec with placement compute-0;compute-1;compute-2
Oct  9 09:35:15 compute-1 ceph-mon[9795]: from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:35:15 compute-1 ceph-mon[9795]: Saving service ingress.rgw.default spec with placement count:2
Oct  9 09:35:15 compute-1 ceph-mon[9795]: from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:35:15 compute-1 ceph-mon[9795]: from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:35:15 compute-1 ceph-mon[9795]: from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:35:15 compute-1 ceph-mon[9795]: from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:35:15 compute-1 ceph-mon[9795]: from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:35:15 compute-1 ceph-mon[9795]: from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Oct  9 09:35:15 compute-1 ceph-mon[9795]: from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  9 09:35:15 compute-1 ceph-mon[9795]: Updating compute-2:/etc/ceph/ceph.conf
Oct  9 09:35:15 compute-1 ceph-mon[9795]: Updating compute-2:/var/lib/ceph/286f8bf0-da72-5823-9a4e-ac4457d9e609/config/ceph.conf
Oct  9 09:35:15 compute-1 ceph-mon[9795]: Updating compute-2:/etc/ceph/ceph.client.admin.keyring
Oct  9 09:35:15 compute-1 ceph-mon[9795]: Saving service node-exporter spec with placement *
Oct  9 09:35:15 compute-1 ceph-mon[9795]: from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:35:15 compute-1 ceph-mon[9795]: Saving service grafana spec with placement compute-0;count:1
Oct  9 09:35:15 compute-1 ceph-mon[9795]: from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:35:15 compute-1 ceph-mon[9795]: Saving service prometheus spec with placement compute-0;count:1
Oct  9 09:35:15 compute-1 ceph-mon[9795]: from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:35:15 compute-1 ceph-mon[9795]: Saving service alertmanager spec with placement compute-0;count:1
Oct  9 09:35:15 compute-1 ceph-mon[9795]: from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:35:15 compute-1 ceph-mon[9795]: Updating compute-2:/var/lib/ceph/286f8bf0-da72-5823-9a4e-ac4457d9e609/config/ceph.client.admin.keyring
Oct  9 09:35:15 compute-1 ceph-mon[9795]: from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:35:15 compute-1 ceph-mon[9795]: from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:35:15 compute-1 ceph-mon[9795]: from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:35:15 compute-1 ceph-mon[9795]: from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Oct  9 09:35:15 compute-1 ceph-mon[9795]: Deploying daemon mon.compute-2 on compute-2
Oct  9 09:35:15 compute-1 ceph-mon[9795]: from='client.? 192.168.122.100:0/2266537364' entity='client.admin' 
Oct  9 09:35:15 compute-1 ceph-mon[9795]: from='client.? 192.168.122.100:0/3921635866' entity='client.admin' 
Oct  9 09:35:15 compute-1 ceph-mon[9795]: Health check cleared: CEPHADM_APPLY_SPEC_FAIL (was: Failed to apply 2 service(s): mon,mgr)
Oct  9 09:35:15 compute-1 ceph-mon[9795]: Cluster is now healthy
Oct  9 09:35:15 compute-1 ceph-mon[9795]: from='client.? 192.168.122.100:0/4272592449' entity='client.admin' 
Oct  9 09:35:15 compute-1 ceph-mon[9795]: mon.compute-1@-1(synchronizing).paxosservice(auth 1..7) refresh upgraded, format 0 -> 3
Oct  9 09:35:15 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 2.f deep-scrub ok
Oct  9 09:35:16 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 2.11 deep-scrub starts
Oct  9 09:35:16 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 2.11 deep-scrub ok
Oct  9 09:35:17 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 2.12 deep-scrub starts
Oct  9 09:35:17 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 2.12 deep-scrub ok
Oct  9 09:35:18 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 2.14 scrub starts
Oct  9 09:35:18 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 2.14 scrub ok
Oct  9 09:35:19 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 2.16 scrub starts
Oct  9 09:35:19 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 2.16 scrub ok
Oct  9 09:35:20 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 2.17 scrub starts
Oct  9 09:35:20 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 2.17 scrub ok
Oct  9 09:35:21 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 2.18 scrub starts
Oct  9 09:35:21 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 2.18 scrub ok
Oct  9 09:35:21 compute-1 ceph-mon[9795]: mon.compute-1@-1(probing) e3  my rank is now 2 (was -1)
Oct  9 09:35:21 compute-1 ceph-mon[9795]: log_channel(cluster) log [INF] : mon.compute-1 calling monitor election
Oct  9 09:35:21 compute-1 ceph-mon[9795]: paxos.2).electionLogic(0) init, first boot, initializing epoch at 1 
Oct  9 09:35:21 compute-1 ceph-mon[9795]: mon.compute-1@2(electing) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Oct  9 09:35:22 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 2.1a scrub starts
Oct  9 09:35:22 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 2.1a scrub ok
Oct  9 09:35:24 compute-1 ceph-mon[9795]: mon.compute-1@2(electing) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Oct  9 09:35:24 compute-1 ceph-mon[9795]: mon.compute-1@2(peon) e3 _apply_compatset_features enabling new quorum features: compat={},rocompat={},incompat={4=support erasure code pools,5=new-style osdmap encoding,6=support isa/lrc erasure code,7=support shec erasure code}
Oct  9 09:35:24 compute-1 ceph-mon[9795]: mon.compute-1@2(peon) e3 _apply_compatset_features enabling new quorum features: compat={},rocompat={},incompat={8=support monmap features,9=luminous ondisk layout,10=mimic ondisk layout,11=nautilus ondisk layout,12=octopus ondisk layout,13=pacific ondisk layout,14=quincy ondisk layout,15=reef ondisk layout,16=squid ondisk layout}
Oct  9 09:35:24 compute-1 ceph-mon[9795]: Deploying daemon mon.compute-1 on compute-1
Oct  9 09:35:24 compute-1 ceph-mon[9795]: mon.compute-0 calling monitor election
Oct  9 09:35:24 compute-1 ceph-mon[9795]: mon.compute-2 calling monitor election
Oct  9 09:35:24 compute-1 ceph-mon[9795]: mon.compute-0 is new leader, mons compute-0,compute-2 in quorum (ranks 0,1)
Oct  9 09:35:24 compute-1 ceph-mon[9795]: overall HEALTH_OK
Oct  9 09:35:24 compute-1 ceph-mon[9795]: from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:35:24 compute-1 ceph-mon[9795]: from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:35:24 compute-1 ceph-mon[9795]: from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:35:24 compute-1 ceph-mon[9795]: from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:35:24 compute-1 ceph-mon[9795]: from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-2.takdnm", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Oct  9 09:35:24 compute-1 ceph-mon[9795]: from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "auth get-or-create", "entity": "mgr.compute-2.takdnm", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]': finished
Oct  9 09:35:24 compute-1 ceph-mon[9795]: mon.compute-1@2(peon) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Oct  9 09:35:24 compute-1 ceph-mon[9795]: mgrc update_daemon_metadata mon.compute-1 metadata {addrs=[v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0],arch=x86_64,ceph_release=squid,ceph_version=ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable),ceph_version_short=19.2.3,compression_algorithms=none, snappy, zlib, zstd, lz4,container_hostname=compute-1,container_image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec,cpu=AMD EPYC 7763 64-Core Processor,device_ids=,device_paths=vda=/dev/disk/by-path/pci-0000:04:00.0,devices=vda,distro=centos,distro_description=CentOS Stream 9,distro_version=9,hostname=compute-1,kernel_description=#1 SMP PREEMPT_DYNAMIC Fri Sep 26 01:13:23 UTC 2025,kernel_version=5.14.0-620.el9.x86_64,mem_swap_kb=1048572,mem_total_kb=7865152,os=Linux}
Oct  9 09:35:24 compute-1 ceph-mon[9795]: mon.compute-0 calling monitor election
Oct  9 09:35:24 compute-1 ceph-mon[9795]: mon.compute-2 calling monitor election
Oct  9 09:35:24 compute-1 ceph-mon[9795]: mon.compute-1 calling monitor election
Oct  9 09:35:24 compute-1 ceph-mon[9795]: mon.compute-0 is new leader, mons compute-0,compute-2,compute-1 in quorum (ranks 0,1,2)
Oct  9 09:35:24 compute-1 ceph-mon[9795]: overall HEALTH_OK
Oct  9 09:35:24 compute-1 ceph-mon[9795]: from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:35:24 compute-1 ceph-mon[9795]: from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:35:24 compute-1 ceph-mon[9795]: from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:35:24 compute-1 ceph-mon[9795]: from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:35:24 compute-1 ceph-mon[9795]: from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-1.etokpp", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Oct  9 09:35:24 compute-1 ceph-mon[9795]: from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "auth get-or-create", "entity": "mgr.compute-1.etokpp", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]': finished
Oct  9 09:35:25 compute-1 podman[9918]: 2025-10-09 09:35:25.058670306 +0000 UTC m=+0.027135038 container create 54909601e7864ba182099af64bb464a40c9bac7ce0567541100cc5bb40b25d96 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=crazy_wright, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  9 09:35:25 compute-1 systemd[1]: Started libpod-conmon-54909601e7864ba182099af64bb464a40c9bac7ce0567541100cc5bb40b25d96.scope.
Oct  9 09:35:25 compute-1 ceph-mon[9795]: mon.compute-1@2(peon) e3 handle_auth_request failed to assign global_id
Oct  9 09:35:25 compute-1 systemd[1]: Started libcrun container.
Oct  9 09:35:25 compute-1 podman[9918]: 2025-10-09 09:35:25.118532702 +0000 UTC m=+0.086997424 container init 54909601e7864ba182099af64bb464a40c9bac7ce0567541100cc5bb40b25d96 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=crazy_wright, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Oct  9 09:35:25 compute-1 podman[9918]: 2025-10-09 09:35:25.124459005 +0000 UTC m=+0.092923728 container start 54909601e7864ba182099af64bb464a40c9bac7ce0567541100cc5bb40b25d96 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=crazy_wright, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=squid, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Oct  9 09:35:25 compute-1 podman[9918]: 2025-10-09 09:35:25.125494367 +0000 UTC m=+0.093959089 container attach 54909601e7864ba182099af64bb464a40c9bac7ce0567541100cc5bb40b25d96 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=crazy_wright, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  9 09:35:25 compute-1 crazy_wright[9932]: 167 167
Oct  9 09:35:25 compute-1 systemd[1]: libpod-54909601e7864ba182099af64bb464a40c9bac7ce0567541100cc5bb40b25d96.scope: Deactivated successfully.
Oct  9 09:35:25 compute-1 conmon[9932]: conmon 54909601e7864ba18209 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-54909601e7864ba182099af64bb464a40c9bac7ce0567541100cc5bb40b25d96.scope/container/memory.events
Oct  9 09:35:25 compute-1 podman[9918]: 2025-10-09 09:35:25.047216512 +0000 UTC m=+0.015681254 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  9 09:35:25 compute-1 podman[9937]: 2025-10-09 09:35:25.161531758 +0000 UTC m=+0.020449274 container died 54909601e7864ba182099af64bb464a40c9bac7ce0567541100cc5bb40b25d96 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=crazy_wright, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Oct  9 09:35:25 compute-1 systemd[1]: var-lib-containers-storage-overlay-c9fc58fbf18e0c9da5c2f48c9bebdb4ab964b8b24621bdb98b7776ba9001661c-merged.mount: Deactivated successfully.
Oct  9 09:35:25 compute-1 podman[9937]: 2025-10-09 09:35:25.180144968 +0000 UTC m=+0.039062465 container remove 54909601e7864ba182099af64bb464a40c9bac7ce0567541100cc5bb40b25d96 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=crazy_wright, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  9 09:35:25 compute-1 systemd[1]: libpod-conmon-54909601e7864ba182099af64bb464a40c9bac7ce0567541100cc5bb40b25d96.scope: Deactivated successfully.
Oct  9 09:35:25 compute-1 systemd[1]: Reloading.
Oct  9 09:35:25 compute-1 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  9 09:35:25 compute-1 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  9 09:35:25 compute-1 systemd[1]: Reloading.
Oct  9 09:35:25 compute-1 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  9 09:35:25 compute-1 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  9 09:35:25 compute-1 python3[10012]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps -a -f 'name=ceph-?(.*)-mgr.*' --format \{\{\.Command\}\} --no-trunc#012 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  9 09:35:25 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e23 _set_new_cache_sizes cache_size:1019937216 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  9 09:35:25 compute-1 systemd[1]: Starting Ceph mgr.compute-1.etokpp for 286f8bf0-da72-5823-9a4e-ac4457d9e609...
Oct  9 09:35:25 compute-1 podman[10100]: 2025-10-09 09:35:25.850762809 +0000 UTC m=+0.028731478 container create d27f3e957991263543395e3774a5a0d39a40a8e12d215b4fd0d84b8e79139206 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-1-etokpp, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=squid, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1)
Oct  9 09:35:25 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5a7ddd87911cf4d42632d5a95b4fb7601b3eb4efb6b31c90ea451bf643e3236c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  9 09:35:25 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5a7ddd87911cf4d42632d5a95b4fb7601b3eb4efb6b31c90ea451bf643e3236c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  9 09:35:25 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5a7ddd87911cf4d42632d5a95b4fb7601b3eb4efb6b31c90ea451bf643e3236c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  9 09:35:25 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5a7ddd87911cf4d42632d5a95b4fb7601b3eb4efb6b31c90ea451bf643e3236c/merged/var/lib/ceph/mgr/ceph-compute-1.etokpp supports timestamps until 2038 (0x7fffffff)
Oct  9 09:35:25 compute-1 podman[10100]: 2025-10-09 09:35:25.901122441 +0000 UTC m=+0.079091130 container init d27f3e957991263543395e3774a5a0d39a40a8e12d215b4fd0d84b8e79139206 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-1-etokpp, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  9 09:35:25 compute-1 podman[10100]: 2025-10-09 09:35:25.905621383 +0000 UTC m=+0.083590052 container start d27f3e957991263543395e3774a5a0d39a40a8e12d215b4fd0d84b8e79139206 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-1-etokpp, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  9 09:35:25 compute-1 bash[10100]: d27f3e957991263543395e3774a5a0d39a40a8e12d215b4fd0d84b8e79139206
Oct  9 09:35:25 compute-1 podman[10100]: 2025-10-09 09:35:25.838722099 +0000 UTC m=+0.016690788 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  9 09:35:25 compute-1 systemd[1]: Started Ceph mgr.compute-1.etokpp for 286f8bf0-da72-5823-9a4e-ac4457d9e609.
Oct  9 09:35:25 compute-1 ceph-mgr[10116]: set uid:gid to 167:167 (ceph:ceph)
Oct  9 09:35:25 compute-1 ceph-mgr[10116]: ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable), process ceph-mgr, pid 2
Oct  9 09:35:25 compute-1 ceph-mgr[10116]: pidfile_write: ignore empty --pid-file
Oct  9 09:35:25 compute-1 ceph-mgr[10116]: mgr[py] Loading python module 'alerts'
Oct  9 09:35:26 compute-1 ceph-mgr[10116]: mgr[py] Module alerts has missing NOTIFY_TYPES member
Oct  9 09:35:26 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-1-etokpp[10112]: 2025-10-09T09:35:26.054+0000 7f5bb9ecc140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member
Oct  9 09:35:26 compute-1 ceph-mgr[10116]: mgr[py] Loading python module 'balancer'
Oct  9 09:35:26 compute-1 ceph-mgr[10116]: mgr[py] Module balancer has missing NOTIFY_TYPES member
Oct  9 09:35:26 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-1-etokpp[10112]: 2025-10-09T09:35:26.125+0000 7f5bb9ecc140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member
Oct  9 09:35:26 compute-1 ceph-mgr[10116]: mgr[py] Loading python module 'cephadm'
Oct  9 09:35:26 compute-1 ceph-mon[9795]: Deploying daemon mgr.compute-1.etokpp on compute-1
Oct  9 09:35:26 compute-1 ceph-mon[9795]: from='client.? 192.168.122.100:0/3098806995' entity='client.admin' 
Oct  9 09:35:26 compute-1 ceph-mon[9795]: from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:35:26 compute-1 ceph-mon[9795]: from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:35:26 compute-1 ceph-mon[9795]: from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:35:26 compute-1 ceph-mon[9795]: from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:35:26 compute-1 ceph-mon[9795]: from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-2", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Oct  9 09:35:26 compute-1 ceph-mon[9795]: from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "auth get-or-create", "entity": "client.crash.compute-2", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]': finished
Oct  9 09:35:26 compute-1 ceph-mgr[10116]: mgr[py] Loading python module 'crash'
Oct  9 09:35:26 compute-1 ceph-mgr[10116]: mgr[py] Module crash has missing NOTIFY_TYPES member
Oct  9 09:35:26 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-1-etokpp[10112]: 2025-10-09T09:35:26.817+0000 7f5bb9ecc140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member
Oct  9 09:35:26 compute-1 ceph-mgr[10116]: mgr[py] Loading python module 'dashboard'
Oct  9 09:35:27 compute-1 ceph-mgr[10116]: mgr[py] Loading python module 'devicehealth'
Oct  9 09:35:27 compute-1 ceph-mon[9795]: Deploying daemon crash.compute-2 on compute-2
Oct  9 09:35:27 compute-1 ceph-mon[9795]: from='client.? 192.168.122.100:0/2874472706' entity='client.admin' 
Oct  9 09:35:27 compute-1 ceph-mon[9795]: from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:35:27 compute-1 ceph-mon[9795]: from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:35:27 compute-1 ceph-mon[9795]: from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:35:27 compute-1 ceph-mon[9795]: from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:35:27 compute-1 ceph-mon[9795]: from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  9 09:35:27 compute-1 ceph-mon[9795]: from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  9 09:35:27 compute-1 ceph-mgr[10116]: mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Oct  9 09:35:27 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-1-etokpp[10112]: 2025-10-09T09:35:27.366+0000 7f5bb9ecc140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Oct  9 09:35:27 compute-1 ceph-mgr[10116]: mgr[py] Loading python module 'diskprediction_local'
Oct  9 09:35:27 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-1-etokpp[10112]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
Oct  9 09:35:27 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-1-etokpp[10112]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
Oct  9 09:35:27 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-1-etokpp[10112]:  from numpy import show_config as show_numpy_config
Oct  9 09:35:27 compute-1 ceph-mgr[10116]: mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Oct  9 09:35:27 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-1-etokpp[10112]: 2025-10-09T09:35:27.512+0000 7f5bb9ecc140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Oct  9 09:35:27 compute-1 ceph-mgr[10116]: mgr[py] Loading python module 'influx'
Oct  9 09:35:27 compute-1 ceph-mgr[10116]: mgr[py] Module influx has missing NOTIFY_TYPES member
Oct  9 09:35:27 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-1-etokpp[10112]: 2025-10-09T09:35:27.576+0000 7f5bb9ecc140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member
Oct  9 09:35:27 compute-1 ceph-mgr[10116]: mgr[py] Loading python module 'insights'
Oct  9 09:35:27 compute-1 ceph-mgr[10116]: mgr[py] Loading python module 'iostat'
Oct  9 09:35:27 compute-1 ceph-mgr[10116]: mgr[py] Module iostat has missing NOTIFY_TYPES member
Oct  9 09:35:27 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-1-etokpp[10112]: 2025-10-09T09:35:27.697+0000 7f5bb9ecc140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member
Oct  9 09:35:27 compute-1 ceph-mgr[10116]: mgr[py] Loading python module 'k8sevents'
Oct  9 09:35:28 compute-1 ceph-mgr[10116]: mgr[py] Loading python module 'localpool'
Oct  9 09:35:28 compute-1 ceph-mgr[10116]: mgr[py] Loading python module 'mds_autoscaler'
Oct  9 09:35:28 compute-1 ceph-mgr[10116]: mgr[py] Loading python module 'mirroring'
Oct  9 09:35:28 compute-1 ceph-mgr[10116]: mgr[py] Loading python module 'nfs'
Oct  9 09:35:28 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e24 e24: 3 total, 2 up, 3 in
Oct  9 09:35:28 compute-1 ceph-mon[9795]: from='client.? 192.168.122.100:0/3618703096' entity='client.admin' 
Oct  9 09:35:28 compute-1 ceph-mon[9795]: from='client.? 192.168.122.100:0/1996078233' entity='client.admin' cmd=[{"prefix": "mgr module disable", "module": "dashboard"}]: dispatch
Oct  9 09:35:28 compute-1 ceph-mon[9795]: from='client.? 192.168.122.102:0/2413203245' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "0493bfe4-e28c-49f6-8185-a07f1e80a32f"}]: dispatch
Oct  9 09:35:28 compute-1 ceph-mon[9795]: from='client.? 192.168.122.102:0/2413203245' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "0493bfe4-e28c-49f6-8185-a07f1e80a32f"}]': finished
Oct  9 09:35:28 compute-1 ceph-mgr[10116]: mgr[py] Module nfs has missing NOTIFY_TYPES member
Oct  9 09:35:28 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-1-etokpp[10112]: 2025-10-09T09:35:28.578+0000 7f5bb9ecc140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member
Oct  9 09:35:28 compute-1 ceph-mgr[10116]: mgr[py] Loading python module 'orchestrator'
Oct  9 09:35:28 compute-1 ceph-mgr[10116]: mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Oct  9 09:35:28 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-1-etokpp[10112]: 2025-10-09T09:35:28.768+0000 7f5bb9ecc140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Oct  9 09:35:28 compute-1 ceph-mgr[10116]: mgr[py] Loading python module 'osd_perf_query'
Oct  9 09:35:28 compute-1 ceph-mgr[10116]: mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Oct  9 09:35:28 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-1-etokpp[10112]: 2025-10-09T09:35:28.835+0000 7f5bb9ecc140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Oct  9 09:35:28 compute-1 ceph-mgr[10116]: mgr[py] Loading python module 'osd_support'
Oct  9 09:35:28 compute-1 ceph-mgr[10116]: mgr[py] Module osd_support has missing NOTIFY_TYPES member
Oct  9 09:35:28 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-1-etokpp[10112]: 2025-10-09T09:35:28.893+0000 7f5bb9ecc140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member
Oct  9 09:35:28 compute-1 ceph-mgr[10116]: mgr[py] Loading python module 'pg_autoscaler'
Oct  9 09:35:28 compute-1 ceph-mgr[10116]: mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Oct  9 09:35:28 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-1-etokpp[10112]: 2025-10-09T09:35:28.963+0000 7f5bb9ecc140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Oct  9 09:35:28 compute-1 ceph-mgr[10116]: mgr[py] Loading python module 'progress'
Oct  9 09:35:29 compute-1 ceph-mgr[10116]: mgr[py] Module progress has missing NOTIFY_TYPES member
Oct  9 09:35:29 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-1-etokpp[10112]: 2025-10-09T09:35:29.025+0000 7f5bb9ecc140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member
Oct  9 09:35:29 compute-1 ceph-mgr[10116]: mgr[py] Loading python module 'prometheus'
Oct  9 09:35:29 compute-1 ceph-mgr[10116]: mgr[py] Module prometheus has missing NOTIFY_TYPES member
Oct  9 09:35:29 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-1-etokpp[10112]: 2025-10-09T09:35:29.324+0000 7f5bb9ecc140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member
Oct  9 09:35:29 compute-1 ceph-mgr[10116]: mgr[py] Loading python module 'rbd_support'
Oct  9 09:35:29 compute-1 ceph-mgr[10116]: mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Oct  9 09:35:29 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-1-etokpp[10112]: 2025-10-09T09:35:29.409+0000 7f5bb9ecc140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Oct  9 09:35:29 compute-1 ceph-mgr[10116]: mgr[py] Loading python module 'restful'
Oct  9 09:35:29 compute-1 ceph-mon[9795]: from='client.? 192.168.122.100:0/1996078233' entity='client.admin' cmd='[{"prefix": "mgr module disable", "module": "dashboard"}]': finished
Oct  9 09:35:29 compute-1 ceph-mon[9795]: from='client.? 192.168.122.100:0/70415478' entity='client.admin' cmd=[{"prefix": "mgr module enable", "module": "dashboard"}]: dispatch
Oct  9 09:35:29 compute-1 ceph-mgr[10116]: mgr[py] Loading python module 'rgw'
Oct  9 09:35:29 compute-1 systemd[1]: session-7.scope: Deactivated successfully.
Oct  9 09:35:29 compute-1 systemd[1]: session-16.scope: Deactivated successfully.
Oct  9 09:35:29 compute-1 systemd[1]: session-16.scope: Consumed 44.024s CPU time.
Oct  9 09:35:29 compute-1 systemd-logind[798]: Session 7 logged out. Waiting for processes to exit.
Oct  9 09:35:29 compute-1 systemd[1]: session-10.scope: Deactivated successfully.
Oct  9 09:35:29 compute-1 systemd-logind[798]: Session 16 logged out. Waiting for processes to exit.
Oct  9 09:35:29 compute-1 systemd-logind[798]: Session 10 logged out. Waiting for processes to exit.
Oct  9 09:35:29 compute-1 systemd[1]: session-15.scope: Deactivated successfully.
Oct  9 09:35:29 compute-1 systemd[1]: session-14.scope: Deactivated successfully.
Oct  9 09:35:29 compute-1 systemd-logind[798]: Session 15 logged out. Waiting for processes to exit.
Oct  9 09:35:29 compute-1 systemd-logind[798]: Session 14 logged out. Waiting for processes to exit.
Oct  9 09:35:29 compute-1 systemd[1]: session-11.scope: Deactivated successfully.
Oct  9 09:35:29 compute-1 systemd-logind[798]: Session 11 logged out. Waiting for processes to exit.
Oct  9 09:35:29 compute-1 systemd[1]: session-8.scope: Deactivated successfully.
Oct  9 09:35:29 compute-1 systemd[1]: session-9.scope: Deactivated successfully.
Oct  9 09:35:29 compute-1 systemd-logind[798]: Removed session 7.
Oct  9 09:35:29 compute-1 systemd-logind[798]: Session 8 logged out. Waiting for processes to exit.
Oct  9 09:35:29 compute-1 systemd-logind[798]: Session 9 logged out. Waiting for processes to exit.
Oct  9 09:35:29 compute-1 systemd[1]: session-6.scope: Deactivated successfully.
Oct  9 09:35:29 compute-1 systemd[1]: session-4.scope: Deactivated successfully.
Oct  9 09:35:29 compute-1 systemd-logind[798]: Session 6 logged out. Waiting for processes to exit.
Oct  9 09:35:29 compute-1 systemd-logind[798]: Session 4 logged out. Waiting for processes to exit.
Oct  9 09:35:29 compute-1 systemd[1]: session-12.scope: Deactivated successfully.
Oct  9 09:35:29 compute-1 systemd-logind[798]: Session 12 logged out. Waiting for processes to exit.
Oct  9 09:35:29 compute-1 systemd-logind[798]: Removed session 16.
Oct  9 09:35:29 compute-1 systemd-logind[798]: Removed session 10.
Oct  9 09:35:29 compute-1 systemd-logind[798]: Removed session 15.
Oct  9 09:35:29 compute-1 systemd[1]: session-13.scope: Deactivated successfully.
Oct  9 09:35:29 compute-1 systemd-logind[798]: Removed session 14.
Oct  9 09:35:29 compute-1 systemd-logind[798]: Session 13 logged out. Waiting for processes to exit.
Oct  9 09:35:29 compute-1 systemd-logind[798]: Removed session 11.
Oct  9 09:35:29 compute-1 systemd-logind[798]: Removed session 8.
Oct  9 09:35:29 compute-1 systemd-logind[798]: Removed session 9.
Oct  9 09:35:29 compute-1 systemd-logind[798]: Removed session 6.
Oct  9 09:35:29 compute-1 systemd-logind[798]: Removed session 4.
Oct  9 09:35:29 compute-1 systemd-logind[798]: Removed session 12.
Oct  9 09:35:29 compute-1 systemd-logind[798]: Removed session 13.
Oct  9 09:35:29 compute-1 ceph-mgr[10116]: mgr[py] Module rgw has missing NOTIFY_TYPES member
Oct  9 09:35:29 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-1-etokpp[10112]: 2025-10-09T09:35:29.812+0000 7f5bb9ecc140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member
Oct  9 09:35:29 compute-1 ceph-mgr[10116]: mgr[py] Loading python module 'rook'
Oct  9 09:35:30 compute-1 ceph-mgr[10116]: mgr[py] Module rook has missing NOTIFY_TYPES member
Oct  9 09:35:30 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-1-etokpp[10112]: 2025-10-09T09:35:30.304+0000 7f5bb9ecc140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member
Oct  9 09:35:30 compute-1 ceph-mgr[10116]: mgr[py] Loading python module 'selftest'
Oct  9 09:35:30 compute-1 ceph-mgr[10116]: mgr[py] Module selftest has missing NOTIFY_TYPES member
Oct  9 09:35:30 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-1-etokpp[10112]: 2025-10-09T09:35:30.368+0000 7f5bb9ecc140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member
Oct  9 09:35:30 compute-1 ceph-mgr[10116]: mgr[py] Loading python module 'snap_schedule'
Oct  9 09:35:30 compute-1 ceph-mgr[10116]: mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Oct  9 09:35:30 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-1-etokpp[10112]: 2025-10-09T09:35:30.438+0000 7f5bb9ecc140 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Oct  9 09:35:30 compute-1 ceph-mgr[10116]: mgr[py] Loading python module 'stats'
Oct  9 09:35:30 compute-1 ceph-mgr[10116]: mgr[py] Loading python module 'status'
Oct  9 09:35:30 compute-1 ceph-mgr[10116]: mgr[py] Module status has missing NOTIFY_TYPES member
Oct  9 09:35:30 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-1-etokpp[10112]: 2025-10-09T09:35:30.570+0000 7f5bb9ecc140 -1 mgr[py] Module status has missing NOTIFY_TYPES member
Oct  9 09:35:30 compute-1 ceph-mgr[10116]: mgr[py] Loading python module 'telegraf'
Oct  9 09:35:30 compute-1 ceph-mon[9795]: from='client.? 192.168.122.100:0/70415478' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "dashboard"}]': finished
Oct  9 09:35:30 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e24 _set_new_cache_sizes cache_size:1020053218 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  9 09:35:30 compute-1 ceph-mgr[10116]: mgr[py] Module telegraf has missing NOTIFY_TYPES member
Oct  9 09:35:30 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-1-etokpp[10112]: 2025-10-09T09:35:30.633+0000 7f5bb9ecc140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member
Oct  9 09:35:30 compute-1 ceph-mgr[10116]: mgr[py] Loading python module 'telemetry'
Oct  9 09:35:30 compute-1 ceph-mgr[10116]: mgr[py] Module telemetry has missing NOTIFY_TYPES member
Oct  9 09:35:30 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-1-etokpp[10112]: 2025-10-09T09:35:30.768+0000 7f5bb9ecc140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member
Oct  9 09:35:30 compute-1 ceph-mgr[10116]: mgr[py] Loading python module 'test_orchestrator'
Oct  9 09:35:30 compute-1 ceph-mgr[10116]: mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Oct  9 09:35:30 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-1-etokpp[10112]: 2025-10-09T09:35:30.964+0000 7f5bb9ecc140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Oct  9 09:35:30 compute-1 ceph-mgr[10116]: mgr[py] Loading python module 'volumes'
Oct  9 09:35:31 compute-1 ceph-mgr[10116]: mgr[py] Module volumes has missing NOTIFY_TYPES member
Oct  9 09:35:31 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-1-etokpp[10112]: 2025-10-09T09:35:31.196+0000 7f5bb9ecc140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member
Oct  9 09:35:31 compute-1 ceph-mgr[10116]: mgr[py] Loading python module 'zabbix'
Oct  9 09:35:31 compute-1 ceph-mgr[10116]: mgr[py] Module zabbix has missing NOTIFY_TYPES member
Oct  9 09:35:31 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-1-etokpp[10112]: 2025-10-09T09:35:31.258+0000 7f5bb9ecc140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member
Oct  9 09:35:31 compute-1 ceph-mgr[10116]: ms_deliver_dispatch: unhandled message 0x560697008d00 mon_map magic: 0 from mon.2 v2:192.168.122.101:3300/0
Oct  9 09:35:31 compute-1 ceph-mgr[10116]: mgr handle_mgr_map respawning because set of enabled modules changed!
Oct  9 09:35:31 compute-1 ceph-mgr[10116]: mgr respawn  e: '/usr/bin/ceph-mgr'
Oct  9 09:35:31 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-1-etokpp[10112]: ignoring --setuser ceph since I am not root
Oct  9 09:35:31 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-1-etokpp[10112]: ignoring --setgroup ceph since I am not root
Oct  9 09:35:31 compute-1 ceph-mgr[10116]: ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable), process ceph-mgr, pid 2
Oct  9 09:35:31 compute-1 ceph-mgr[10116]: pidfile_write: ignore empty --pid-file
Oct  9 09:35:31 compute-1 ceph-mgr[10116]: mgr[py] Loading python module 'alerts'
Oct  9 09:35:31 compute-1 ceph-mgr[10116]: mgr[py] Module alerts has missing NOTIFY_TYPES member
Oct  9 09:35:31 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-1-etokpp[10112]: 2025-10-09T09:35:31.430+0000 7f6ea8e3c140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member
Oct  9 09:35:31 compute-1 ceph-mgr[10116]: mgr[py] Loading python module 'balancer'
Oct  9 09:35:31 compute-1 ceph-mgr[10116]: mgr[py] Module balancer has missing NOTIFY_TYPES member
Oct  9 09:35:31 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-1-etokpp[10112]: 2025-10-09T09:35:31.501+0000 7f6ea8e3c140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member
Oct  9 09:35:31 compute-1 ceph-mgr[10116]: mgr[py] Loading python module 'cephadm'
Oct  9 09:35:32 compute-1 ceph-mgr[10116]: mgr[py] Loading python module 'crash'
Oct  9 09:35:32 compute-1 ceph-mgr[10116]: mgr[py] Module crash has missing NOTIFY_TYPES member
Oct  9 09:35:32 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-1-etokpp[10112]: 2025-10-09T09:35:32.175+0000 7f6ea8e3c140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member
Oct  9 09:35:32 compute-1 ceph-mgr[10116]: mgr[py] Loading python module 'dashboard'
Oct  9 09:35:32 compute-1 ceph-mgr[10116]: mgr[py] Loading python module 'devicehealth'
Oct  9 09:35:32 compute-1 ceph-mgr[10116]: mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Oct  9 09:35:32 compute-1 ceph-mgr[10116]: mgr[py] Loading python module 'diskprediction_local'
Oct  9 09:35:32 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-1-etokpp[10112]: 2025-10-09T09:35:32.720+0000 7f6ea8e3c140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Oct  9 09:35:32 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-1-etokpp[10112]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
Oct  9 09:35:32 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-1-etokpp[10112]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
Oct  9 09:35:32 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-1-etokpp[10112]:  from numpy import show_config as show_numpy_config
Oct  9 09:35:32 compute-1 ceph-mgr[10116]: mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Oct  9 09:35:32 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-1-etokpp[10112]: 2025-10-09T09:35:32.861+0000 7f6ea8e3c140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Oct  9 09:35:32 compute-1 ceph-mgr[10116]: mgr[py] Loading python module 'influx'
Oct  9 09:35:32 compute-1 ceph-mgr[10116]: mgr[py] Module influx has missing NOTIFY_TYPES member
Oct  9 09:35:32 compute-1 ceph-mgr[10116]: mgr[py] Loading python module 'insights'
Oct  9 09:35:32 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-1-etokpp[10112]: 2025-10-09T09:35:32.923+0000 7f6ea8e3c140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member
Oct  9 09:35:32 compute-1 ceph-mgr[10116]: mgr[py] Loading python module 'iostat'
Oct  9 09:35:33 compute-1 ceph-mgr[10116]: mgr[py] Module iostat has missing NOTIFY_TYPES member
Oct  9 09:35:33 compute-1 ceph-mgr[10116]: mgr[py] Loading python module 'k8sevents'
Oct  9 09:35:33 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-1-etokpp[10112]: 2025-10-09T09:35:33.043+0000 7f6ea8e3c140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member
Oct  9 09:35:33 compute-1 ceph-mgr[10116]: mgr[py] Loading python module 'localpool'
Oct  9 09:35:33 compute-1 ceph-mgr[10116]: mgr[py] Loading python module 'mds_autoscaler'
Oct  9 09:35:33 compute-1 ceph-mgr[10116]: mgr[py] Loading python module 'mirroring'
Oct  9 09:35:33 compute-1 ceph-mgr[10116]: mgr[py] Loading python module 'nfs'
Oct  9 09:35:33 compute-1 ceph-mgr[10116]: mgr[py] Module nfs has missing NOTIFY_TYPES member
Oct  9 09:35:33 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-1-etokpp[10112]: 2025-10-09T09:35:33.947+0000 7f6ea8e3c140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member
Oct  9 09:35:33 compute-1 ceph-mgr[10116]: mgr[py] Loading python module 'orchestrator'
Oct  9 09:35:34 compute-1 ceph-mgr[10116]: mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Oct  9 09:35:34 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-1-etokpp[10112]: 2025-10-09T09:35:34.135+0000 7f6ea8e3c140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Oct  9 09:35:34 compute-1 ceph-mgr[10116]: mgr[py] Loading python module 'osd_perf_query'
Oct  9 09:35:34 compute-1 ceph-mgr[10116]: mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Oct  9 09:35:34 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-1-etokpp[10112]: 2025-10-09T09:35:34.201+0000 7f6ea8e3c140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Oct  9 09:35:34 compute-1 ceph-mgr[10116]: mgr[py] Loading python module 'osd_support'
Oct  9 09:35:34 compute-1 ceph-mgr[10116]: mgr[py] Module osd_support has missing NOTIFY_TYPES member
Oct  9 09:35:34 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-1-etokpp[10112]: 2025-10-09T09:35:34.258+0000 7f6ea8e3c140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member
Oct  9 09:35:34 compute-1 ceph-mgr[10116]: mgr[py] Loading python module 'pg_autoscaler'
Oct  9 09:35:34 compute-1 ceph-mgr[10116]: mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Oct  9 09:35:34 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-1-etokpp[10112]: 2025-10-09T09:35:34.327+0000 7f6ea8e3c140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Oct  9 09:35:34 compute-1 ceph-mgr[10116]: mgr[py] Loading python module 'progress'
Oct  9 09:35:34 compute-1 ceph-mgr[10116]: mgr[py] Module progress has missing NOTIFY_TYPES member
Oct  9 09:35:34 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-1-etokpp[10112]: 2025-10-09T09:35:34.388+0000 7f6ea8e3c140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member
Oct  9 09:35:34 compute-1 ceph-mgr[10116]: mgr[py] Loading python module 'prometheus'
Oct  9 09:35:34 compute-1 ceph-mgr[10116]: mgr[py] Module prometheus has missing NOTIFY_TYPES member
Oct  9 09:35:34 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-1-etokpp[10112]: 2025-10-09T09:35:34.688+0000 7f6ea8e3c140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member
Oct  9 09:35:34 compute-1 ceph-mgr[10116]: mgr[py] Loading python module 'rbd_support'
Oct  9 09:35:34 compute-1 ceph-mgr[10116]: mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Oct  9 09:35:34 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-1-etokpp[10112]: 2025-10-09T09:35:34.772+0000 7f6ea8e3c140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Oct  9 09:35:34 compute-1 ceph-mgr[10116]: mgr[py] Loading python module 'restful'
Oct  9 09:35:34 compute-1 ceph-mgr[10116]: mgr[py] Loading python module 'rgw'
Oct  9 09:35:35 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e25 e25: 3 total, 2 up, 3 in
Oct  9 09:35:35 compute-1 ceph-mon[9795]: Active manager daemon compute-0.lwqgfy restarted
Oct  9 09:35:35 compute-1 ceph-mon[9795]: Activating manager daemon compute-0.lwqgfy
Oct  9 09:35:35 compute-1 ceph-mon[9795]: Manager daemon compute-0.lwqgfy is now available
Oct  9 09:35:35 compute-1 ceph-mgr[10116]: mgr[py] Module rgw has missing NOTIFY_TYPES member
Oct  9 09:35:35 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-1-etokpp[10112]: 2025-10-09T09:35:35.158+0000 7f6ea8e3c140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member
Oct  9 09:35:35 compute-1 ceph-mgr[10116]: mgr[py] Loading python module 'rook'
Oct  9 09:35:35 compute-1 systemd-logind[798]: New session 17 of user ceph-admin.
Oct  9 09:35:35 compute-1 systemd[1]: Started Session 17 of User ceph-admin.
Oct  9 09:35:35 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e25 _set_new_cache_sizes cache_size:1020054709 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  9 09:35:35 compute-1 ceph-mgr[10116]: mgr[py] Module rook has missing NOTIFY_TYPES member
Oct  9 09:35:35 compute-1 ceph-mgr[10116]: mgr[py] Loading python module 'selftest'
Oct  9 09:35:35 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-1-etokpp[10112]: 2025-10-09T09:35:35.643+0000 7f6ea8e3c140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member
Oct  9 09:35:35 compute-1 ceph-mgr[10116]: mgr[py] Module selftest has missing NOTIFY_TYPES member
Oct  9 09:35:35 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-1-etokpp[10112]: 2025-10-09T09:35:35.705+0000 7f6ea8e3c140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member
Oct  9 09:35:35 compute-1 ceph-mgr[10116]: mgr[py] Loading python module 'snap_schedule'
Oct  9 09:35:35 compute-1 ceph-mgr[10116]: mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Oct  9 09:35:35 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-1-etokpp[10112]: 2025-10-09T09:35:35.775+0000 7f6ea8e3c140 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Oct  9 09:35:35 compute-1 ceph-mgr[10116]: mgr[py] Loading python module 'stats'
Oct  9 09:35:35 compute-1 ceph-mgr[10116]: mgr[py] Loading python module 'status'
Oct  9 09:35:35 compute-1 ceph-mgr[10116]: mgr[py] Module status has missing NOTIFY_TYPES member
Oct  9 09:35:35 compute-1 ceph-mgr[10116]: mgr[py] Loading python module 'telegraf'
Oct  9 09:35:35 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-1-etokpp[10112]: 2025-10-09T09:35:35.902+0000 7f6ea8e3c140 -1 mgr[py] Module status has missing NOTIFY_TYPES member
Oct  9 09:35:35 compute-1 ceph-mgr[10116]: mgr[py] Module telegraf has missing NOTIFY_TYPES member
Oct  9 09:35:35 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-1-etokpp[10112]: 2025-10-09T09:35:35.963+0000 7f6ea8e3c140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member
Oct  9 09:35:35 compute-1 ceph-mgr[10116]: mgr[py] Loading python module 'telemetry'
Oct  9 09:35:35 compute-1 podman[10289]: 2025-10-09 09:35:35.9963 +0000 UTC m=+0.038307060 container exec cafaadfcff4fbda41fcfa94e93876b61b17048007232ab1b066948d6a6dac74a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-crash-compute-1, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325)
Oct  9 09:35:36 compute-1 podman[10289]: 2025-10-09 09:35:36.078954245 +0000 UTC m=+0.120961284 container exec_died cafaadfcff4fbda41fcfa94e93876b61b17048007232ab1b066948d6a6dac74a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-crash-compute-1, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct  9 09:35:36 compute-1 ceph-mon[9795]: from='mgr.24122 192.168.122.100:0/1361071031' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.lwqgfy/mirror_snapshot_schedule"}]: dispatch
Oct  9 09:35:36 compute-1 ceph-mon[9795]: from='mgr.24122 192.168.122.100:0/1361071031' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.lwqgfy/trash_purge_schedule"}]: dispatch
Oct  9 09:35:36 compute-1 ceph-mon[9795]: from='mgr.24122 192.168.122.100:0/1361071031' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:35:36 compute-1 ceph-mgr[10116]: mgr[py] Module telemetry has missing NOTIFY_TYPES member
Oct  9 09:35:36 compute-1 ceph-mgr[10116]: mgr[py] Loading python module 'test_orchestrator'
Oct  9 09:35:36 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-1-etokpp[10112]: 2025-10-09T09:35:36.099+0000 7f6ea8e3c140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member
Oct  9 09:35:36 compute-1 ceph-mgr[10116]: mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Oct  9 09:35:36 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-1-etokpp[10112]: 2025-10-09T09:35:36.289+0000 7f6ea8e3c140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Oct  9 09:35:36 compute-1 ceph-mgr[10116]: mgr[py] Loading python module 'volumes'
Oct  9 09:35:36 compute-1 ceph-mgr[10116]: mgr[py] Module volumes has missing NOTIFY_TYPES member
Oct  9 09:35:36 compute-1 ceph-mgr[10116]: mgr[py] Loading python module 'zabbix'
Oct  9 09:35:36 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-1-etokpp[10112]: 2025-10-09T09:35:36.519+0000 7f6ea8e3c140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member
Oct  9 09:35:36 compute-1 ceph-mgr[10116]: mgr[py] Module zabbix has missing NOTIFY_TYPES member
Oct  9 09:35:36 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-1-etokpp[10112]: 2025-10-09T09:35:36.580+0000 7f6ea8e3c140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member
Oct  9 09:35:36 compute-1 ceph-mgr[10116]: ms_deliver_dispatch: unhandled message 0x564aa4d00d00 mon_map magic: 0 from mon.2 v2:192.168.122.101:3300/0
Oct  9 09:35:36 compute-1 ceph-mgr[10116]: [dashboard DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct  9 09:35:36 compute-1 ceph-mgr[10116]: mgr load Constructed class from module: dashboard
Oct  9 09:35:36 compute-1 ceph-mgr[10116]: [dashboard INFO root] server: ssl=no host=:: port=8443
Oct  9 09:35:36 compute-1 ceph-mgr[10116]: [dashboard INFO root] Configured CherryPy, starting engine...
Oct  9 09:35:36 compute-1 ceph-mgr[10116]: [dashboard INFO root] Starting engine...
Oct  9 09:35:36 compute-1 ceph-mgr[10116]: [dashboard INFO root] Engine started...
Oct  9 09:35:37 compute-1 ceph-mon[9795]: from='mgr.24122 192.168.122.100:0/1361071031' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:35:37 compute-1 ceph-mon[9795]: from='mgr.24122 192.168.122.100:0/1361071031' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:35:37 compute-1 ceph-mon[9795]: from='mgr.24122 192.168.122.100:0/1361071031' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:35:37 compute-1 ceph-mon[9795]: from='mgr.24122 192.168.122.100:0/1361071031' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:35:37 compute-1 ceph-mon[9795]: from='mgr.24122 192.168.122.100:0/1361071031' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:35:37 compute-1 ceph-mon[9795]: from='mgr.24122 192.168.122.100:0/1361071031' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:35:37 compute-1 ceph-mon[9795]: from='mgr.24122 192.168.122.100:0/1361071031' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:35:37 compute-1 ceph-mon[9795]: from='mgr.24122 192.168.122.100:0/1361071031' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:35:37 compute-1 ceph-mon[9795]: from='mgr.24122 192.168.122.100:0/1361071031' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:35:37 compute-1 ceph-mon[9795]: from='mgr.24122 192.168.122.100:0/1361071031' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:35:37 compute-1 ceph-mon[9795]: from='mgr.24122 192.168.122.100:0/1361071031' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"}]: dispatch
Oct  9 09:35:37 compute-1 ceph-mon[9795]: from='mgr.24122 192.168.122.100:0/1361071031' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:35:37 compute-1 ceph-mon[9795]: from='mgr.24122 192.168.122.100:0/1361071031' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"}]: dispatch
Oct  9 09:35:37 compute-1 ceph-mon[9795]: from='mgr.24122 192.168.122.100:0/1361071031' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:35:37 compute-1 ceph-mon[9795]: from='mgr.24122 192.168.122.100:0/1361071031' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:35:37 compute-1 ceph-mon[9795]: from='mgr.24122 192.168.122.100:0/1361071031' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Oct  9 09:35:37 compute-1 ceph-mon[9795]: from='mgr.24122 192.168.122.100:0/1361071031' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  9 09:35:38 compute-1 ceph-mon[9795]: [09/Oct/2025:09:35:36] ENGINE Bus STARTING
Oct  9 09:35:38 compute-1 ceph-mon[9795]: [09/Oct/2025:09:35:36] ENGINE Serving on http://192.168.122.100:8765
Oct  9 09:35:38 compute-1 ceph-mon[9795]: [09/Oct/2025:09:35:37] ENGINE Serving on https://192.168.122.100:7150
Oct  9 09:35:38 compute-1 ceph-mon[9795]: [09/Oct/2025:09:35:37] ENGINE Client ('192.168.122.100', 44370) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Oct  9 09:35:38 compute-1 ceph-mon[9795]: [09/Oct/2025:09:35:37] ENGINE Bus STARTED
Oct  9 09:35:38 compute-1 ceph-mon[9795]: Adjusting osd_memory_target on compute-0 to 128.5M
Oct  9 09:35:38 compute-1 ceph-mon[9795]: Adjusting osd_memory_target on compute-1 to 128.5M
Oct  9 09:35:38 compute-1 ceph-mon[9795]: Unable to set osd_memory_target on compute-0 to 134814105: error parsing value: Value '134814105' is below minimum 939524096
Oct  9 09:35:38 compute-1 ceph-mon[9795]: Unable to set osd_memory_target on compute-1 to 134814105: error parsing value: Value '134814105' is below minimum 939524096
Oct  9 09:35:38 compute-1 ceph-mon[9795]: Updating compute-0:/etc/ceph/ceph.conf
Oct  9 09:35:38 compute-1 ceph-mon[9795]: Updating compute-1:/etc/ceph/ceph.conf
Oct  9 09:35:38 compute-1 ceph-mon[9795]: Updating compute-2:/etc/ceph/ceph.conf
Oct  9 09:35:38 compute-1 ceph-mon[9795]: from='mgr.24122 192.168.122.100:0/1361071031' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:35:38 compute-1 ceph-mon[9795]: Updating compute-1:/var/lib/ceph/286f8bf0-da72-5823-9a4e-ac4457d9e609/config/ceph.conf
Oct  9 09:35:38 compute-1 ceph-mon[9795]: Updating compute-0:/var/lib/ceph/286f8bf0-da72-5823-9a4e-ac4457d9e609/config/ceph.conf
Oct  9 09:35:38 compute-1 ceph-mon[9795]: Updating compute-2:/var/lib/ceph/286f8bf0-da72-5823-9a4e-ac4457d9e609/config/ceph.conf
Oct  9 09:35:38 compute-1 ceph-mon[9795]: from='mgr.24122 192.168.122.100:0/1361071031' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:35:39 compute-1 ceph-mon[9795]: Updating compute-1:/etc/ceph/ceph.client.admin.keyring
Oct  9 09:35:39 compute-1 ceph-mon[9795]: Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Oct  9 09:35:39 compute-1 ceph-mon[9795]: Updating compute-2:/etc/ceph/ceph.client.admin.keyring
Oct  9 09:35:39 compute-1 ceph-mon[9795]: Updating compute-1:/var/lib/ceph/286f8bf0-da72-5823-9a4e-ac4457d9e609/config/ceph.client.admin.keyring
Oct  9 09:35:39 compute-1 ceph-mon[9795]: Updating compute-0:/var/lib/ceph/286f8bf0-da72-5823-9a4e-ac4457d9e609/config/ceph.client.admin.keyring
Oct  9 09:35:39 compute-1 ceph-mon[9795]: Updating compute-2:/var/lib/ceph/286f8bf0-da72-5823-9a4e-ac4457d9e609/config/ceph.client.admin.keyring
Oct  9 09:35:39 compute-1 ceph-mon[9795]: from='mgr.24122 192.168.122.100:0/1361071031' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:35:39 compute-1 ceph-mon[9795]: from='mgr.24122 192.168.122.100:0/1361071031' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:35:39 compute-1 ceph-mon[9795]: from='mgr.24122 192.168.122.100:0/1361071031' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:35:39 compute-1 ceph-mon[9795]: from='mgr.24122 192.168.122.100:0/1361071031' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:35:39 compute-1 ceph-mon[9795]: from='mgr.24122 192.168.122.100:0/1361071031' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:35:39 compute-1 ceph-mon[9795]: from='mgr.24122 192.168.122.100:0/1361071031' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:35:39 compute-1 ceph-mon[9795]: from='mgr.24122 192.168.122.100:0/1361071031' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:35:39 compute-1 ceph-mon[9795]: from='mgr.24122 192.168.122.100:0/1361071031' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:35:39 compute-1 ceph-mgr[10116]: mgr handle_mgr_map respawning because set of enabled modules changed!
Oct  9 09:35:39 compute-1 ceph-mgr[10116]: mgr respawn  e: '/usr/bin/ceph-mgr'
Oct  9 09:35:39 compute-1 ceph-mgr[10116]: mgr respawn  0: '/usr/bin/ceph-mgr'
Oct  9 09:35:39 compute-1 ceph-mgr[10116]: mgr respawn  1: '-n'
Oct  9 09:35:39 compute-1 ceph-mgr[10116]: mgr respawn  2: 'mgr.compute-1.etokpp'
Oct  9 09:35:39 compute-1 ceph-mgr[10116]: mgr respawn  3: '-f'
Oct  9 09:35:39 compute-1 ceph-mgr[10116]: mgr respawn  4: '--setuser'
Oct  9 09:35:39 compute-1 ceph-mgr[10116]: mgr respawn  5: 'ceph'
Oct  9 09:35:39 compute-1 ceph-mgr[10116]: mgr respawn  6: '--setgroup'
Oct  9 09:35:39 compute-1 ceph-mgr[10116]: mgr respawn  7: 'ceph'
Oct  9 09:35:39 compute-1 ceph-mgr[10116]: mgr respawn  8: '--default-log-to-file=false'
Oct  9 09:35:39 compute-1 ceph-mgr[10116]: mgr respawn  9: '--default-log-to-journald=true'
Oct  9 09:35:39 compute-1 ceph-mgr[10116]: mgr respawn  10: '--default-log-to-stderr=false'
Oct  9 09:35:39 compute-1 ceph-mgr[10116]: mgr respawn respawning with exe /usr/bin/ceph-mgr
Oct  9 09:35:39 compute-1 ceph-mgr[10116]: mgr respawn  exe_path /proc/self/exe
Oct  9 09:35:40 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-1-etokpp[10112]: ignoring --setuser ceph since I am not root
Oct  9 09:35:40 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-1-etokpp[10112]: ignoring --setgroup ceph since I am not root
Oct  9 09:35:40 compute-1 systemd[1]: session-17.scope: Deactivated successfully.
Oct  9 09:35:40 compute-1 systemd[1]: session-17.scope: Consumed 2.993s CPU time.
Oct  9 09:35:40 compute-1 systemd-logind[798]: Session 17 logged out. Waiting for processes to exit.
Oct  9 09:35:40 compute-1 ceph-mgr[10116]: ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable), process ceph-mgr, pid 2
Oct  9 09:35:40 compute-1 systemd-logind[798]: Removed session 17.
Oct  9 09:35:40 compute-1 ceph-mgr[10116]: pidfile_write: ignore empty --pid-file
Oct  9 09:35:40 compute-1 ceph-mgr[10116]: mgr[py] Loading python module 'alerts'
Oct  9 09:35:40 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-1-etokpp[10112]: 2025-10-09T09:35:40.163+0000 7f393bbf7140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member
Oct  9 09:35:40 compute-1 ceph-mgr[10116]: mgr[py] Module alerts has missing NOTIFY_TYPES member
Oct  9 09:35:40 compute-1 ceph-mgr[10116]: mgr[py] Loading python module 'balancer'
Oct  9 09:35:40 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-1-etokpp[10112]: 2025-10-09T09:35:40.234+0000 7f393bbf7140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member
Oct  9 09:35:40 compute-1 ceph-mgr[10116]: mgr[py] Module balancer has missing NOTIFY_TYPES member
Oct  9 09:35:40 compute-1 ceph-mgr[10116]: mgr[py] Loading python module 'cephadm'
Oct  9 09:35:40 compute-1 ceph-mon[9795]: Deploying daemon node-exporter.compute-0 on compute-0
Oct  9 09:35:40 compute-1 ceph-mon[9795]: from='client.? 192.168.122.100:0/536206930' entity='client.admin' cmd=[{"prefix": "mgr module disable", "module": "dashboard"}]: dispatch
Oct  9 09:35:40 compute-1 ceph-mon[9795]: from='client.? 192.168.122.100:0/536206930' entity='client.admin' cmd='[{"prefix": "mgr module disable", "module": "dashboard"}]': finished
Oct  9 09:35:40 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e25 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  9 09:35:40 compute-1 ceph-mgr[10116]: mgr[py] Loading python module 'crash'
Oct  9 09:35:40 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-1-etokpp[10112]: 2025-10-09T09:35:40.940+0000 7f393bbf7140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member
Oct  9 09:35:40 compute-1 ceph-mgr[10116]: mgr[py] Module crash has missing NOTIFY_TYPES member
Oct  9 09:35:40 compute-1 ceph-mgr[10116]: mgr[py] Loading python module 'dashboard'
Oct  9 09:35:41 compute-1 ceph-mgr[10116]: mgr[py] Loading python module 'devicehealth'
Oct  9 09:35:41 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-1-etokpp[10112]: 2025-10-09T09:35:41.484+0000 7f393bbf7140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Oct  9 09:35:41 compute-1 ceph-mgr[10116]: mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Oct  9 09:35:41 compute-1 ceph-mgr[10116]: mgr[py] Loading python module 'diskprediction_local'
Oct  9 09:35:41 compute-1 ceph-mon[9795]: from='client.? 192.168.122.100:0/1543803184' entity='client.admin' cmd=[{"prefix": "mgr module enable", "module": "dashboard"}]: dispatch
Oct  9 09:35:41 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-1-etokpp[10112]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
Oct  9 09:35:41 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-1-etokpp[10112]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
Oct  9 09:35:41 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-1-etokpp[10112]:  from numpy import show_config as show_numpy_config
Oct  9 09:35:41 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-1-etokpp[10112]: 2025-10-09T09:35:41.625+0000 7f393bbf7140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Oct  9 09:35:41 compute-1 ceph-mgr[10116]: mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Oct  9 09:35:41 compute-1 ceph-mgr[10116]: mgr[py] Loading python module 'influx'
Oct  9 09:35:41 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-1-etokpp[10112]: 2025-10-09T09:35:41.687+0000 7f393bbf7140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member
Oct  9 09:35:41 compute-1 ceph-mgr[10116]: mgr[py] Module influx has missing NOTIFY_TYPES member
Oct  9 09:35:41 compute-1 ceph-mgr[10116]: mgr[py] Loading python module 'insights'
Oct  9 09:35:41 compute-1 ceph-mgr[10116]: mgr[py] Loading python module 'iostat'
Oct  9 09:35:41 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-1-etokpp[10112]: 2025-10-09T09:35:41.807+0000 7f393bbf7140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member
Oct  9 09:35:41 compute-1 ceph-mgr[10116]: mgr[py] Module iostat has missing NOTIFY_TYPES member
Oct  9 09:35:41 compute-1 ceph-mgr[10116]: mgr[py] Loading python module 'k8sevents'
Oct  9 09:35:42 compute-1 ceph-mgr[10116]: mgr[py] Loading python module 'localpool'
Oct  9 09:35:42 compute-1 ceph-mgr[10116]: mgr[py] Loading python module 'mds_autoscaler'
Oct  9 09:35:42 compute-1 ceph-mgr[10116]: mgr[py] Loading python module 'mirroring'
Oct  9 09:35:42 compute-1 ceph-mgr[10116]: mgr[py] Loading python module 'nfs'
Oct  9 09:35:42 compute-1 ceph-mon[9795]: from='client.? 192.168.122.100:0/1543803184' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "dashboard"}]': finished
Oct  9 09:35:42 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-1-etokpp[10112]: 2025-10-09T09:35:42.652+0000 7f393bbf7140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member
Oct  9 09:35:42 compute-1 ceph-mgr[10116]: mgr[py] Module nfs has missing NOTIFY_TYPES member
Oct  9 09:35:42 compute-1 ceph-mgr[10116]: mgr[py] Loading python module 'orchestrator'
Oct  9 09:35:42 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-1-etokpp[10112]: 2025-10-09T09:35:42.839+0000 7f393bbf7140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Oct  9 09:35:42 compute-1 ceph-mgr[10116]: mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Oct  9 09:35:42 compute-1 ceph-mgr[10116]: mgr[py] Loading python module 'osd_perf_query'
Oct  9 09:35:42 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-1-etokpp[10112]: 2025-10-09T09:35:42.910+0000 7f393bbf7140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Oct  9 09:35:42 compute-1 ceph-mgr[10116]: mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Oct  9 09:35:42 compute-1 ceph-mgr[10116]: mgr[py] Loading python module 'osd_support'
Oct  9 09:35:42 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-1-etokpp[10112]: 2025-10-09T09:35:42.971+0000 7f393bbf7140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member
Oct  9 09:35:42 compute-1 ceph-mgr[10116]: mgr[py] Module osd_support has missing NOTIFY_TYPES member
Oct  9 09:35:42 compute-1 ceph-mgr[10116]: mgr[py] Loading python module 'pg_autoscaler'
Oct  9 09:35:43 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-1-etokpp[10112]: 2025-10-09T09:35:43.039+0000 7f393bbf7140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Oct  9 09:35:43 compute-1 ceph-mgr[10116]: mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Oct  9 09:35:43 compute-1 ceph-mgr[10116]: mgr[py] Loading python module 'progress'
Oct  9 09:35:43 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-1-etokpp[10112]: 2025-10-09T09:35:43.102+0000 7f393bbf7140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member
Oct  9 09:35:43 compute-1 ceph-mgr[10116]: mgr[py] Module progress has missing NOTIFY_TYPES member
Oct  9 09:35:43 compute-1 ceph-mgr[10116]: mgr[py] Loading python module 'prometheus'
Oct  9 09:35:43 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-1-etokpp[10112]: 2025-10-09T09:35:43.399+0000 7f393bbf7140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member
Oct  9 09:35:43 compute-1 ceph-mgr[10116]: mgr[py] Module prometheus has missing NOTIFY_TYPES member
Oct  9 09:35:43 compute-1 ceph-mgr[10116]: mgr[py] Loading python module 'rbd_support'
Oct  9 09:35:43 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-1-etokpp[10112]: 2025-10-09T09:35:43.484+0000 7f393bbf7140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Oct  9 09:35:43 compute-1 ceph-mgr[10116]: mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Oct  9 09:35:43 compute-1 ceph-mgr[10116]: mgr[py] Loading python module 'restful'
Oct  9 09:35:43 compute-1 ceph-mgr[10116]: mgr[py] Loading python module 'rgw'
Oct  9 09:35:43 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-1-etokpp[10112]: 2025-10-09T09:35:43.865+0000 7f393bbf7140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member
Oct  9 09:35:43 compute-1 ceph-mgr[10116]: mgr[py] Module rgw has missing NOTIFY_TYPES member
Oct  9 09:35:43 compute-1 ceph-mgr[10116]: mgr[py] Loading python module 'rook'
Oct  9 09:35:44 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-1-etokpp[10112]: 2025-10-09T09:35:44.346+0000 7f393bbf7140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member
Oct  9 09:35:44 compute-1 ceph-mgr[10116]: mgr[py] Module rook has missing NOTIFY_TYPES member
Oct  9 09:35:44 compute-1 ceph-mgr[10116]: mgr[py] Loading python module 'selftest'
Oct  9 09:35:44 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-1-etokpp[10112]: 2025-10-09T09:35:44.407+0000 7f393bbf7140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member
Oct  9 09:35:44 compute-1 ceph-mgr[10116]: mgr[py] Module selftest has missing NOTIFY_TYPES member
Oct  9 09:35:44 compute-1 ceph-mgr[10116]: mgr[py] Loading python module 'snap_schedule'
Oct  9 09:35:44 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-1-etokpp[10112]: 2025-10-09T09:35:44.478+0000 7f393bbf7140 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Oct  9 09:35:44 compute-1 ceph-mgr[10116]: mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Oct  9 09:35:44 compute-1 ceph-mgr[10116]: mgr[py] Loading python module 'stats'
Oct  9 09:35:44 compute-1 ceph-mgr[10116]: mgr[py] Loading python module 'status'
Oct  9 09:35:44 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-1-etokpp[10112]: 2025-10-09T09:35:44.607+0000 7f393bbf7140 -1 mgr[py] Module status has missing NOTIFY_TYPES member
Oct  9 09:35:44 compute-1 ceph-mgr[10116]: mgr[py] Module status has missing NOTIFY_TYPES member
Oct  9 09:35:44 compute-1 ceph-mgr[10116]: mgr[py] Loading python module 'telegraf'
Oct  9 09:35:44 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-1-etokpp[10112]: 2025-10-09T09:35:44.670+0000 7f393bbf7140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member
Oct  9 09:35:44 compute-1 ceph-mgr[10116]: mgr[py] Module telegraf has missing NOTIFY_TYPES member
Oct  9 09:35:44 compute-1 ceph-mgr[10116]: mgr[py] Loading python module 'telemetry'
Oct  9 09:35:44 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-1-etokpp[10112]: 2025-10-09T09:35:44.805+0000 7f393bbf7140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member
Oct  9 09:35:44 compute-1 ceph-mgr[10116]: mgr[py] Module telemetry has missing NOTIFY_TYPES member
Oct  9 09:35:44 compute-1 ceph-mgr[10116]: mgr[py] Loading python module 'test_orchestrator'
Oct  9 09:35:44 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-1-etokpp[10112]: 2025-10-09T09:35:44.997+0000 7f393bbf7140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Oct  9 09:35:44 compute-1 ceph-mgr[10116]: mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Oct  9 09:35:44 compute-1 ceph-mgr[10116]: mgr[py] Loading python module 'volumes'
Oct  9 09:35:45 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-1-etokpp[10112]: 2025-10-09T09:35:45.225+0000 7f393bbf7140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member
Oct  9 09:35:45 compute-1 ceph-mgr[10116]: mgr[py] Module volumes has missing NOTIFY_TYPES member
Oct  9 09:35:45 compute-1 ceph-mgr[10116]: mgr[py] Loading python module 'zabbix'
Oct  9 09:35:45 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-1-etokpp[10112]: 2025-10-09T09:35:45.285+0000 7f393bbf7140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member
Oct  9 09:35:45 compute-1 ceph-mgr[10116]: mgr[py] Module zabbix has missing NOTIFY_TYPES member
Oct  9 09:35:45 compute-1 ceph-mgr[10116]: ms_deliver_dispatch: unhandled message 0x5558da4fd860 mon_map magic: 0 from mon.0 v2:192.168.122.100:3300/0
Oct  9 09:35:45 compute-1 ceph-mgr[10116]: mgr handle_mgr_map respawning because set of enabled modules changed!
Oct  9 09:35:45 compute-1 ceph-mgr[10116]: mgr respawn  e: '/usr/bin/ceph-mgr'
Oct  9 09:35:45 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-1-etokpp[10112]: ignoring --setuser ceph since I am not root
Oct  9 09:35:45 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-1-etokpp[10112]: ignoring --setgroup ceph since I am not root
Oct  9 09:35:45 compute-1 ceph-mgr[10116]: ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable), process ceph-mgr, pid 2
Oct  9 09:35:45 compute-1 ceph-mgr[10116]: pidfile_write: ignore empty --pid-file
Oct  9 09:35:45 compute-1 ceph-mgr[10116]: mgr[py] Loading python module 'alerts'
Oct  9 09:35:45 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-1-etokpp[10112]: 2025-10-09T09:35:45.462+0000 7f4c12e9d140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member
Oct  9 09:35:45 compute-1 ceph-mgr[10116]: mgr[py] Module alerts has missing NOTIFY_TYPES member
Oct  9 09:35:45 compute-1 ceph-mgr[10116]: mgr[py] Loading python module 'balancer'
Oct  9 09:35:45 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e26 e26: 3 total, 2 up, 3 in
Oct  9 09:35:45 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-1-etokpp[10112]: 2025-10-09T09:35:45.536+0000 7f4c12e9d140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member
Oct  9 09:35:45 compute-1 ceph-mgr[10116]: mgr[py] Module balancer has missing NOTIFY_TYPES member
Oct  9 09:35:45 compute-1 ceph-mgr[10116]: mgr[py] Loading python module 'cephadm'
Oct  9 09:35:45 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e26 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  9 09:35:46 compute-1 ceph-mgr[10116]: mgr[py] Loading python module 'crash'
Oct  9 09:35:46 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-1-etokpp[10112]: 2025-10-09T09:35:46.191+0000 7f4c12e9d140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member
Oct  9 09:35:46 compute-1 ceph-mgr[10116]: mgr[py] Module crash has missing NOTIFY_TYPES member
Oct  9 09:35:46 compute-1 ceph-mgr[10116]: mgr[py] Loading python module 'dashboard'
Oct  9 09:35:46 compute-1 ceph-mon[9795]: Active manager daemon compute-0.lwqgfy restarted
Oct  9 09:35:46 compute-1 ceph-mon[9795]: Activating manager daemon compute-0.lwqgfy
Oct  9 09:35:46 compute-1 ceph-mgr[10116]: mgr[py] Loading python module 'devicehealth'
Oct  9 09:35:46 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-1-etokpp[10112]: 2025-10-09T09:35:46.725+0000 7f4c12e9d140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Oct  9 09:35:46 compute-1 ceph-mgr[10116]: mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Oct  9 09:35:46 compute-1 ceph-mgr[10116]: mgr[py] Loading python module 'diskprediction_local'
Oct  9 09:35:46 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-1-etokpp[10112]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
Oct  9 09:35:46 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-1-etokpp[10112]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
Oct  9 09:35:46 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-1-etokpp[10112]:  from numpy import show_config as show_numpy_config
Oct  9 09:35:46 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-1-etokpp[10112]: 2025-10-09T09:35:46.863+0000 7f4c12e9d140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Oct  9 09:35:46 compute-1 ceph-mgr[10116]: mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Oct  9 09:35:46 compute-1 ceph-mgr[10116]: mgr[py] Loading python module 'influx'
Oct  9 09:35:46 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-1-etokpp[10112]: 2025-10-09T09:35:46.924+0000 7f4c12e9d140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member
Oct  9 09:35:46 compute-1 ceph-mgr[10116]: mgr[py] Module influx has missing NOTIFY_TYPES member
Oct  9 09:35:46 compute-1 ceph-mgr[10116]: mgr[py] Loading python module 'insights'
Oct  9 09:35:46 compute-1 ceph-mgr[10116]: mgr[py] Loading python module 'iostat'
Oct  9 09:35:47 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-1-etokpp[10112]: 2025-10-09T09:35:47.042+0000 7f4c12e9d140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member
Oct  9 09:35:47 compute-1 ceph-mgr[10116]: mgr[py] Module iostat has missing NOTIFY_TYPES member
Oct  9 09:35:47 compute-1 ceph-mgr[10116]: mgr[py] Loading python module 'k8sevents'
Oct  9 09:35:47 compute-1 ceph-mgr[10116]: mgr[py] Loading python module 'localpool'
Oct  9 09:35:47 compute-1 ceph-mgr[10116]: mgr[py] Loading python module 'mds_autoscaler'
Oct  9 09:35:47 compute-1 ceph-mgr[10116]: mgr[py] Loading python module 'mirroring'
Oct  9 09:35:47 compute-1 ceph-mgr[10116]: mgr[py] Loading python module 'nfs'
Oct  9 09:35:47 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-1-etokpp[10112]: 2025-10-09T09:35:47.884+0000 7f4c12e9d140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member
Oct  9 09:35:47 compute-1 ceph-mgr[10116]: mgr[py] Module nfs has missing NOTIFY_TYPES member
Oct  9 09:35:47 compute-1 ceph-mgr[10116]: mgr[py] Loading python module 'orchestrator'
Oct  9 09:35:48 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-1-etokpp[10112]: 2025-10-09T09:35:48.070+0000 7f4c12e9d140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Oct  9 09:35:48 compute-1 ceph-mgr[10116]: mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Oct  9 09:35:48 compute-1 ceph-mgr[10116]: mgr[py] Loading python module 'osd_perf_query'
Oct  9 09:35:48 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-1-etokpp[10112]: 2025-10-09T09:35:48.135+0000 7f4c12e9d140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Oct  9 09:35:48 compute-1 ceph-mgr[10116]: mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Oct  9 09:35:48 compute-1 ceph-mgr[10116]: mgr[py] Loading python module 'osd_support'
Oct  9 09:35:48 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-1-etokpp[10112]: 2025-10-09T09:35:48.193+0000 7f4c12e9d140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member
Oct  9 09:35:48 compute-1 ceph-mgr[10116]: mgr[py] Module osd_support has missing NOTIFY_TYPES member
Oct  9 09:35:48 compute-1 ceph-mgr[10116]: mgr[py] Loading python module 'pg_autoscaler'
Oct  9 09:35:48 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-1-etokpp[10112]: 2025-10-09T09:35:48.260+0000 7f4c12e9d140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Oct  9 09:35:48 compute-1 ceph-mgr[10116]: mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Oct  9 09:35:48 compute-1 ceph-mgr[10116]: mgr[py] Loading python module 'progress'
Oct  9 09:35:48 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-1-etokpp[10112]: 2025-10-09T09:35:48.321+0000 7f4c12e9d140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member
Oct  9 09:35:48 compute-1 ceph-mgr[10116]: mgr[py] Module progress has missing NOTIFY_TYPES member
Oct  9 09:35:48 compute-1 ceph-mgr[10116]: mgr[py] Loading python module 'prometheus'
Oct  9 09:35:48 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-1-etokpp[10112]: 2025-10-09T09:35:48.614+0000 7f4c12e9d140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member
Oct  9 09:35:48 compute-1 ceph-mgr[10116]: mgr[py] Module prometheus has missing NOTIFY_TYPES member
Oct  9 09:35:48 compute-1 ceph-mgr[10116]: mgr[py] Loading python module 'rbd_support'
Oct  9 09:35:48 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-1-etokpp[10112]: 2025-10-09T09:35:48.697+0000 7f4c12e9d140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Oct  9 09:35:48 compute-1 ceph-mgr[10116]: mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Oct  9 09:35:48 compute-1 ceph-mgr[10116]: mgr[py] Loading python module 'restful'
Oct  9 09:35:48 compute-1 ceph-mgr[10116]: mgr[py] Loading python module 'rgw'
Oct  9 09:35:49 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-1-etokpp[10112]: 2025-10-09T09:35:49.070+0000 7f4c12e9d140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member
Oct  9 09:35:49 compute-1 ceph-mgr[10116]: mgr[py] Module rgw has missing NOTIFY_TYPES member
Oct  9 09:35:49 compute-1 ceph-mgr[10116]: mgr[py] Loading python module 'rook'
Oct  9 09:35:49 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-1-etokpp[10112]: 2025-10-09T09:35:49.543+0000 7f4c12e9d140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member
Oct  9 09:35:49 compute-1 ceph-mgr[10116]: mgr[py] Module rook has missing NOTIFY_TYPES member
Oct  9 09:35:49 compute-1 ceph-mgr[10116]: mgr[py] Loading python module 'selftest'
Oct  9 09:35:49 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-1-etokpp[10112]: 2025-10-09T09:35:49.604+0000 7f4c12e9d140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member
Oct  9 09:35:49 compute-1 ceph-mgr[10116]: mgr[py] Module selftest has missing NOTIFY_TYPES member
Oct  9 09:35:49 compute-1 ceph-mgr[10116]: mgr[py] Loading python module 'snap_schedule'
Oct  9 09:35:49 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-1-etokpp[10112]: 2025-10-09T09:35:49.673+0000 7f4c12e9d140 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Oct  9 09:35:49 compute-1 ceph-mgr[10116]: mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Oct  9 09:35:49 compute-1 ceph-mgr[10116]: mgr[py] Loading python module 'stats'
Oct  9 09:35:49 compute-1 ceph-mgr[10116]: mgr[py] Loading python module 'status'
Oct  9 09:35:49 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-1-etokpp[10112]: 2025-10-09T09:35:49.801+0000 7f4c12e9d140 -1 mgr[py] Module status has missing NOTIFY_TYPES member
Oct  9 09:35:49 compute-1 ceph-mgr[10116]: mgr[py] Module status has missing NOTIFY_TYPES member
Oct  9 09:35:49 compute-1 ceph-mgr[10116]: mgr[py] Loading python module 'telegraf'
Oct  9 09:35:49 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-1-etokpp[10112]: 2025-10-09T09:35:49.862+0000 7f4c12e9d140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member
Oct  9 09:35:49 compute-1 ceph-mgr[10116]: mgr[py] Module telegraf has missing NOTIFY_TYPES member
Oct  9 09:35:49 compute-1 ceph-mgr[10116]: mgr[py] Loading python module 'telemetry'
Oct  9 09:35:49 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-1-etokpp[10112]: 2025-10-09T09:35:49.995+0000 7f4c12e9d140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member
Oct  9 09:35:49 compute-1 ceph-mgr[10116]: mgr[py] Module telemetry has missing NOTIFY_TYPES member
Oct  9 09:35:49 compute-1 ceph-mgr[10116]: mgr[py] Loading python module 'test_orchestrator'
Oct  9 09:35:50 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-1-etokpp[10112]: 2025-10-09T09:35:50.184+0000 7f4c12e9d140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Oct  9 09:35:50 compute-1 ceph-mgr[10116]: mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Oct  9 09:35:50 compute-1 ceph-mgr[10116]: mgr[py] Loading python module 'volumes'
Oct  9 09:35:50 compute-1 systemd[1]: Stopping User Manager for UID 42477...
Oct  9 09:35:50 compute-1 systemd[2766]: Activating special unit Exit the Session...
Oct  9 09:35:50 compute-1 systemd[2766]: Stopped target Main User Target.
Oct  9 09:35:50 compute-1 systemd[2766]: Stopped target Basic System.
Oct  9 09:35:50 compute-1 systemd[2766]: Stopped target Paths.
Oct  9 09:35:50 compute-1 systemd[2766]: Stopped target Sockets.
Oct  9 09:35:50 compute-1 systemd[2766]: Stopped target Timers.
Oct  9 09:35:50 compute-1 systemd[2766]: Stopped Mark boot as successful after the user session has run 2 minutes.
Oct  9 09:35:50 compute-1 systemd[2766]: Stopped Daily Cleanup of User's Temporary Directories.
Oct  9 09:35:50 compute-1 systemd[2766]: Closed D-Bus User Message Bus Socket.
Oct  9 09:35:50 compute-1 systemd[2766]: Stopped Create User's Volatile Files and Directories.
Oct  9 09:35:50 compute-1 systemd[2766]: Removed slice User Application Slice.
Oct  9 09:35:50 compute-1 systemd[2766]: Reached target Shutdown.
Oct  9 09:35:50 compute-1 systemd[2766]: Finished Exit the Session.
Oct  9 09:35:50 compute-1 systemd[2766]: Reached target Exit the Session.
Oct  9 09:35:50 compute-1 systemd[1]: user@42477.service: Deactivated successfully.
Oct  9 09:35:50 compute-1 systemd[1]: Stopped User Manager for UID 42477.
Oct  9 09:35:50 compute-1 systemd[1]: Stopping User Runtime Directory /run/user/42477...
Oct  9 09:35:50 compute-1 systemd[1]: run-user-42477.mount: Deactivated successfully.
Oct  9 09:35:50 compute-1 systemd[1]: user-runtime-dir@42477.service: Deactivated successfully.
Oct  9 09:35:50 compute-1 systemd[1]: Stopped User Runtime Directory /run/user/42477.
Oct  9 09:35:50 compute-1 systemd[1]: Removed slice User Slice of UID 42477.
Oct  9 09:35:50 compute-1 systemd[1]: user-42477.slice: Consumed 47.740s CPU time.
Oct  9 09:35:50 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-1-etokpp[10112]: 2025-10-09T09:35:50.415+0000 7f4c12e9d140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member
Oct  9 09:35:50 compute-1 ceph-mgr[10116]: mgr[py] Module volumes has missing NOTIFY_TYPES member
Oct  9 09:35:50 compute-1 ceph-mgr[10116]: mgr[py] Loading python module 'zabbix'
Oct  9 09:35:50 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-1-etokpp[10112]: 2025-10-09T09:35:50.475+0000 7f4c12e9d140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member
Oct  9 09:35:50 compute-1 ceph-mgr[10116]: mgr[py] Module zabbix has missing NOTIFY_TYPES member
Oct  9 09:35:50 compute-1 ceph-mgr[10116]: [dashboard DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct  9 09:35:50 compute-1 ceph-mgr[10116]: mgr load Constructed class from module: dashboard
Oct  9 09:35:50 compute-1 ceph-mgr[10116]: [dashboard INFO root] server: ssl=no host=:: port=8443
Oct  9 09:35:50 compute-1 ceph-mgr[10116]: [dashboard INFO root] Configured CherryPy, starting engine...
Oct  9 09:35:50 compute-1 ceph-mgr[10116]: [dashboard INFO root] Starting engine...
Oct  9 09:35:50 compute-1 ceph-mgr[10116]: ms_deliver_dispatch: unhandled message 0x556a8c7cb860 mon_map magic: 0 from mon.2 v2:192.168.122.101:3300/0
Oct  9 09:35:50 compute-1 ceph-mgr[10116]: [dashboard INFO root] Engine started...
Oct  9 09:35:50 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e26 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  9 09:35:50 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e27 e27: 3 total, 2 up, 3 in
Oct  9 09:35:51 compute-1 systemd[1]: Created slice User Slice of UID 42477.
Oct  9 09:35:51 compute-1 systemd[1]: Starting User Runtime Directory /run/user/42477...
Oct  9 09:35:51 compute-1 systemd-logind[798]: New session 18 of user ceph-admin.
Oct  9 09:35:51 compute-1 systemd[1]: Finished User Runtime Directory /run/user/42477.
Oct  9 09:35:51 compute-1 systemd[1]: Starting User Manager for UID 42477...
Oct  9 09:35:51 compute-1 systemd[11486]: Queued start job for default target Main User Target.
Oct  9 09:35:51 compute-1 systemd[11486]: Created slice User Application Slice.
Oct  9 09:35:51 compute-1 systemd[11486]: Started Mark boot as successful after the user session has run 2 minutes.
Oct  9 09:35:51 compute-1 systemd[11486]: Started Daily Cleanup of User's Temporary Directories.
Oct  9 09:35:51 compute-1 systemd[11486]: Reached target Paths.
Oct  9 09:35:51 compute-1 systemd[11486]: Reached target Timers.
Oct  9 09:35:51 compute-1 systemd[11486]: Starting D-Bus User Message Bus Socket...
Oct  9 09:35:51 compute-1 systemd[11486]: Starting Create User's Volatile Files and Directories...
Oct  9 09:35:51 compute-1 systemd[11486]: Listening on D-Bus User Message Bus Socket.
Oct  9 09:35:51 compute-1 systemd[11486]: Finished Create User's Volatile Files and Directories.
Oct  9 09:35:51 compute-1 systemd[11486]: Reached target Sockets.
Oct  9 09:35:51 compute-1 systemd[11486]: Reached target Basic System.
Oct  9 09:35:51 compute-1 systemd[11486]: Reached target Main User Target.
Oct  9 09:35:51 compute-1 systemd[11486]: Startup finished in 87ms.
Oct  9 09:35:51 compute-1 systemd[1]: Started User Manager for UID 42477.
Oct  9 09:35:51 compute-1 systemd[1]: Started Session 18 of User ceph-admin.
Oct  9 09:35:51 compute-1 ceph-mon[9795]: Active manager daemon compute-0.lwqgfy restarted
Oct  9 09:35:51 compute-1 ceph-mon[9795]: Activating manager daemon compute-0.lwqgfy
Oct  9 09:35:51 compute-1 ceph-mon[9795]: Manager daemon compute-0.lwqgfy is now available
Oct  9 09:35:51 compute-1 ceph-mon[9795]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.lwqgfy/mirror_snapshot_schedule"}]: dispatch
Oct  9 09:35:51 compute-1 ceph-mon[9795]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.lwqgfy/trash_purge_schedule"}]: dispatch
Oct  9 09:35:51 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).mds e2 new map
Oct  9 09:35:51 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).mds e2 print_map
e2
btime 2025-10-09T09:35:51:790448+0000
enable_multiple, ever_enabled_multiple: 1,1
default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
legacy client fscid: 1

Filesystem 'cephfs' (1)
fs_name	cephfs
epoch	2
flags	12 joinable allow_snaps allow_multimds_snaps
created	2025-10-09T09:35:51.790428+0000
modified	2025-10-09T09:35:51.790428+0000
tableserver	0
root	0
session_timeout	60
session_autoclose	300
max_file_size	1099511627776
max_xattr_size	65536
required_client_features	{}
last_failure	0
last_failure_osd_epoch	0
compat	compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
max_mds	1
in	
up	{}
failed	
damaged	
stopped	
data_pools	[7]
metadata_pool	6
inline_data	disabled
balancer	
bal_rank_mask	-1
standby_count_wanted	0
qdb_cluster	leader: 0 members: 
Oct  9 09:35:51 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e28 e28: 3 total, 2 up, 3 in
Oct  9 09:35:51 compute-1 podman[11606]: 2025-10-09 09:35:51.810457209 +0000 UTC m=+0.039848377 container exec cafaadfcff4fbda41fcfa94e93876b61b17048007232ab1b066948d6a6dac74a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-crash-compute-1, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0)
Oct  9 09:35:51 compute-1 podman[11606]: 2025-10-09 09:35:51.889970301 +0000 UTC m=+0.119361459 container exec_died cafaadfcff4fbda41fcfa94e93876b61b17048007232ab1b066948d6a6dac74a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-crash-compute-1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0)
Oct  9 09:35:52 compute-1 ceph-mon[9795]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta"}]: dispatch
Oct  9 09:35:52 compute-1 ceph-mon[9795]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' cmd=[{"bulk": true, "prefix": "osd pool create", "pool": "cephfs.cephfs.data"}]: dispatch
Oct  9 09:35:52 compute-1 ceph-mon[9795]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"}]: dispatch
Oct  9 09:35:52 compute-1 ceph-mon[9795]: Health check failed: 1 filesystem is offline (MDS_ALL_DOWN)
Oct  9 09:35:52 compute-1 ceph-mon[9795]: Health check failed: 1 filesystem is online with fewer MDS than max_mds (MDS_UP_LESS_THAN_MAX)
Oct  9 09:35:52 compute-1 ceph-mon[9795]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"}]': finished
Oct  9 09:35:52 compute-1 ceph-mon[9795]: Saving service mds.cephfs spec with placement compute-0;compute-1;compute-2
Oct  9 09:35:52 compute-1 ceph-mon[9795]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:35:52 compute-1 ceph-mon[9795]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:35:52 compute-1 ceph-mon[9795]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:35:52 compute-1 ceph-mon[9795]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:35:52 compute-1 ceph-mon[9795]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:35:52 compute-1 ceph-mon[9795]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:35:52 compute-1 ceph-mon[9795]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:35:52 compute-1 ceph-mon[9795]: [09/Oct/2025:09:35:52] ENGINE Bus STARTING
Oct  9 09:35:52 compute-1 ceph-mon[9795]: Saving service mds.cephfs spec with placement compute-0;compute-1;compute-2
Oct  9 09:35:52 compute-1 ceph-mon[9795]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:35:53 compute-1 ceph-mon[9795]: [09/Oct/2025:09:35:52] ENGINE Serving on http://192.168.122.100:8765
Oct  9 09:35:53 compute-1 ceph-mon[9795]: [09/Oct/2025:09:35:52] ENGINE Serving on https://192.168.122.100:7150
Oct  9 09:35:53 compute-1 ceph-mon[9795]: [09/Oct/2025:09:35:52] ENGINE Bus STARTED
Oct  9 09:35:53 compute-1 ceph-mon[9795]: [09/Oct/2025:09:35:52] ENGINE Client ('192.168.122.100', 36178) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Oct  9 09:35:53 compute-1 ceph-mon[9795]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:35:53 compute-1 ceph-mon[9795]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:35:53 compute-1 ceph-mon[9795]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:35:53 compute-1 ceph-mon[9795]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Oct  9 09:35:53 compute-1 ceph-mon[9795]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:35:53 compute-1 ceph-mon[9795]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"}]: dispatch
Oct  9 09:35:53 compute-1 ceph-mon[9795]: Adjusting osd_memory_target on compute-1 to 128.5M
Oct  9 09:35:53 compute-1 ceph-mon[9795]: Unable to set osd_memory_target on compute-1 to 134814105: error parsing value: Value '134814105' is below minimum 939524096
Oct  9 09:35:53 compute-1 ceph-mon[9795]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:35:53 compute-1 ceph-mon[9795]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:35:53 compute-1 ceph-mon[9795]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Oct  9 09:35:53 compute-1 ceph-mon[9795]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  9 09:35:53 compute-1 ceph-mon[9795]: Updating compute-0:/etc/ceph/ceph.conf
Oct  9 09:35:53 compute-1 ceph-mon[9795]: Updating compute-1:/etc/ceph/ceph.conf
Oct  9 09:35:53 compute-1 ceph-mon[9795]: Updating compute-2:/etc/ceph/ceph.conf
Oct  9 09:35:53 compute-1 ceph-mon[9795]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd pool create", "pool": ".nfs", "yes_i_really_mean_it": true}]: dispatch
Oct  9 09:35:53 compute-1 ceph-mon[9795]: Updating compute-0:/var/lib/ceph/286f8bf0-da72-5823-9a4e-ac4457d9e609/config/ceph.conf
Oct  9 09:35:53 compute-1 ceph-mon[9795]: Updating compute-1:/var/lib/ceph/286f8bf0-da72-5823-9a4e-ac4457d9e609/config/ceph.conf
Oct  9 09:35:53 compute-1 ceph-mon[9795]: Updating compute-2:/var/lib/ceph/286f8bf0-da72-5823-9a4e-ac4457d9e609/config/ceph.conf
Oct  9 09:35:54 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e29 e29: 3 total, 2 up, 3 in
Oct  9 09:35:55 compute-1 ceph-mon[9795]: Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Oct  9 09:35:55 compute-1 ceph-mon[9795]: Updating compute-1:/etc/ceph/ceph.client.admin.keyring
Oct  9 09:35:55 compute-1 ceph-mon[9795]: Updating compute-2:/etc/ceph/ceph.client.admin.keyring
Oct  9 09:35:55 compute-1 ceph-mon[9795]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "osd pool create", "pool": ".nfs", "yes_i_really_mean_it": true}]': finished
Oct  9 09:35:55 compute-1 ceph-mon[9795]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd pool application enable", "pool": ".nfs", "app": "nfs"}]: dispatch
Oct  9 09:35:55 compute-1 ceph-mon[9795]: Updating compute-0:/var/lib/ceph/286f8bf0-da72-5823-9a4e-ac4457d9e609/config/ceph.client.admin.keyring
Oct  9 09:35:55 compute-1 ceph-mon[9795]: Updating compute-1:/var/lib/ceph/286f8bf0-da72-5823-9a4e-ac4457d9e609/config/ceph.client.admin.keyring
Oct  9 09:35:55 compute-1 ceph-mon[9795]: Updating compute-2:/var/lib/ceph/286f8bf0-da72-5823-9a4e-ac4457d9e609/config/ceph.client.admin.keyring
Oct  9 09:35:55 compute-1 ceph-mon[9795]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:35:55 compute-1 ceph-mon[9795]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:35:55 compute-1 ceph-mon[9795]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:35:55 compute-1 ceph-mon[9795]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:35:55 compute-1 ceph-mon[9795]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:35:55 compute-1 ceph-mon[9795]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:35:55 compute-1 ceph-mon[9795]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:35:55 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e30 e30: 3 total, 2 up, 3 in
Oct  9 09:35:55 compute-1 systemd[1]: Reloading.
Oct  9 09:35:55 compute-1 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  9 09:35:55 compute-1 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  9 09:35:55 compute-1 systemd[1]: Reloading.
Oct  9 09:35:55 compute-1 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  9 09:35:55 compute-1 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  9 09:35:55 compute-1 systemd[1]: Starting Ceph node-exporter.compute-1 for 286f8bf0-da72-5823-9a4e-ac4457d9e609...
Oct  9 09:35:55 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e30 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  9 09:35:55 compute-1 bash[12912]: Trying to pull quay.io/prometheus/node-exporter:v1.7.0...
Oct  9 09:35:56 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e31 e31: 3 total, 2 up, 3 in
Oct  9 09:35:56 compute-1 ceph-mon[9795]: Deploying daemon node-exporter.compute-1 on compute-1
Oct  9 09:35:56 compute-1 ceph-mon[9795]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "osd pool application enable", "pool": ".nfs", "app": "nfs"}]': finished
Oct  9 09:35:56 compute-1 ceph-mon[9795]: Saving service nfs.cephfs spec with placement compute-0;compute-1;compute-2
Oct  9 09:35:56 compute-1 ceph-mon[9795]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:35:56 compute-1 ceph-mon[9795]: Saving service ingress.nfs.cephfs spec with placement compute-0;compute-1;compute-2
Oct  9 09:35:56 compute-1 ceph-mon[9795]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:35:56 compute-1 ceph-mon[9795]: Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Oct  9 09:35:56 compute-1 bash[12912]: Getting image source signatures
Oct  9 09:35:56 compute-1 bash[12912]: Copying blob sha256:324153f2810a9927fcce320af9e4e291e0b6e805cbdd1f338386c756b9defa24
Oct  9 09:35:56 compute-1 bash[12912]: Copying blob sha256:455fd88e5221bc1e278ef2d059cd70e4df99a24e5af050ede621534276f6cf9a
Oct  9 09:35:56 compute-1 bash[12912]: Copying blob sha256:2abcce694348cd2c949c0e98a7400ebdfd8341021bcf6b541bc72033ce982510
Oct  9 09:35:56 compute-1 bash[12912]: Copying config sha256:72c9c208898624938c9e4183d6686ea4a5fd3f912bc29bc3f00147924c521a3e
Oct  9 09:35:56 compute-1 bash[12912]: Writing manifest to image destination
Oct  9 09:35:56 compute-1 podman[12912]: 2025-10-09 09:35:56.842351942 +0000 UTC m=+1.213163415 container create 3deba343924d32000597864cdf8400e1a37662cac2bb278add92b15d21a42d33 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-node-exporter-compute-1, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct  9 09:35:56 compute-1 podman[12912]: 2025-10-09 09:35:56.833360521 +0000 UTC m=+1.204172015 image pull 72c9c208898624938c9e4183d6686ea4a5fd3f912bc29bc3f00147924c521a3e quay.io/prometheus/node-exporter:v1.7.0
Oct  9 09:35:56 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ac037a0d617511958afad4153ee7390f0013c07eee65864bb14f6c9129d06cfc/merged/etc/node-exporter supports timestamps until 2038 (0x7fffffff)
Oct  9 09:35:56 compute-1 podman[12912]: 2025-10-09 09:35:56.871614298 +0000 UTC m=+1.242425793 container init 3deba343924d32000597864cdf8400e1a37662cac2bb278add92b15d21a42d33 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-node-exporter-compute-1, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct  9 09:35:56 compute-1 podman[12912]: 2025-10-09 09:35:56.875809206 +0000 UTC m=+1.246620680 container start 3deba343924d32000597864cdf8400e1a37662cac2bb278add92b15d21a42d33 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-node-exporter-compute-1, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct  9 09:35:56 compute-1 bash[12912]: 3deba343924d32000597864cdf8400e1a37662cac2bb278add92b15d21a42d33
Oct  9 09:35:56 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-node-exporter-compute-1[12975]: ts=2025-10-09T09:35:56.879Z caller=node_exporter.go:192 level=info msg="Starting node_exporter" version="(version=1.7.0, branch=HEAD, revision=7333465abf9efba81876303bb57e6fadb946041b)"
Oct  9 09:35:56 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-node-exporter-compute-1[12975]: ts=2025-10-09T09:35:56.879Z caller=node_exporter.go:193 level=info msg="Build context" build_context="(go=go1.21.4, platform=linux/amd64, user=root@35918982f6d8, date=20231112-23:53:35, tags=netgo osusergo static_build)"
Oct  9 09:35:56 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-node-exporter-compute-1[12975]: ts=2025-10-09T09:35:56.880Z caller=diskstats_common.go:111 level=info collector=diskstats msg="Parsed flag --collector.diskstats.device-exclude" flag=^(ram|loop|fd|(h|s|v|xv)d[a-z]|nvme\d+n\d+p)\d+$
Oct  9 09:35:56 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-node-exporter-compute-1[12975]: ts=2025-10-09T09:35:56.880Z caller=diskstats_linux.go:265 level=error collector=diskstats msg="Failed to open directory, disabling udev device properties" path=/run/udev/data
Oct  9 09:35:56 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-node-exporter-compute-1[12975]: ts=2025-10-09T09:35:56.880Z caller=filesystem_common.go:111 level=info collector=filesystem msg="Parsed flag --collector.filesystem.mount-points-exclude" flag=^/(dev|proc|run/credentials/.+|sys|var/lib/docker/.+|var/lib/containers/storage/.+)($|/)
Oct  9 09:35:56 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-node-exporter-compute-1[12975]: ts=2025-10-09T09:35:56.880Z caller=filesystem_common.go:113 level=info collector=filesystem msg="Parsed flag --collector.filesystem.fs-types-exclude" flag=^(autofs|binfmt_misc|bpf|cgroup2?|configfs|debugfs|devpts|devtmpfs|fusectl|hugetlbfs|iso9660|mqueue|nsfs|overlay|proc|procfs|pstore|rpc_pipefs|securityfs|selinuxfs|squashfs|sysfs|tracefs)$
Oct  9 09:35:56 compute-1 systemd[1]: Started Ceph node-exporter.compute-1 for 286f8bf0-da72-5823-9a4e-ac4457d9e609.
Oct  9 09:35:56 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-node-exporter-compute-1[12975]: ts=2025-10-09T09:35:56.883Z caller=node_exporter.go:110 level=info msg="Enabled collectors"
Oct  9 09:35:56 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-node-exporter-compute-1[12975]: ts=2025-10-09T09:35:56.883Z caller=node_exporter.go:117 level=info collector=arp
Oct  9 09:35:56 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-node-exporter-compute-1[12975]: ts=2025-10-09T09:35:56.883Z caller=node_exporter.go:117 level=info collector=bcache
Oct  9 09:35:56 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-node-exporter-compute-1[12975]: ts=2025-10-09T09:35:56.883Z caller=node_exporter.go:117 level=info collector=bonding
Oct  9 09:35:56 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-node-exporter-compute-1[12975]: ts=2025-10-09T09:35:56.883Z caller=node_exporter.go:117 level=info collector=btrfs
Oct  9 09:35:56 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-node-exporter-compute-1[12975]: ts=2025-10-09T09:35:56.883Z caller=node_exporter.go:117 level=info collector=conntrack
Oct  9 09:35:56 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-node-exporter-compute-1[12975]: ts=2025-10-09T09:35:56.883Z caller=node_exporter.go:117 level=info collector=cpu
Oct  9 09:35:56 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-node-exporter-compute-1[12975]: ts=2025-10-09T09:35:56.883Z caller=node_exporter.go:117 level=info collector=cpufreq
Oct  9 09:35:56 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-node-exporter-compute-1[12975]: ts=2025-10-09T09:35:56.883Z caller=node_exporter.go:117 level=info collector=diskstats
Oct  9 09:35:56 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-node-exporter-compute-1[12975]: ts=2025-10-09T09:35:56.883Z caller=node_exporter.go:117 level=info collector=dmi
Oct  9 09:35:56 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-node-exporter-compute-1[12975]: ts=2025-10-09T09:35:56.883Z caller=node_exporter.go:117 level=info collector=edac
Oct  9 09:35:56 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-node-exporter-compute-1[12975]: ts=2025-10-09T09:35:56.883Z caller=node_exporter.go:117 level=info collector=entropy
Oct  9 09:35:56 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-node-exporter-compute-1[12975]: ts=2025-10-09T09:35:56.883Z caller=node_exporter.go:117 level=info collector=fibrechannel
Oct  9 09:35:56 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-node-exporter-compute-1[12975]: ts=2025-10-09T09:35:56.883Z caller=node_exporter.go:117 level=info collector=filefd
Oct  9 09:35:56 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-node-exporter-compute-1[12975]: ts=2025-10-09T09:35:56.883Z caller=node_exporter.go:117 level=info collector=filesystem
Oct  9 09:35:56 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-node-exporter-compute-1[12975]: ts=2025-10-09T09:35:56.883Z caller=node_exporter.go:117 level=info collector=hwmon
Oct  9 09:35:56 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-node-exporter-compute-1[12975]: ts=2025-10-09T09:35:56.883Z caller=node_exporter.go:117 level=info collector=infiniband
Oct  9 09:35:56 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-node-exporter-compute-1[12975]: ts=2025-10-09T09:35:56.883Z caller=node_exporter.go:117 level=info collector=ipvs
Oct  9 09:35:56 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-node-exporter-compute-1[12975]: ts=2025-10-09T09:35:56.883Z caller=node_exporter.go:117 level=info collector=loadavg
Oct  9 09:35:56 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-node-exporter-compute-1[12975]: ts=2025-10-09T09:35:56.884Z caller=node_exporter.go:117 level=info collector=mdadm
Oct  9 09:35:56 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-node-exporter-compute-1[12975]: ts=2025-10-09T09:35:56.884Z caller=node_exporter.go:117 level=info collector=meminfo
Oct  9 09:35:56 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-node-exporter-compute-1[12975]: ts=2025-10-09T09:35:56.884Z caller=node_exporter.go:117 level=info collector=netclass
Oct  9 09:35:56 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-node-exporter-compute-1[12975]: ts=2025-10-09T09:35:56.884Z caller=node_exporter.go:117 level=info collector=netdev
Oct  9 09:35:56 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-node-exporter-compute-1[12975]: ts=2025-10-09T09:35:56.884Z caller=node_exporter.go:117 level=info collector=netstat
Oct  9 09:35:56 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-node-exporter-compute-1[12975]: ts=2025-10-09T09:35:56.884Z caller=node_exporter.go:117 level=info collector=nfs
Oct  9 09:35:56 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-node-exporter-compute-1[12975]: ts=2025-10-09T09:35:56.884Z caller=node_exporter.go:117 level=info collector=nfsd
Oct  9 09:35:56 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-node-exporter-compute-1[12975]: ts=2025-10-09T09:35:56.884Z caller=node_exporter.go:117 level=info collector=nvme
Oct  9 09:35:56 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-node-exporter-compute-1[12975]: ts=2025-10-09T09:35:56.884Z caller=node_exporter.go:117 level=info collector=os
Oct  9 09:35:56 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-node-exporter-compute-1[12975]: ts=2025-10-09T09:35:56.884Z caller=node_exporter.go:117 level=info collector=powersupplyclass
Oct  9 09:35:56 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-node-exporter-compute-1[12975]: ts=2025-10-09T09:35:56.884Z caller=node_exporter.go:117 level=info collector=pressure
Oct  9 09:35:56 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-node-exporter-compute-1[12975]: ts=2025-10-09T09:35:56.884Z caller=node_exporter.go:117 level=info collector=rapl
Oct  9 09:35:56 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-node-exporter-compute-1[12975]: ts=2025-10-09T09:35:56.884Z caller=node_exporter.go:117 level=info collector=schedstat
Oct  9 09:35:56 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-node-exporter-compute-1[12975]: ts=2025-10-09T09:35:56.884Z caller=node_exporter.go:117 level=info collector=selinux
Oct  9 09:35:56 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-node-exporter-compute-1[12975]: ts=2025-10-09T09:35:56.884Z caller=node_exporter.go:117 level=info collector=sockstat
Oct  9 09:35:56 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-node-exporter-compute-1[12975]: ts=2025-10-09T09:35:56.884Z caller=node_exporter.go:117 level=info collector=softnet
Oct  9 09:35:56 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-node-exporter-compute-1[12975]: ts=2025-10-09T09:35:56.884Z caller=node_exporter.go:117 level=info collector=stat
Oct  9 09:35:56 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-node-exporter-compute-1[12975]: ts=2025-10-09T09:35:56.884Z caller=node_exporter.go:117 level=info collector=tapestats
Oct  9 09:35:56 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-node-exporter-compute-1[12975]: ts=2025-10-09T09:35:56.884Z caller=node_exporter.go:117 level=info collector=textfile
Oct  9 09:35:56 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-node-exporter-compute-1[12975]: ts=2025-10-09T09:35:56.884Z caller=node_exporter.go:117 level=info collector=thermal_zone
Oct  9 09:35:56 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-node-exporter-compute-1[12975]: ts=2025-10-09T09:35:56.884Z caller=node_exporter.go:117 level=info collector=time
Oct  9 09:35:56 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-node-exporter-compute-1[12975]: ts=2025-10-09T09:35:56.884Z caller=node_exporter.go:117 level=info collector=udp_queues
Oct  9 09:35:56 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-node-exporter-compute-1[12975]: ts=2025-10-09T09:35:56.884Z caller=node_exporter.go:117 level=info collector=uname
Oct  9 09:35:56 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-node-exporter-compute-1[12975]: ts=2025-10-09T09:35:56.885Z caller=node_exporter.go:117 level=info collector=vmstat
Oct  9 09:35:56 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-node-exporter-compute-1[12975]: ts=2025-10-09T09:35:56.885Z caller=node_exporter.go:117 level=info collector=xfs
Oct  9 09:35:56 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-node-exporter-compute-1[12975]: ts=2025-10-09T09:35:56.885Z caller=node_exporter.go:117 level=info collector=zfs
Oct  9 09:35:56 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-node-exporter-compute-1[12975]: ts=2025-10-09T09:35:56.885Z caller=tls_config.go:274 level=info msg="Listening on" address=[::]:9100
Oct  9 09:35:56 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-node-exporter-compute-1[12975]: ts=2025-10-09T09:35:56.885Z caller=tls_config.go:277 level=info msg="TLS is disabled." http2=false address=[::]:9100
Oct  9 09:35:57 compute-1 ceph-mon[9795]: from='client.? 192.168.122.100:0/1480014278' entity='client.admin' cmd=[{"prefix": "auth import"}]: dispatch
Oct  9 09:35:57 compute-1 ceph-mon[9795]: from='client.? 192.168.122.100:0/1480014278' entity='client.admin' cmd='[{"prefix": "auth import"}]': finished
Oct  9 09:35:57 compute-1 ceph-mon[9795]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:35:57 compute-1 ceph-mon[9795]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:35:57 compute-1 ceph-mon[9795]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:35:58 compute-1 ceph-mon[9795]: Deploying daemon node-exporter.compute-2 on compute-2
Oct  9 09:35:58 compute-1 ceph-mon[9795]: Health check cleared: POOL_APP_NOT_ENABLED (was: 1 pool(s) do not have an application enabled)
Oct  9 09:36:00 compute-1 ceph-mon[9795]: from='client.? 192.168.122.100:0/1429686175' entity='client.admin' cmd=[{"prefix": "auth get", "entity": "client.openstack"}]: dispatch
Oct  9 09:36:00 compute-1 ceph-mon[9795]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:36:00 compute-1 ceph-mon[9795]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:36:00 compute-1 ceph-mon[9795]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:36:00 compute-1 ceph-mon[9795]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:36:00 compute-1 ceph-mon[9795]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  9 09:36:00 compute-1 ceph-mon[9795]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  9 09:36:00 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e31 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  9 09:36:01 compute-1 ceph-mon[9795]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:36:01 compute-1 ceph-mon[9795]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get", "entity": "osd.2"}]: dispatch
Oct  9 09:36:01 compute-1 ceph-mon[9795]: Deploying daemon osd.2 on compute-2
Oct  9 09:36:03 compute-1 ceph-mon[9795]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:36:03 compute-1 ceph-mon[9795]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:36:04 compute-1 ceph-mon[9795]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:36:04 compute-1 ceph-mon[9795]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:36:05 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e31 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  9 09:36:06 compute-1 ceph-mon[9795]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:36:06 compute-1 ceph-mon[9795]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:36:06 compute-1 ceph-mon[9795]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-2.mbbcec", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Oct  9 09:36:06 compute-1 ceph-mon[9795]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-2.mbbcec", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished
Oct  9 09:36:06 compute-1 ceph-mon[9795]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:36:06 compute-1 ceph-mon[9795]: Deploying daemon rgw.rgw.compute-2.mbbcec on compute-2
Oct  9 09:36:06 compute-1 ceph-mon[9795]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:36:06 compute-1 podman[13068]: 2025-10-09 09:36:06.862100951 +0000 UTC m=+0.025105341 container create 875b63208c9c80402184ea289f85cf8a8335d3be517df83a382d09e82e9a0ad4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=intelligent_lovelace, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  9 09:36:06 compute-1 systemd[1]: Started libpod-conmon-875b63208c9c80402184ea289f85cf8a8335d3be517df83a382d09e82e9a0ad4.scope.
Oct  9 09:36:06 compute-1 systemd[1]: Started libcrun container.
Oct  9 09:36:06 compute-1 podman[13068]: 2025-10-09 09:36:06.911725806 +0000 UTC m=+0.074730217 container init 875b63208c9c80402184ea289f85cf8a8335d3be517df83a382d09e82e9a0ad4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=intelligent_lovelace, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  9 09:36:06 compute-1 podman[13068]: 2025-10-09 09:36:06.915940502 +0000 UTC m=+0.078944893 container start 875b63208c9c80402184ea289f85cf8a8335d3be517df83a382d09e82e9a0ad4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=intelligent_lovelace, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct  9 09:36:06 compute-1 podman[13068]: 2025-10-09 09:36:06.916899651 +0000 UTC m=+0.079904041 container attach 875b63208c9c80402184ea289f85cf8a8335d3be517df83a382d09e82e9a0ad4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=intelligent_lovelace, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Oct  9 09:36:06 compute-1 intelligent_lovelace[13082]: 167 167
Oct  9 09:36:06 compute-1 systemd[1]: libpod-875b63208c9c80402184ea289f85cf8a8335d3be517df83a382d09e82e9a0ad4.scope: Deactivated successfully.
Oct  9 09:36:06 compute-1 podman[13068]: 2025-10-09 09:36:06.919033053 +0000 UTC m=+0.082037463 container died 875b63208c9c80402184ea289f85cf8a8335d3be517df83a382d09e82e9a0ad4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=intelligent_lovelace, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Oct  9 09:36:06 compute-1 systemd[1]: var-lib-containers-storage-overlay-ffadcff2f4b7c8d7a3ef7ca68e7b70f0e98ca5e82e1b095915efcde60d7d0358-merged.mount: Deactivated successfully.
Oct  9 09:36:06 compute-1 podman[13068]: 2025-10-09 09:36:06.944275622 +0000 UTC m=+0.107280011 container remove 875b63208c9c80402184ea289f85cf8a8335d3be517df83a382d09e82e9a0ad4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=intelligent_lovelace, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct  9 09:36:06 compute-1 podman[13068]: 2025-10-09 09:36:06.85158725 +0000 UTC m=+0.014591660 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  9 09:36:06 compute-1 systemd[1]: libpod-conmon-875b63208c9c80402184ea289f85cf8a8335d3be517df83a382d09e82e9a0ad4.scope: Deactivated successfully.
Oct  9 09:36:06 compute-1 systemd[1]: Reloading.
Oct  9 09:36:07 compute-1 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  9 09:36:07 compute-1 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  9 09:36:07 compute-1 systemd[1]: Reloading.
Oct  9 09:36:07 compute-1 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  9 09:36:07 compute-1 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  9 09:36:07 compute-1 systemd[1]: Starting Ceph rgw.rgw.compute-1.fxnvnn for 286f8bf0-da72-5823-9a4e-ac4457d9e609...
Oct  9 09:36:07 compute-1 ceph-mon[9795]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0)
Oct  9 09:36:07 compute-1 ceph-mon[9795]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2719329378' entity='client.admin' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Oct  9 09:36:07 compute-1 ceph-mon[9795]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:36:07 compute-1 ceph-mon[9795]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:36:07 compute-1 ceph-mon[9795]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:36:07 compute-1 ceph-mon[9795]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-1.fxnvnn", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Oct  9 09:36:07 compute-1 ceph-mon[9795]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-1.fxnvnn", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished
Oct  9 09:36:07 compute-1 ceph-mon[9795]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:36:07 compute-1 ceph-mon[9795]: Deploying daemon rgw.rgw.compute-1.fxnvnn on compute-1
Oct  9 09:36:07 compute-1 ceph-mon[9795]: from='osd.2 [v2:192.168.122.102:6800/4056276867,v1:192.168.122.102:6801/4056276867]' entity='osd.2' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]: dispatch
Oct  9 09:36:07 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e32 e32: 3 total, 2 up, 3 in
Oct  9 09:36:07 compute-1 podman[13215]: 2025-10-09 09:36:07.530753894 +0000 UTC m=+0.028193684 container create 43272f5fbc7b06cfa3a5e91acf2ff34586a7679d5fd0d0b2fed02ec9e020a8bd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-rgw-rgw-compute-1-fxnvnn, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Oct  9 09:36:07 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b892584c111a9eec031e8afd71142ac431487b8bc4c0aa7195955bee310af347/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  9 09:36:07 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b892584c111a9eec031e8afd71142ac431487b8bc4c0aa7195955bee310af347/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  9 09:36:07 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b892584c111a9eec031e8afd71142ac431487b8bc4c0aa7195955bee310af347/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  9 09:36:07 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b892584c111a9eec031e8afd71142ac431487b8bc4c0aa7195955bee310af347/merged/var/lib/ceph/radosgw/ceph-rgw.rgw.compute-1.fxnvnn supports timestamps until 2038 (0x7fffffff)
Oct  9 09:36:07 compute-1 podman[13215]: 2025-10-09 09:36:07.575288172 +0000 UTC m=+0.072727952 container init 43272f5fbc7b06cfa3a5e91acf2ff34586a7679d5fd0d0b2fed02ec9e020a8bd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-rgw-rgw-compute-1-fxnvnn, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Oct  9 09:36:07 compute-1 podman[13215]: 2025-10-09 09:36:07.57935029 +0000 UTC m=+0.076790071 container start 43272f5fbc7b06cfa3a5e91acf2ff34586a7679d5fd0d0b2fed02ec9e020a8bd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-rgw-rgw-compute-1-fxnvnn, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Oct  9 09:36:07 compute-1 bash[13215]: 43272f5fbc7b06cfa3a5e91acf2ff34586a7679d5fd0d0b2fed02ec9e020a8bd
Oct  9 09:36:07 compute-1 podman[13215]: 2025-10-09 09:36:07.51791501 +0000 UTC m=+0.015354810 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  9 09:36:07 compute-1 systemd[1]: Started Ceph rgw.rgw.compute-1.fxnvnn for 286f8bf0-da72-5823-9a4e-ac4457d9e609.
Oct  9 09:36:07 compute-1 radosgw[13231]: deferred set uid:gid to 167:167 (ceph:ceph)
Oct  9 09:36:07 compute-1 radosgw[13231]: ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable), process radosgw, pid 2
Oct  9 09:36:07 compute-1 radosgw[13231]: framework: beast
Oct  9 09:36:07 compute-1 radosgw[13231]: framework conf key: endpoint, val: 192.168.122.101:8082
Oct  9 09:36:07 compute-1 radosgw[13231]: init_numa not setting numa affinity
Oct  9 09:36:08 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e33 e33: 3 total, 2 up, 3 in
Oct  9 09:36:08 compute-1 ceph-mon[9795]: from='osd.2 [v2:192.168.122.102:6800/4056276867,v1:192.168.122.102:6801/4056276867]' entity='osd.2' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]': finished
Oct  9 09:36:08 compute-1 ceph-mon[9795]: from='osd.2 [v2:192.168.122.102:6800/4056276867,v1:192.168.122.102:6801/4056276867]' entity='osd.2' cmd=[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-2", "root=default"]}]: dispatch
Oct  9 09:36:08 compute-1 ceph-mon[9795]: from='client.? 192.168.122.102:0/573248088' entity='client.rgw.rgw.compute-2.mbbcec' cmd=[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]: dispatch
Oct  9 09:36:08 compute-1 ceph-mon[9795]: from='client.? ' entity='client.rgw.rgw.compute-2.mbbcec' cmd=[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]: dispatch
Oct  9 09:36:08 compute-1 ceph-mon[9795]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:36:08 compute-1 ceph-mon[9795]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:36:08 compute-1 ceph-mon[9795]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:36:08 compute-1 ceph-mon[9795]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.yciajn", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Oct  9 09:36:08 compute-1 ceph-mon[9795]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.yciajn", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished
Oct  9 09:36:08 compute-1 ceph-mon[9795]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:36:08 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 33 pg[2.1d( empty local-lis/les=16/17 n=0 ec=16/10 lis/c=16/16 les/c/f=17/17/0 sis=33 pruub=13.918492317s) [] r=-1 lpr=33 pi=[16,33)/1 crt=0'0 mlcod 0'0 active pruub 90.938423157s@ mbc={}] PeeringState::start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 09:36:08 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 33 pg[2.1c( empty local-lis/les=16/17 n=0 ec=16/10 lis/c=16/16 les/c/f=17/17/0 sis=33 pruub=13.918516159s) [] r=-1 lpr=33 pi=[16,33)/1 crt=0'0 mlcod 0'0 active pruub 90.938461304s@ mbc={}] PeeringState::start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 09:36:08 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 33 pg[2.1d( empty local-lis/les=16/17 n=0 ec=16/10 lis/c=16/16 les/c/f=17/17/0 sis=33 pruub=13.918492317s) [] r=-1 lpr=33 pi=[16,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 90.938423157s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 09:36:08 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 33 pg[2.1c( empty local-lis/les=16/17 n=0 ec=16/10 lis/c=16/16 les/c/f=17/17/0 sis=33 pruub=13.918516159s) [] r=-1 lpr=33 pi=[16,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 90.938461304s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 09:36:08 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 33 pg[2.5( empty local-lis/les=16/17 n=0 ec=16/10 lis/c=16/16 les/c/f=17/17/0 sis=33 pruub=13.922034264s) [] r=-1 lpr=33 pi=[16,33)/1 crt=0'0 mlcod 0'0 active pruub 90.942253113s@ mbc={}] PeeringState::start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 09:36:08 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 33 pg[2.5( empty local-lis/les=16/17 n=0 ec=16/10 lis/c=16/16 les/c/f=17/17/0 sis=33 pruub=13.922034264s) [] r=-1 lpr=33 pi=[16,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 90.942253113s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 09:36:08 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 33 pg[2.b( empty local-lis/les=16/17 n=0 ec=16/10 lis/c=16/16 les/c/f=17/17/0 sis=33 pruub=13.922046661s) [] r=-1 lpr=33 pi=[16,33)/1 crt=0'0 mlcod 0'0 active pruub 90.942306519s@ mbc={}] PeeringState::start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 09:36:08 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 33 pg[2.b( empty local-lis/les=16/17 n=0 ec=16/10 lis/c=16/16 les/c/f=17/17/0 sis=33 pruub=13.922046661s) [] r=-1 lpr=33 pi=[16,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 90.942306519s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 09:36:08 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 33 pg[2.f( empty local-lis/les=16/17 n=0 ec=16/10 lis/c=16/16 les/c/f=17/17/0 sis=33 pruub=13.922318459s) [] r=-1 lpr=33 pi=[16,33)/1 crt=0'0 mlcod 0'0 active pruub 90.942604065s@ mbc={}] PeeringState::start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 09:36:08 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 33 pg[2.f( empty local-lis/les=16/17 n=0 ec=16/10 lis/c=16/16 les/c/f=17/17/0 sis=33 pruub=13.922318459s) [] r=-1 lpr=33 pi=[16,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 90.942604065s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 09:36:08 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 33 pg[2.12( empty local-lis/les=16/17 n=0 ec=16/10 lis/c=16/16 les/c/f=17/17/0 sis=33 pruub=13.921983719s) [] r=-1 lpr=33 pi=[16,33)/1 crt=0'0 mlcod 0'0 active pruub 90.942306519s@ mbc={}] PeeringState::start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 09:36:08 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 33 pg[2.12( empty local-lis/les=16/17 n=0 ec=16/10 lis/c=16/16 les/c/f=17/17/0 sis=33 pruub=13.921983719s) [] r=-1 lpr=33 pi=[16,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 90.942306519s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 09:36:08 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 33 pg[2.18( empty local-lis/les=16/17 n=0 ec=16/10 lis/c=16/16 les/c/f=17/17/0 sis=33 pruub=13.921921730s) [] r=-1 lpr=33 pi=[16,33)/1 crt=0'0 mlcod 0'0 active pruub 90.942375183s@ mbc={}] PeeringState::start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 09:36:08 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 33 pg[2.18( empty local-lis/les=16/17 n=0 ec=16/10 lis/c=16/16 les/c/f=17/17/0 sis=33 pruub=13.921921730s) [] r=-1 lpr=33 pi=[16,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 90.942375183s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 09:36:09 compute-1 ceph-mon[9795]: Deploying daemon rgw.rgw.compute-0.yciajn on compute-0
Oct  9 09:36:09 compute-1 ceph-mon[9795]: from='osd.2 [v2:192.168.122.102:6800/4056276867,v1:192.168.122.102:6801/4056276867]' entity='osd.2' cmd='[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-2", "root=default"]}]': finished
Oct  9 09:36:09 compute-1 ceph-mon[9795]: from='client.? ' entity='client.rgw.rgw.compute-2.mbbcec' cmd='[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]': finished
Oct  9 09:36:09 compute-1 ceph-mon[9795]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:36:09 compute-1 ceph-mon[9795]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:36:09 compute-1 ceph-mon[9795]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:36:09 compute-1 ceph-mon[9795]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:36:09 compute-1 ceph-mon[9795]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:36:09 compute-1 ceph-mon[9795]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-2.zfggbi", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]: dispatch
Oct  9 09:36:09 compute-1 ceph-mon[9795]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-2.zfggbi", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]': finished
Oct  9 09:36:09 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e34 e34: 3 total, 3 up, 3 in
Oct  9 09:36:09 compute-1 ceph-mon[9795]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"} v 0)
Oct  9 09:36:09 compute-1 ceph-mon[9795]: log_channel(audit) log [INF] : from='client.? 192.168.122.101:0/2454302699' entity='client.rgw.rgw.compute-1.fxnvnn' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch
Oct  9 09:36:10 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 34 pg[10.0( empty local-lis/les=0/0 n=0 ec=34/34 lis/c=0/0 les/c/f=0/0/0 sis=34) [0] r=0 lpr=34 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:36:10 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 34 pg[2.1c( empty local-lis/les=16/17 n=0 ec=16/10 lis/c=16/16 les/c/f=17/17/0 sis=34 pruub=12.053833961s) [2] r=-1 lpr=34 pi=[16,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 90.938461304s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 09:36:10 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 34 pg[2.1d( empty local-lis/les=16/17 n=0 ec=16/10 lis/c=16/16 les/c/f=17/17/0 sis=34 pruub=12.053790092s) [2] r=-1 lpr=34 pi=[16,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 90.938423157s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 09:36:10 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 34 pg[2.1c( empty local-lis/les=16/17 n=0 ec=16/10 lis/c=16/16 les/c/f=17/17/0 sis=34 pruub=12.053812981s) [2] r=-1 lpr=34 pi=[16,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 90.938461304s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 09:36:10 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 34 pg[2.1d( empty local-lis/les=16/17 n=0 ec=16/10 lis/c=16/16 les/c/f=17/17/0 sis=34 pruub=12.053765297s) [2] r=-1 lpr=34 pi=[16,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 90.938423157s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 09:36:10 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 34 pg[2.5( empty local-lis/les=16/17 n=0 ec=16/10 lis/c=16/16 les/c/f=17/17/0 sis=34 pruub=12.057502747s) [2] r=-1 lpr=34 pi=[16,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 90.942253113s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 09:36:10 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 34 pg[2.5( empty local-lis/les=16/17 n=0 ec=16/10 lis/c=16/16 les/c/f=17/17/0 sis=34 pruub=12.057479858s) [2] r=-1 lpr=34 pi=[16,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 90.942253113s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 09:36:10 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 34 pg[2.b( empty local-lis/les=16/17 n=0 ec=16/10 lis/c=16/16 les/c/f=17/17/0 sis=34 pruub=12.057463646s) [2] r=-1 lpr=34 pi=[16,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 90.942306519s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 09:36:10 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 34 pg[2.f( empty local-lis/les=16/17 n=0 ec=16/10 lis/c=16/16 les/c/f=17/17/0 sis=34 pruub=12.057728767s) [2] r=-1 lpr=34 pi=[16,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 90.942604065s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 09:36:10 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 34 pg[2.b( empty local-lis/les=16/17 n=0 ec=16/10 lis/c=16/16 les/c/f=17/17/0 sis=34 pruub=12.057424545s) [2] r=-1 lpr=34 pi=[16,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 90.942306519s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 09:36:10 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 34 pg[2.f( empty local-lis/les=16/17 n=0 ec=16/10 lis/c=16/16 les/c/f=17/17/0 sis=34 pruub=12.057720184s) [2] r=-1 lpr=34 pi=[16,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 90.942604065s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 09:36:10 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 34 pg[2.12( empty local-lis/les=16/17 n=0 ec=16/10 lis/c=16/16 les/c/f=17/17/0 sis=34 pruub=12.057360649s) [2] r=-1 lpr=34 pi=[16,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 90.942306519s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 09:36:10 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 34 pg[2.12( empty local-lis/les=16/17 n=0 ec=16/10 lis/c=16/16 les/c/f=17/17/0 sis=34 pruub=12.057330132s) [2] r=-1 lpr=34 pi=[16,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 90.942306519s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 09:36:10 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 34 pg[2.18( empty local-lis/les=16/17 n=0 ec=16/10 lis/c=16/16 les/c/f=17/17/0 sis=34 pruub=12.057376862s) [2] r=-1 lpr=34 pi=[16,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 90.942375183s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 09:36:10 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 34 pg[2.18( empty local-lis/les=16/17 n=0 ec=16/10 lis/c=16/16 les/c/f=17/17/0 sis=34 pruub=12.057367325s) [2] r=-1 lpr=34 pi=[16,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 90.942375183s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 09:36:10 compute-1 ceph-mon[9795]: Saving service rgw.rgw spec with placement compute-0;compute-1;compute-2
Oct  9 09:36:10 compute-1 ceph-mon[9795]: Deploying daemon mds.cephfs.compute-2.zfggbi on compute-2
Oct  9 09:36:10 compute-1 ceph-mon[9795]: OSD bench result of 22080.768566 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.2. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Oct  9 09:36:10 compute-1 ceph-mon[9795]: osd.2 [v2:192.168.122.102:6800/4056276867,v1:192.168.122.102:6801/4056276867] boot
Oct  9 09:36:10 compute-1 ceph-mon[9795]: from='client.? 192.168.122.100:0/3877219415' entity='client.rgw.rgw.compute-0.yciajn' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch
Oct  9 09:36:10 compute-1 ceph-mon[9795]: from='client.? 192.168.122.101:0/2454302699' entity='client.rgw.rgw.compute-1.fxnvnn' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch
Oct  9 09:36:10 compute-1 ceph-mon[9795]: from='client.? 192.168.122.102:0/1928624186' entity='client.rgw.rgw.compute-2.mbbcec' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch
Oct  9 09:36:10 compute-1 ceph-mon[9795]: from='client.? ' entity='client.rgw.rgw.compute-1.fxnvnn' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch
Oct  9 09:36:10 compute-1 ceph-mon[9795]: from='client.? ' entity='client.rgw.rgw.compute-2.mbbcec' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch
Oct  9 09:36:10 compute-1 ceph-mon[9795]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:36:10 compute-1 ceph-mon[9795]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:36:10 compute-1 ceph-mon[9795]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:36:10 compute-1 ceph-mon[9795]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.wjwyle", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]: dispatch
Oct  9 09:36:10 compute-1 ceph-mon[9795]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.wjwyle", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]': finished
Oct  9 09:36:10 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e35 e35: 3 total, 3 up, 3 in
Oct  9 09:36:10 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 35 pg[10.0( empty local-lis/les=34/35 n=0 ec=34/34 lis/c=0/0 les/c/f=0/0/0 sis=34) [0] r=0 lpr=34 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:36:10 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).mds e3 new map
Oct  9 09:36:10 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).mds e3 print_map#012e3#012btime 2025-10-09T09:36:10:513915+0000#012enable_multiple, ever_enabled_multiple: 1,1#012default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}#012legacy client fscid: 1#012 #012Filesystem 'cephfs' (1)#012fs_name#011cephfs#012epoch#0112#012flags#01112 joinable allow_snaps allow_multimds_snaps#012created#0112025-10-09T09:35:51.790428+0000#012modified#0112025-10-09T09:35:51.790428+0000#012tableserver#0110#012root#0110#012session_timeout#01160#012session_autoclose#011300#012max_file_size#0111099511627776#012max_xattr_size#01165536#012required_client_features#011{}#012last_failure#0110#012last_failure_osd_epoch#0110#012compat#011compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}#012max_mds#0111#012in#011#012up#011{}#012failed#011#012damaged#011#012stopped#011#012data_pools#011[7]#012metadata_pool#0116#012inline_data#011disabled#012balancer#011#012bal_rank_mask#011-1#012standby_count_wanted#0110#012qdb_cluster#011leader: 0 members: #012 #012 #012Standby daemons:#012 #012[mds.cephfs.compute-2.zfggbi{-1:14535} state up:standby seq 1 addr [v2:192.168.122.102:6804/1047568798,v1:192.168.122.102:6805/1047568798] compat {c=[1],r=[1],i=[1fff]}]
Oct  9 09:36:10 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).mds e4 new map
Oct  9 09:36:10 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).mds e4 print_map#012e4#012btime 2025-10-09T09:36:10:526987+0000#012enable_multiple, ever_enabled_multiple: 1,1#012default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}#012legacy client fscid: 1#012 #012Filesystem 'cephfs' (1)#012fs_name#011cephfs#012epoch#0114#012flags#01112 joinable allow_snaps allow_multimds_snaps#012created#0112025-10-09T09:35:51.790428+0000#012modified#0112025-10-09T09:36:10.526981+0000#012tableserver#0110#012root#0110#012session_timeout#01160#012session_autoclose#011300#012max_file_size#0111099511627776#012max_xattr_size#01165536#012required_client_features#011{}#012last_failure#0110#012last_failure_osd_epoch#0110#012compat#011compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}#012max_mds#0111#012in#0110#012up#011{0=14535}#012failed#011#012damaged#011#012stopped#011#012data_pools#011[7]#012metadata_pool#0116#012inline_data#011disabled#012balancer#011#012bal_rank_mask#011-1#012standby_count_wanted#0110#012qdb_cluster#011leader: 0 members: #012[mds.cephfs.compute-2.zfggbi{0:14535} state up:creating seq 1 addr [v2:192.168.122.102:6804/1047568798,v1:192.168.122.102:6805/1047568798] compat {c=[1],r=[1],i=[1fff]}]#012 #012 
Oct  9 09:36:10 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e35 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  9 09:36:11 compute-1 irqbalance[794]: Cannot change IRQ 44 affinity: Operation not permitted
Oct  9 09:36:11 compute-1 irqbalance[794]: IRQ 44 affinity is now unmanaged
Oct  9 09:36:11 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e36 e36: 3 total, 3 up, 3 in
Oct  9 09:36:11 compute-1 ceph-mon[9795]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"} v 0)
Oct  9 09:36:11 compute-1 ceph-mon[9795]: log_channel(audit) log [INF] : from='client.? 192.168.122.101:0/2454302699' entity='client.rgw.rgw.compute-1.fxnvnn' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Oct  9 09:36:11 compute-1 ceph-mon[9795]: Deploying daemon mds.cephfs.compute-0.wjwyle on compute-0
Oct  9 09:36:11 compute-1 ceph-mon[9795]: from='client.? 192.168.122.100:0/3877219415' entity='client.rgw.rgw.compute-0.yciajn' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]': finished
Oct  9 09:36:11 compute-1 ceph-mon[9795]: from='client.? ' entity='client.rgw.rgw.compute-1.fxnvnn' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]': finished
Oct  9 09:36:11 compute-1 ceph-mon[9795]: from='client.? ' entity='client.rgw.rgw.compute-2.mbbcec' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]': finished
Oct  9 09:36:11 compute-1 ceph-mon[9795]: daemon mds.cephfs.compute-2.zfggbi assigned to filesystem cephfs as rank 0 (now has 1 ranks)
Oct  9 09:36:11 compute-1 ceph-mon[9795]: Health check cleared: MDS_ALL_DOWN (was: 1 filesystem is offline)
Oct  9 09:36:11 compute-1 ceph-mon[9795]: Health check cleared: MDS_UP_LESS_THAN_MAX (was: 1 filesystem is online with fewer MDS than max_mds)
Oct  9 09:36:11 compute-1 ceph-mon[9795]: Cluster is now healthy
Oct  9 09:36:11 compute-1 ceph-mon[9795]: daemon mds.cephfs.compute-2.zfggbi is now active in filesystem cephfs as rank 0
Oct  9 09:36:11 compute-1 ceph-mon[9795]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:36:11 compute-1 ceph-mon[9795]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:36:11 compute-1 ceph-mon[9795]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:36:11 compute-1 ceph-mon[9795]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:36:11 compute-1 ceph-mon[9795]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-1.svghvn", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]: dispatch
Oct  9 09:36:11 compute-1 ceph-mon[9795]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-1.svghvn", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]': finished
Oct  9 09:36:11 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).mds e5 new map
Oct  9 09:36:11 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).mds e5 print_map#012e5#012btime 2025-10-09T09:36:11:555720+0000#012enable_multiple, ever_enabled_multiple: 1,1#012default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}#012legacy client fscid: 1#012 #012Filesystem 'cephfs' (1)#012fs_name#011cephfs#012epoch#0115#012flags#01112 joinable allow_snaps allow_multimds_snaps#012created#0112025-10-09T09:35:51.790428+0000#012modified#0112025-10-09T09:36:11.555718+0000#012tableserver#0110#012root#0110#012session_timeout#01160#012session_autoclose#011300#012max_file_size#0111099511627776#012max_xattr_size#01165536#012required_client_features#011{}#012last_failure#0110#012last_failure_osd_epoch#0110#012compat#011compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}#012max_mds#0111#012in#0110#012up#011{0=14535}#012failed#011#012damaged#011#012stopped#011#012data_pools#011[7]#012metadata_pool#0116#012inline_data#011disabled#012balancer#011#012bal_rank_mask#011-1#012standby_count_wanted#0110#012qdb_cluster#011leader: 14535 members: 14535#012[mds.cephfs.compute-2.zfggbi{0:14535} state up:active seq 2 addr [v2:192.168.122.102:6804/1047568798,v1:192.168.122.102:6805/1047568798] compat {c=[1],r=[1],i=[1fff]}]#012 #012 #012Standby daemons:#012 #012[mds.cephfs.compute-0.wjwyle{-1:14541} state up:standby seq 1 addr [v2:192.168.122.100:6806/2471701871,v1:192.168.122.100:6807/2471701871] compat {c=[1],r=[1],i=[1fff]}]
Oct  9 09:36:11 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).mds e6 new map
Oct  9 09:36:11 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).mds e6 print_map#012e6#012btime 2025-10-09T09:36:11:561187+0000#012enable_multiple, ever_enabled_multiple: 1,1#012default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}#012legacy client fscid: 1#012 #012Filesystem 'cephfs' (1)#012fs_name#011cephfs#012epoch#0115#012flags#01112 joinable allow_snaps allow_multimds_snaps#012created#0112025-10-09T09:35:51.790428+0000#012modified#0112025-10-09T09:36:11.555718+0000#012tableserver#0110#012root#0110#012session_timeout#01160#012session_autoclose#011300#012max_file_size#0111099511627776#012max_xattr_size#01165536#012required_client_features#011{}#012last_failure#0110#012last_failure_osd_epoch#0110#012compat#011compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}#012max_mds#0111#012in#0110#012up#011{0=14535}#012failed#011#012damaged#011#012stopped#011#012data_pools#011[7]#012metadata_pool#0116#012inline_data#011disabled#012balancer#011#012bal_rank_mask#011-1#012standby_count_wanted#0111#012qdb_cluster#011leader: 14535 members: 14535#012[mds.cephfs.compute-2.zfggbi{0:14535} state up:active seq 2 addr [v2:192.168.122.102:6804/1047568798,v1:192.168.122.102:6805/1047568798] compat {c=[1],r=[1],i=[1fff]}]#012 #012 #012Standby daemons:#012 #012[mds.cephfs.compute-0.wjwyle{-1:14541} state up:standby seq 1 addr [v2:192.168.122.100:6806/2471701871,v1:192.168.122.100:6807/2471701871] compat {c=[1],r=[1],i=[1fff]}]
Oct  9 09:36:11 compute-1 podman[13901]: 2025-10-09 09:36:11.567379229 +0000 UTC m=+0.028399231 container create d229c1b31e84b143e13c7a0d26695527201c228ba5c301424c77646b028bb0f3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wonderful_lehmann, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Oct  9 09:36:11 compute-1 systemd[1]: Started libpod-conmon-d229c1b31e84b143e13c7a0d26695527201c228ba5c301424c77646b028bb0f3.scope.
Oct  9 09:36:11 compute-1 systemd[1]: Started libcrun container.
Oct  9 09:36:11 compute-1 podman[13901]: 2025-10-09 09:36:11.620917863 +0000 UTC m=+0.081937875 container init d229c1b31e84b143e13c7a0d26695527201c228ba5c301424c77646b028bb0f3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wonderful_lehmann, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  9 09:36:11 compute-1 podman[13901]: 2025-10-09 09:36:11.625268544 +0000 UTC m=+0.086288547 container start d229c1b31e84b143e13c7a0d26695527201c228ba5c301424c77646b028bb0f3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wonderful_lehmann, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS)
Oct  9 09:36:11 compute-1 podman[13901]: 2025-10-09 09:36:11.626369661 +0000 UTC m=+0.087389663 container attach d229c1b31e84b143e13c7a0d26695527201c228ba5c301424c77646b028bb0f3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wonderful_lehmann, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1)
Oct  9 09:36:11 compute-1 wonderful_lehmann[13914]: 167 167
Oct  9 09:36:11 compute-1 systemd[1]: libpod-d229c1b31e84b143e13c7a0d26695527201c228ba5c301424c77646b028bb0f3.scope: Deactivated successfully.
Oct  9 09:36:11 compute-1 conmon[13914]: conmon d229c1b31e84b143e13c <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-d229c1b31e84b143e13c7a0d26695527201c228ba5c301424c77646b028bb0f3.scope/container/memory.events
Oct  9 09:36:11 compute-1 podman[13901]: 2025-10-09 09:36:11.555854391 +0000 UTC m=+0.016874393 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  9 09:36:11 compute-1 podman[13919]: 2025-10-09 09:36:11.662344493 +0000 UTC m=+0.018315401 container died d229c1b31e84b143e13c7a0d26695527201c228ba5c301424c77646b028bb0f3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wonderful_lehmann, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  9 09:36:11 compute-1 systemd[1]: var-lib-containers-storage-overlay-11a49a17acf61386b0f59e23076f64b8c93169d2550219a6a7ce77af364e652e-merged.mount: Deactivated successfully.
Oct  9 09:36:11 compute-1 podman[13919]: 2025-10-09 09:36:11.679400517 +0000 UTC m=+0.035371416 container remove d229c1b31e84b143e13c7a0d26695527201c228ba5c301424c77646b028bb0f3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wonderful_lehmann, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  9 09:36:11 compute-1 systemd[1]: libpod-conmon-d229c1b31e84b143e13c7a0d26695527201c228ba5c301424c77646b028bb0f3.scope: Deactivated successfully.
Oct  9 09:36:11 compute-1 systemd[1]: Reloading.
Oct  9 09:36:11 compute-1 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  9 09:36:11 compute-1 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  9 09:36:11 compute-1 systemd[1]: Reloading.
Oct  9 09:36:11 compute-1 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  9 09:36:11 compute-1 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  9 09:36:12 compute-1 systemd[1]: Starting Ceph mds.cephfs.compute-1.svghvn for 286f8bf0-da72-5823-9a4e-ac4457d9e609...
Oct  9 09:36:12 compute-1 podman[14047]: 2025-10-09 09:36:12.25074578 +0000 UTC m=+0.025355843 container create fb756edb7283d84213bd667f395c4b27ab3945bcd18c5610b45b94e654cf545d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mds-cephfs-compute-1-svghvn, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  9 09:36:12 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e3ef9511872f12340b6a4a1dbff3bb394d4998224e35c537fe7724b2b233ebd0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  9 09:36:12 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e3ef9511872f12340b6a4a1dbff3bb394d4998224e35c537fe7724b2b233ebd0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  9 09:36:12 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e3ef9511872f12340b6a4a1dbff3bb394d4998224e35c537fe7724b2b233ebd0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  9 09:36:12 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e3ef9511872f12340b6a4a1dbff3bb394d4998224e35c537fe7724b2b233ebd0/merged/var/lib/ceph/mds/ceph-cephfs.compute-1.svghvn supports timestamps until 2038 (0x7fffffff)
Oct  9 09:36:12 compute-1 podman[14047]: 2025-10-09 09:36:12.285538643 +0000 UTC m=+0.060148716 container init fb756edb7283d84213bd667f395c4b27ab3945bcd18c5610b45b94e654cf545d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mds-cephfs-compute-1-svghvn, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=squid)
Oct  9 09:36:12 compute-1 podman[14047]: 2025-10-09 09:36:12.291623585 +0000 UTC m=+0.066233638 container start fb756edb7283d84213bd667f395c4b27ab3945bcd18c5610b45b94e654cf545d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mds-cephfs-compute-1-svghvn, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  9 09:36:12 compute-1 bash[14047]: fb756edb7283d84213bd667f395c4b27ab3945bcd18c5610b45b94e654cf545d
Oct  9 09:36:12 compute-1 podman[14047]: 2025-10-09 09:36:12.239958554 +0000 UTC m=+0.014568627 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  9 09:36:12 compute-1 systemd[1]: Started Ceph mds.cephfs.compute-1.svghvn for 286f8bf0-da72-5823-9a4e-ac4457d9e609.
Oct  9 09:36:12 compute-1 ceph-mds[14063]: set uid:gid to 167:167 (ceph:ceph)
Oct  9 09:36:12 compute-1 ceph-mds[14063]: ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable), process ceph-mds, pid 2
Oct  9 09:36:12 compute-1 ceph-mds[14063]: main not setting numa affinity
Oct  9 09:36:12 compute-1 ceph-mds[14063]: pidfile_write: ignore empty --pid-file
Oct  9 09:36:12 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mds-cephfs-compute-1-svghvn[14059]: starting mds.cephfs.compute-1.svghvn at 
Oct  9 09:36:12 compute-1 ceph-mds[14063]: mds.cephfs.compute-1.svghvn Updating MDS map to version 6 from mon.2
Oct  9 09:36:12 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e37 e37: 3 total, 3 up, 3 in
Oct  9 09:36:12 compute-1 ceph-mon[9795]: Deploying daemon mds.cephfs.compute-1.svghvn on compute-1
Oct  9 09:36:12 compute-1 ceph-mon[9795]: from='client.? 192.168.122.101:0/2454302699' entity='client.rgw.rgw.compute-1.fxnvnn' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Oct  9 09:36:12 compute-1 ceph-mon[9795]: from='client.? 192.168.122.100:0/3877219415' entity='client.rgw.rgw.compute-0.yciajn' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Oct  9 09:36:12 compute-1 ceph-mon[9795]: from='client.? ' entity='client.rgw.rgw.compute-1.fxnvnn' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Oct  9 09:36:12 compute-1 ceph-mon[9795]: from='client.? 192.168.122.102:0/1928624186' entity='client.rgw.rgw.compute-2.mbbcec' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Oct  9 09:36:12 compute-1 ceph-mon[9795]: from='client.? ' entity='client.rgw.rgw.compute-2.mbbcec' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Oct  9 09:36:12 compute-1 ceph-mon[9795]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:36:12 compute-1 ceph-mon[9795]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:36:12 compute-1 ceph-mon[9795]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:36:12 compute-1 ceph-mon[9795]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:36:12 compute-1 ceph-mon[9795]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:36:12 compute-1 ceph-mon[9795]: from='client.? 192.168.122.100:0/3877219415' entity='client.rgw.rgw.compute-0.yciajn' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]': finished
Oct  9 09:36:12 compute-1 ceph-mon[9795]: from='client.? ' entity='client.rgw.rgw.compute-1.fxnvnn' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]': finished
Oct  9 09:36:12 compute-1 ceph-mon[9795]: from='client.? ' entity='client.rgw.rgw.compute-2.mbbcec' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]': finished
Oct  9 09:36:12 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).mds e7 new map
Oct  9 09:36:12 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).mds e7 print_map#012e7#012btime 2025-10-09T09:36:12:564873+0000#012enable_multiple, ever_enabled_multiple: 1,1#012default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}#012legacy client fscid: 1#012 #012Filesystem 'cephfs' (1)#012fs_name#011cephfs#012epoch#0115#012flags#01112 joinable allow_snaps allow_multimds_snaps#012created#0112025-10-09T09:35:51.790428+0000#012modified#0112025-10-09T09:36:11.555718+0000#012tableserver#0110#012root#0110#012session_timeout#01160#012session_autoclose#011300#012max_file_size#0111099511627776#012max_xattr_size#01165536#012required_client_features#011{}#012last_failure#0110#012last_failure_osd_epoch#0110#012compat#011compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}#012max_mds#0111#012in#0110#012up#011{0=14535}#012failed#011#012damaged#011#012stopped#011#012data_pools#011[7]#012metadata_pool#0116#012inline_data#011disabled#012balancer#011#012bal_rank_mask#011-1#012standby_count_wanted#0111#012qdb_cluster#011leader: 14535 members: 14535#012[mds.cephfs.compute-2.zfggbi{0:14535} state up:active seq 2 addr [v2:192.168.122.102:6804/1047568798,v1:192.168.122.102:6805/1047568798] compat {c=[1],r=[1],i=[1fff]}]#012 #012 #012Standby daemons:#012 #012[mds.cephfs.compute-0.wjwyle{-1:14541} state up:standby seq 1 addr [v2:192.168.122.100:6806/2471701871,v1:192.168.122.100:6807/2471701871] compat {c=[1],r=[1],i=[1fff]}]#012[mds.cephfs.compute-1.svghvn{-1:24317} state up:standby seq 1 addr [v2:192.168.122.101:6804/3081136732,v1:192.168.122.101:6805/3081136732] compat {c=[1],r=[1],i=[1fff]}]
Oct  9 09:36:12 compute-1 ceph-mds[14063]: mds.cephfs.compute-1.svghvn Updating MDS map to version 7 from mon.2
Oct  9 09:36:12 compute-1 ceph-mds[14063]: mds.cephfs.compute-1.svghvn Monitors have assigned me to become a standby
Oct  9 09:36:13 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e38 e38: 3 total, 3 up, 3 in
Oct  9 09:36:13 compute-1 ceph-mon[9795]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"} v 0)
Oct  9 09:36:13 compute-1 ceph-mon[9795]: log_channel(audit) log [INF] : from='client.? 192.168.122.101:0/2454302699' entity='client.rgw.rgw.compute-1.fxnvnn' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Oct  9 09:36:13 compute-1 ceph-mon[9795]: Deploying daemon alertmanager.compute-0 on compute-0
Oct  9 09:36:13 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 38 pg[12.0( empty local-lis/les=0/0 n=0 ec=38/38 lis/c=0/0 les/c/f=0/0/0 sis=38) [0] r=0 lpr=38 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:36:14 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e39 e39: 3 total, 3 up, 3 in
Oct  9 09:36:14 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 39 pg[12.0( empty local-lis/les=38/39 n=0 ec=38/38 lis/c=0/0 les/c/f=0/0/0 sis=38) [0] r=0 lpr=38 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:36:14 compute-1 ceph-mon[9795]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"} v 0)
Oct  9 09:36:14 compute-1 ceph-mon[9795]: log_channel(audit) log [INF] : from='client.? 192.168.122.101:0/2454302699' entity='client.rgw.rgw.compute-1.fxnvnn' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Oct  9 09:36:14 compute-1 ceph-mon[9795]: from='client.? 192.168.122.101:0/2454302699' entity='client.rgw.rgw.compute-1.fxnvnn' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Oct  9 09:36:14 compute-1 ceph-mon[9795]: from='client.? ' entity='client.rgw.rgw.compute-1.fxnvnn' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Oct  9 09:36:14 compute-1 ceph-mon[9795]: from='client.? 192.168.122.102:0/1928624186' entity='client.rgw.rgw.compute-2.mbbcec' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Oct  9 09:36:14 compute-1 ceph-mon[9795]: from='client.? 192.168.122.100:0/3877219415' entity='client.rgw.rgw.compute-0.yciajn' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Oct  9 09:36:14 compute-1 ceph-mon[9795]: from='client.? ' entity='client.rgw.rgw.compute-2.mbbcec' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Oct  9 09:36:14 compute-1 ceph-mon[9795]: from='client.? ' entity='client.rgw.rgw.compute-1.fxnvnn' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished
Oct  9 09:36:14 compute-1 ceph-mon[9795]: from='client.? 192.168.122.100:0/3877219415' entity='client.rgw.rgw.compute-0.yciajn' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished
Oct  9 09:36:14 compute-1 ceph-mon[9795]: from='client.? ' entity='client.rgw.rgw.compute-2.mbbcec' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished
Oct  9 09:36:15 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e40 e40: 3 total, 3 up, 3 in
Oct  9 09:36:15 compute-1 ceph-mon[9795]: from='client.? 192.168.122.100:0/3877219415' entity='client.rgw.rgw.compute-0.yciajn' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Oct  9 09:36:15 compute-1 ceph-mon[9795]: from='client.? 192.168.122.102:0/1928624186' entity='client.rgw.rgw.compute-2.mbbcec' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Oct  9 09:36:15 compute-1 ceph-mon[9795]: from='client.? 192.168.122.101:0/2454302699' entity='client.rgw.rgw.compute-1.fxnvnn' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Oct  9 09:36:15 compute-1 ceph-mon[9795]: from='client.? ' entity='client.rgw.rgw.compute-2.mbbcec' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Oct  9 09:36:15 compute-1 ceph-mon[9795]: from='client.? ' entity='client.rgw.rgw.compute-1.fxnvnn' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Oct  9 09:36:15 compute-1 ceph-mon[9795]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:36:15 compute-1 ceph-mon[9795]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:36:15 compute-1 ceph-mon[9795]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:36:15 compute-1 ceph-mon[9795]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:36:15 compute-1 ceph-mon[9795]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:36:15 compute-1 ceph-mon[9795]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:36:15 compute-1 ceph-mon[9795]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "dashboard set-grafana-api-ssl-verify", "value": "false"}]: dispatch
Oct  9 09:36:15 compute-1 ceph-mon[9795]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:36:15 compute-1 ceph-mon[9795]: from='client.? 192.168.122.100:0/3877219415' entity='client.rgw.rgw.compute-0.yciajn' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished
Oct  9 09:36:15 compute-1 ceph-mon[9795]: from='client.? ' entity='client.rgw.rgw.compute-2.mbbcec' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished
Oct  9 09:36:15 compute-1 ceph-mon[9795]: from='client.? ' entity='client.rgw.rgw.compute-1.fxnvnn' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished
Oct  9 09:36:15 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).mds e8 new map
Oct  9 09:36:15 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).mds e8 print_map#012e8#012btime 2025-10-09T09:36:15:540254+0000#012enable_multiple, ever_enabled_multiple: 1,1#012default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}#012legacy client fscid: 1#012 #012Filesystem 'cephfs' (1)#012fs_name#011cephfs#012epoch#0118#012flags#01112 joinable allow_snaps allow_multimds_snaps#012created#0112025-10-09T09:35:51.790428+0000#012modified#0112025-10-09T09:36:14.585925+0000#012tableserver#0110#012root#0110#012session_timeout#01160#012session_autoclose#011300#012max_file_size#0111099511627776#012max_xattr_size#01165536#012required_client_features#011{}#012last_failure#0110#012last_failure_osd_epoch#0110#012compat#011compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}#012max_mds#0111#012in#0110#012up#011{0=14535}#012failed#011#012damaged#011#012stopped#011#012data_pools#011[7]#012metadata_pool#0116#012inline_data#011disabled#012balancer#011#012bal_rank_mask#011-1#012standby_count_wanted#0111#012qdb_cluster#011leader: 14535 members: 14535#012[mds.cephfs.compute-2.zfggbi{0:14535} state up:active seq 3 join_fscid=1 addr [v2:192.168.122.102:6804/1047568798,v1:192.168.122.102:6805/1047568798] compat {c=[1],r=[1],i=[1fff]}]#012 #012 #012Standby daemons:#012 #012[mds.cephfs.compute-0.wjwyle{-1:14541} state up:standby seq 2 join_fscid=1 addr [v2:192.168.122.100:6806/2471701871,v1:192.168.122.100:6807/2471701871] compat {c=[1],r=[1],i=[1fff]}]#012[mds.cephfs.compute-1.svghvn{-1:24317} state up:standby seq 1 addr [v2:192.168.122.101:6804/3081136732,v1:192.168.122.101:6805/3081136732] compat {c=[1],r=[1],i=[1fff]}]
Oct  9 09:36:15 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e40 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  9 09:36:15 compute-1 radosgw[13231]: v1 topic migration: starting v1 topic migration..
Oct  9 09:36:15 compute-1 radosgw[13231]: LDAP not started since no server URIs were provided in the configuration.
Oct  9 09:36:15 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-rgw-rgw-compute-1-fxnvnn[13227]: 2025-10-09T09:36:15.600+0000 7ff35366c980 -1 LDAP not started since no server URIs were provided in the configuration.
Oct  9 09:36:15 compute-1 radosgw[13231]: INFO: RGWReshardLock::lock found lock on reshard.0000000000 to be held by another RGW process; skipping for now
Oct  9 09:36:15 compute-1 radosgw[13231]: v1 topic migration: finished v1 topic migration
Oct  9 09:36:15 compute-1 radosgw[13231]: INFO: RGWReshardLock::lock found lock on reshard.0000000001 to be held by another RGW process; skipping for now
Oct  9 09:36:15 compute-1 radosgw[13231]: framework: beast
Oct  9 09:36:15 compute-1 radosgw[13231]: framework conf key: ssl_certificate, val: config://rgw/cert/$realm/$zone.crt
Oct  9 09:36:15 compute-1 radosgw[13231]: framework conf key: ssl_private_key, val: config://rgw/cert/$realm/$zone.key
Oct  9 09:36:15 compute-1 radosgw[13231]: INFO: RGWReshardLock::lock found lock on reshard.0000000003 to be held by another RGW process; skipping for now
Oct  9 09:36:15 compute-1 radosgw[13231]: INFO: RGWReshardLock::lock found lock on reshard.0000000005 to be held by another RGW process; skipping for now
Oct  9 09:36:15 compute-1 radosgw[13231]: INFO: RGWReshardLock::lock found lock on reshard.0000000007 to be held by another RGW process; skipping for now
Oct  9 09:36:15 compute-1 radosgw[13231]: INFO: RGWReshardLock::lock found lock on reshard.0000000008 to be held by another RGW process; skipping for now
Oct  9 09:36:15 compute-1 radosgw[13231]: INFO: RGWReshardLock::lock found lock on reshard.0000000010 to be held by another RGW process; skipping for now
Oct  9 09:36:15 compute-1 radosgw[13231]: INFO: RGWReshardLock::lock found lock on reshard.0000000011 to be held by another RGW process; skipping for now
Oct  9 09:36:15 compute-1 radosgw[13231]: starting handler: beast
Oct  9 09:36:15 compute-1 radosgw[13231]: INFO: RGWReshardLock::lock found lock on reshard.0000000013 to be held by another RGW process; skipping for now
Oct  9 09:36:15 compute-1 radosgw[13231]: set uid:gid to 167:167 (ceph:ceph)
Oct  9 09:36:15 compute-1 radosgw[13231]: mgrc service_daemon_register rgw.24296 metadata {arch=x86_64,ceph_release=squid,ceph_version=ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable),ceph_version_short=19.2.3,container_hostname=compute-1,container_image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec,cpu=AMD EPYC 7763 64-Core Processor,distro=centos,distro_description=CentOS Stream 9,distro_version=9,frontend_config#0=beast endpoint=192.168.122.101:8082,frontend_type#0=beast,hostname=compute-1,id=rgw.compute-1.fxnvnn,kernel_description=#1 SMP PREEMPT_DYNAMIC Fri Sep 26 01:13:23 UTC 2025,kernel_version=5.14.0-620.el9.x86_64,mem_swap_kb=1048572,mem_total_kb=7865152,num_handles=1,os=Linux,pid=2,realm_id=,realm_name=,zone_id=773beadf-adcd-43ff-a482-a2d7a5b40bd8,zone_name=default,zonegroup_id=74fea7f9-d931-4447-a756-db2299521313,zonegroup_name=default}
Oct  9 09:36:15 compute-1 radosgw[13231]: INFO: RGWReshardLock::lock found lock on reshard.0000000015 to be held by another RGW process; skipping for now
Oct  9 09:36:16 compute-1 ceph-mon[9795]: Regenerating cephadm self-signed grafana TLS certificates
Oct  9 09:36:16 compute-1 ceph-mon[9795]: Deploying daemon grafana.compute-0 on compute-0
Oct  9 09:36:16 compute-1 ceph-mon[9795]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:36:16 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).mds e9 new map
Oct  9 09:36:16 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).mds e9 print_map#012e9#012btime 2025-10-09T09:36:16:832969+0000#012enable_multiple, ever_enabled_multiple: 1,1#012default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}#012legacy client fscid: 1#012 #012Filesystem 'cephfs' (1)#012fs_name#011cephfs#012epoch#0118#012flags#01112 joinable allow_snaps allow_multimds_snaps#012created#0112025-10-09T09:35:51.790428+0000#012modified#0112025-10-09T09:36:14.585925+0000#012tableserver#0110#012root#0110#012session_timeout#01160#012session_autoclose#011300#012max_file_size#0111099511627776#012max_xattr_size#01165536#012required_client_features#011{}#012last_failure#0110#012last_failure_osd_epoch#0110#012compat#011compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}#012max_mds#0111#012in#0110#012up#011{0=14535}#012failed#011#012damaged#011#012stopped#011#012data_pools#011[7]#012metadata_pool#0116#012inline_data#011disabled#012balancer#011#012bal_rank_mask#011-1#012standby_count_wanted#0111#012qdb_cluster#011leader: 14535 members: 14535#012[mds.cephfs.compute-2.zfggbi{0:14535} state up:active seq 3 join_fscid=1 addr [v2:192.168.122.102:6804/1047568798,v1:192.168.122.102:6805/1047568798] compat {c=[1],r=[1],i=[1fff]}]#012 #012 #012Standby daemons:#012 #012[mds.cephfs.compute-0.wjwyle{-1:14541} state up:standby seq 2 join_fscid=1 addr [v2:192.168.122.100:6806/2471701871,v1:192.168.122.100:6807/2471701871] compat {c=[1],r=[1],i=[1fff]}]#012[mds.cephfs.compute-1.svghvn{-1:24317} state up:standby seq 2 join_fscid=1 addr [v2:192.168.122.101:6804/3081136732,v1:192.168.122.101:6805/3081136732] compat {c=[1],r=[1],i=[1fff]}]
Oct  9 09:36:16 compute-1 ceph-mds[14063]: mds.cephfs.compute-1.svghvn Updating MDS map to version 9 from mon.2
Oct  9 09:36:20 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e40 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  9 09:36:22 compute-1 ceph-mon[9795]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:36:22 compute-1 ceph-mon[9795]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:36:22 compute-1 ceph-mon[9795]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:36:22 compute-1 ceph-mon[9795]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:36:22 compute-1 ceph-mon[9795]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:36:22 compute-1 ceph-mon[9795]: Deploying daemon haproxy.rgw.default.compute-0.kmcywb on compute-0
Oct  9 09:36:25 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e40 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  9 09:36:26 compute-1 ceph-mon[9795]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:36:27 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:36:27 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.002000020s ======
Oct  9 09:36:27 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:36:27.218 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000020s
Oct  9 09:36:27 compute-1 ceph-mon[9795]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:36:27 compute-1 ceph-mon[9795]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:36:27 compute-1 ceph-mon[9795]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:36:27 compute-1 ceph-mon[9795]: Deploying daemon haproxy.rgw.default.compute-2.gkeojf on compute-2
Oct  9 09:36:29 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:36:29 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:36:29 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:36:29.222 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:36:29 compute-1 ceph-mon[9795]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:36:29 compute-1 ceph-mon[9795]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:36:29 compute-1 ceph-mon[9795]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:36:29 compute-1 ceph-mon[9795]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:36:30 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:36:30 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:36:30 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:36:30.409 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:36:30 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e40 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  9 09:36:30 compute-1 ceph-mon[9795]: 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Oct  9 09:36:30 compute-1 ceph-mon[9795]: 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Oct  9 09:36:30 compute-1 ceph-mon[9795]: Deploying daemon keepalived.rgw.default.compute-2.tcjodw on compute-2
Oct  9 09:36:31 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:36:31 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:36:31 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:36:31.225 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:36:32 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:36:32 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:36:32 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:36:32.412 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:36:33 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:36:33 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 09:36:33 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:36:33.227 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 09:36:34 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:36:34 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:36:34 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:36:34.414 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:36:35 compute-1 ceph-mon[9795]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:36:35 compute-1 ceph-mon[9795]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:36:35 compute-1 ceph-mon[9795]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:36:35 compute-1 ceph-mon[9795]: 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Oct  9 09:36:35 compute-1 ceph-mon[9795]: 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Oct  9 09:36:35 compute-1 ceph-mon[9795]: Deploying daemon keepalived.rgw.default.compute-0.uozjha on compute-0
Oct  9 09:36:35 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:36:35 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:36:35 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:36:35.230 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:36:35 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e40 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  9 09:36:36 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:36:36 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 09:36:36 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:36:36.416 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 09:36:37 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:36:37 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:36:37 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:36:37.232 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:36:38 compute-1 ceph-mon[9795]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:36:38 compute-1 ceph-mon[9795]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:36:38 compute-1 ceph-mon[9795]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:36:38 compute-1 ceph-mon[9795]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:36:38 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:36:38 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:36:38 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:36:38.419 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:36:39 compute-1 ceph-mon[9795]: Deploying daemon prometheus.compute-0 on compute-0
Oct  9 09:36:39 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:36:39 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:36:39 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:36:39.235 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:36:40 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:36:40 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 09:36:40 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:36:40.421 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 09:36:40 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e40 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  9 09:36:41 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:36:41 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:36:41 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:36:41.236 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:36:41 compute-1 ceph-mon[9795]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:36:42 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:36:42 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:36:42 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:36:42.424 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:36:43 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:36:43 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:36:43 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:36:43.239 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:36:43 compute-1 ceph-mon[9795]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:36:43 compute-1 ceph-mon[9795]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:36:43 compute-1 ceph-mon[9795]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:36:43 compute-1 ceph-mon[9795]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "mgr module enable", "module": "prometheus"}]: dispatch
Oct  9 09:36:43 compute-1 ceph-mgr[10116]: mgr handle_mgr_map respawning because set of enabled modules changed!
Oct  9 09:36:44 compute-1 systemd[1]: session-18.scope: Deactivated successfully.
Oct  9 09:36:44 compute-1 systemd[1]: session-18.scope: Consumed 4.463s CPU time.
Oct  9 09:36:44 compute-1 systemd-logind[798]: Session 18 logged out. Waiting for processes to exit.
Oct  9 09:36:44 compute-1 systemd-logind[798]: Removed session 18.
Oct  9 09:36:44 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-1-etokpp[10112]: ignoring --setuser ceph since I am not root
Oct  9 09:36:44 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-1-etokpp[10112]: ignoring --setgroup ceph since I am not root
Oct  9 09:36:44 compute-1 ceph-mgr[10116]: ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable), process ceph-mgr, pid 2
Oct  9 09:36:44 compute-1 ceph-mgr[10116]: pidfile_write: ignore empty --pid-file
Oct  9 09:36:44 compute-1 ceph-mgr[10116]: mgr[py] Loading python module 'alerts'
Oct  9 09:36:44 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-1-etokpp[10112]: 2025-10-09T09:36:44.167+0000 7f9352cb8140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member
Oct  9 09:36:44 compute-1 ceph-mgr[10116]: mgr[py] Module alerts has missing NOTIFY_TYPES member
Oct  9 09:36:44 compute-1 ceph-mgr[10116]: mgr[py] Loading python module 'balancer'
Oct  9 09:36:44 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-1-etokpp[10112]: 2025-10-09T09:36:44.238+0000 7f9352cb8140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member
Oct  9 09:36:44 compute-1 ceph-mgr[10116]: mgr[py] Module balancer has missing NOTIFY_TYPES member
Oct  9 09:36:44 compute-1 ceph-mgr[10116]: mgr[py] Loading python module 'cephadm'
Oct  9 09:36:44 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:36:44 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 09:36:44 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:36:44.426 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 09:36:44 compute-1 ceph-mgr[10116]: mgr[py] Loading python module 'crash'
Oct  9 09:36:44 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-1-etokpp[10112]: 2025-10-09T09:36:44.908+0000 7f9352cb8140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member
Oct  9 09:36:44 compute-1 ceph-mgr[10116]: mgr[py] Module crash has missing NOTIFY_TYPES member
Oct  9 09:36:44 compute-1 ceph-mgr[10116]: mgr[py] Loading python module 'dashboard'
Oct  9 09:36:44 compute-1 ceph-mon[9795]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "mgr module enable", "module": "prometheus"}]': finished
Oct  9 09:36:45 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:36:45 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 09:36:45 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:36:45.242 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 09:36:45 compute-1 ceph-mgr[10116]: mgr[py] Loading python module 'devicehealth'
Oct  9 09:36:45 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-1-etokpp[10112]: 2025-10-09T09:36:45.455+0000 7f9352cb8140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Oct  9 09:36:45 compute-1 ceph-mgr[10116]: mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Oct  9 09:36:45 compute-1 ceph-mgr[10116]: mgr[py] Loading python module 'diskprediction_local'
Oct  9 09:36:45 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-1-etokpp[10112]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
Oct  9 09:36:45 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-1-etokpp[10112]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
Oct  9 09:36:45 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-1-etokpp[10112]:  from numpy import show_config as show_numpy_config
Oct  9 09:36:45 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e40 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  9 09:36:45 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-1-etokpp[10112]: 2025-10-09T09:36:45.602+0000 7f9352cb8140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Oct  9 09:36:45 compute-1 ceph-mgr[10116]: mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Oct  9 09:36:45 compute-1 ceph-mgr[10116]: mgr[py] Loading python module 'influx'
Oct  9 09:36:45 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-1-etokpp[10112]: 2025-10-09T09:36:45.666+0000 7f9352cb8140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member
Oct  9 09:36:45 compute-1 ceph-mgr[10116]: mgr[py] Module influx has missing NOTIFY_TYPES member
Oct  9 09:36:45 compute-1 ceph-mgr[10116]: mgr[py] Loading python module 'insights'
Oct  9 09:36:45 compute-1 ceph-mgr[10116]: mgr[py] Loading python module 'iostat'
Oct  9 09:36:45 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-1-etokpp[10112]: 2025-10-09T09:36:45.794+0000 7f9352cb8140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member
Oct  9 09:36:45 compute-1 ceph-mgr[10116]: mgr[py] Module iostat has missing NOTIFY_TYPES member
Oct  9 09:36:45 compute-1 ceph-mgr[10116]: mgr[py] Loading python module 'k8sevents'
Oct  9 09:36:46 compute-1 ceph-mgr[10116]: mgr[py] Loading python module 'localpool'
Oct  9 09:36:46 compute-1 ceph-mgr[10116]: mgr[py] Loading python module 'mds_autoscaler'
Oct  9 09:36:46 compute-1 ceph-mgr[10116]: mgr[py] Loading python module 'mirroring'
Oct  9 09:36:46 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:36:46 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 09:36:46 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:36:46.430 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 09:36:46 compute-1 ceph-mgr[10116]: mgr[py] Loading python module 'nfs'
Oct  9 09:36:46 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-1-etokpp[10112]: 2025-10-09T09:36:46.650+0000 7f9352cb8140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member
Oct  9 09:36:46 compute-1 ceph-mgr[10116]: mgr[py] Module nfs has missing NOTIFY_TYPES member
Oct  9 09:36:46 compute-1 ceph-mgr[10116]: mgr[py] Loading python module 'orchestrator'
Oct  9 09:36:46 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-1-etokpp[10112]: 2025-10-09T09:36:46.838+0000 7f9352cb8140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Oct  9 09:36:46 compute-1 ceph-mgr[10116]: mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Oct  9 09:36:46 compute-1 ceph-mgr[10116]: mgr[py] Loading python module 'osd_perf_query'
Oct  9 09:36:46 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-1-etokpp[10112]: 2025-10-09T09:36:46.904+0000 7f9352cb8140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Oct  9 09:36:46 compute-1 ceph-mgr[10116]: mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Oct  9 09:36:46 compute-1 ceph-mgr[10116]: mgr[py] Loading python module 'osd_support'
Oct  9 09:36:46 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-1-etokpp[10112]: 2025-10-09T09:36:46.962+0000 7f9352cb8140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member
Oct  9 09:36:46 compute-1 ceph-mgr[10116]: mgr[py] Module osd_support has missing NOTIFY_TYPES member
Oct  9 09:36:46 compute-1 ceph-mgr[10116]: mgr[py] Loading python module 'pg_autoscaler'
Oct  9 09:36:47 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-1-etokpp[10112]: 2025-10-09T09:36:47.031+0000 7f9352cb8140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Oct  9 09:36:47 compute-1 ceph-mgr[10116]: mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Oct  9 09:36:47 compute-1 ceph-mgr[10116]: mgr[py] Loading python module 'progress'
Oct  9 09:36:47 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-1-etokpp[10112]: 2025-10-09T09:36:47.093+0000 7f9352cb8140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member
Oct  9 09:36:47 compute-1 ceph-mgr[10116]: mgr[py] Module progress has missing NOTIFY_TYPES member
Oct  9 09:36:47 compute-1 ceph-mgr[10116]: mgr[py] Loading python module 'prometheus'
Oct  9 09:36:47 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:36:47 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:36:47 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:36:47.244 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:36:47 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-1-etokpp[10112]: 2025-10-09T09:36:47.391+0000 7f9352cb8140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member
Oct  9 09:36:47 compute-1 ceph-mgr[10116]: mgr[py] Module prometheus has missing NOTIFY_TYPES member
Oct  9 09:36:47 compute-1 ceph-mgr[10116]: mgr[py] Loading python module 'rbd_support'
Oct  9 09:36:47 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-1-etokpp[10112]: 2025-10-09T09:36:47.476+0000 7f9352cb8140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Oct  9 09:36:47 compute-1 ceph-mgr[10116]: mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Oct  9 09:36:47 compute-1 ceph-mgr[10116]: mgr[py] Loading python module 'restful'
Oct  9 09:36:47 compute-1 ceph-mgr[10116]: mgr[py] Loading python module 'rgw'
Oct  9 09:36:47 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-1-etokpp[10112]: 2025-10-09T09:36:47.852+0000 7f9352cb8140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member
Oct  9 09:36:47 compute-1 ceph-mgr[10116]: mgr[py] Module rgw has missing NOTIFY_TYPES member
Oct  9 09:36:47 compute-1 ceph-mgr[10116]: mgr[py] Loading python module 'rook'
Oct  9 09:36:48 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-1-etokpp[10112]: 2025-10-09T09:36:48.337+0000 7f9352cb8140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member
Oct  9 09:36:48 compute-1 ceph-mgr[10116]: mgr[py] Module rook has missing NOTIFY_TYPES member
Oct  9 09:36:48 compute-1 ceph-mgr[10116]: mgr[py] Loading python module 'selftest'
Oct  9 09:36:48 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-1-etokpp[10112]: 2025-10-09T09:36:48.400+0000 7f9352cb8140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member
Oct  9 09:36:48 compute-1 ceph-mgr[10116]: mgr[py] Module selftest has missing NOTIFY_TYPES member
Oct  9 09:36:48 compute-1 ceph-mgr[10116]: mgr[py] Loading python module 'snap_schedule'
Oct  9 09:36:48 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:36:48 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:36:48 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:36:48.433 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:36:48 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-1-etokpp[10112]: 2025-10-09T09:36:48.470+0000 7f9352cb8140 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Oct  9 09:36:48 compute-1 ceph-mgr[10116]: mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Oct  9 09:36:48 compute-1 ceph-mgr[10116]: mgr[py] Loading python module 'stats'
Oct  9 09:36:48 compute-1 ceph-mgr[10116]: mgr[py] Loading python module 'status'
Oct  9 09:36:48 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-1-etokpp[10112]: 2025-10-09T09:36:48.600+0000 7f9352cb8140 -1 mgr[py] Module status has missing NOTIFY_TYPES member
Oct  9 09:36:48 compute-1 ceph-mgr[10116]: mgr[py] Module status has missing NOTIFY_TYPES member
Oct  9 09:36:48 compute-1 ceph-mgr[10116]: mgr[py] Loading python module 'telegraf'
Oct  9 09:36:48 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-1-etokpp[10112]: 2025-10-09T09:36:48.662+0000 7f9352cb8140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member
Oct  9 09:36:48 compute-1 ceph-mgr[10116]: mgr[py] Module telegraf has missing NOTIFY_TYPES member
Oct  9 09:36:48 compute-1 ceph-mgr[10116]: mgr[py] Loading python module 'telemetry'
Oct  9 09:36:48 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-1-etokpp[10112]: 2025-10-09T09:36:48.795+0000 7f9352cb8140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member
Oct  9 09:36:48 compute-1 ceph-mgr[10116]: mgr[py] Module telemetry has missing NOTIFY_TYPES member
Oct  9 09:36:48 compute-1 ceph-mgr[10116]: mgr[py] Loading python module 'test_orchestrator'
Oct  9 09:36:48 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-1-etokpp[10112]: 2025-10-09T09:36:48.985+0000 7f9352cb8140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Oct  9 09:36:48 compute-1 ceph-mgr[10116]: mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Oct  9 09:36:48 compute-1 ceph-mgr[10116]: mgr[py] Loading python module 'volumes'
Oct  9 09:36:49 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-1-etokpp[10112]: 2025-10-09T09:36:49.214+0000 7f9352cb8140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member
Oct  9 09:36:49 compute-1 ceph-mgr[10116]: mgr[py] Module volumes has missing NOTIFY_TYPES member
Oct  9 09:36:49 compute-1 ceph-mgr[10116]: mgr[py] Loading python module 'zabbix'
Oct  9 09:36:49 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:36:49 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:36:49 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:36:49.247 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:36:49 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-1-etokpp[10112]: 2025-10-09T09:36:49.276+0000 7f9352cb8140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member
Oct  9 09:36:49 compute-1 ceph-mgr[10116]: mgr[py] Module zabbix has missing NOTIFY_TYPES member
Oct  9 09:36:49 compute-1 ceph-mgr[10116]: [dashboard DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct  9 09:36:49 compute-1 ceph-mgr[10116]: mgr load Constructed class from module: dashboard
Oct  9 09:36:49 compute-1 ceph-mgr[10116]: [prometheus DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct  9 09:36:49 compute-1 ceph-mgr[10116]: mgr load Constructed class from module: prometheus
Oct  9 09:36:49 compute-1 ceph-mgr[10116]: [dashboard INFO root] server: ssl=no host=:: port=8443
Oct  9 09:36:49 compute-1 ceph-mgr[10116]: [dashboard INFO root] Configured CherryPy, starting engine...
Oct  9 09:36:49 compute-1 ceph-mgr[10116]: [dashboard INFO root] Starting engine...
Oct  9 09:36:49 compute-1 ceph-mgr[10116]: ms_deliver_dispatch: unhandled message 0x56303d943860 mon_map magic: 0 from mon.2 v2:192.168.122.101:3300/0
Oct  9 09:36:49 compute-1 ceph-mgr[10116]: [prometheus INFO root] server_addr: :: server_port: 9283
Oct  9 09:36:49 compute-1 ceph-mgr[10116]: [prometheus INFO root] Starting engine...
Oct  9 09:36:49 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-1-etokpp[10112]: [09/Oct/2025:09:36:49] ENGINE Bus STARTING
Oct  9 09:36:49 compute-1 ceph-mgr[10116]: [prometheus INFO cherrypy.error] [09/Oct/2025:09:36:49] ENGINE Bus STARTING
Oct  9 09:36:49 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-1-etokpp[10112]: CherryPy Checker:
Oct  9 09:36:49 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-1-etokpp[10112]: The Application mounted at '' has an empty config.
Oct  9 09:36:49 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-1-etokpp[10112]: 
Oct  9 09:36:49 compute-1 ceph-mgr[10116]: [dashboard INFO root] Engine started...
Oct  9 09:36:49 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-1-etokpp[10112]: [09/Oct/2025:09:36:49] ENGINE Serving on http://:::9283
Oct  9 09:36:49 compute-1 ceph-mgr[10116]: [prometheus INFO cherrypy.error] [09/Oct/2025:09:36:49] ENGINE Serving on http://:::9283
Oct  9 09:36:49 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-1-etokpp[10112]: [09/Oct/2025:09:36:49] ENGINE Bus STARTED
Oct  9 09:36:49 compute-1 ceph-mgr[10116]: [prometheus INFO cherrypy.error] [09/Oct/2025:09:36:49] ENGINE Bus STARTED
Oct  9 09:36:49 compute-1 ceph-mgr[10116]: [prometheus INFO root] Engine started.
Oct  9 09:36:49 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e41 e41: 3 total, 3 up, 3 in
Oct  9 09:36:49 compute-1 systemd-logind[798]: New session 20 of user ceph-admin.
Oct  9 09:36:49 compute-1 systemd[1]: Started Session 20 of User ceph-admin.
Oct  9 09:36:50 compute-1 ceph-mon[9795]: Active manager daemon compute-0.lwqgfy restarted
Oct  9 09:36:50 compute-1 ceph-mon[9795]: Activating manager daemon compute-0.lwqgfy
Oct  9 09:36:50 compute-1 ceph-mon[9795]: Manager daemon compute-0.lwqgfy is now available
Oct  9 09:36:50 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:36:50 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.lwqgfy/mirror_snapshot_schedule"}]: dispatch
Oct  9 09:36:50 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.lwqgfy/trash_purge_schedule"}]: dispatch
Oct  9 09:36:50 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:36:50 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:36:50 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:36:50.435 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:36:50 compute-1 podman[14283]: 2025-10-09 09:36:50.450022541 +0000 UTC m=+0.036451010 container exec cafaadfcff4fbda41fcfa94e93876b61b17048007232ab1b066948d6a6dac74a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-crash-compute-1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1)
Oct  9 09:36:50 compute-1 podman[14283]: 2025-10-09 09:36:50.532474061 +0000 UTC m=+0.118902530 container exec_died cafaadfcff4fbda41fcfa94e93876b61b17048007232ab1b066948d6a6dac74a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-crash-compute-1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  9 09:36:50 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e41 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  9 09:36:50 compute-1 podman[14379]: 2025-10-09 09:36:50.811015024 +0000 UTC m=+0.033777700 container exec 3deba343924d32000597864cdf8400e1a37662cac2bb278add92b15d21a42d33 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-node-exporter-compute-1, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct  9 09:36:50 compute-1 podman[14379]: 2025-10-09 09:36:50.817881379 +0000 UTC m=+0.040644033 container exec_died 3deba343924d32000597864cdf8400e1a37662cac2bb278add92b15d21a42d33 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-node-exporter-compute-1, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct  9 09:36:51 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:36:51 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:36:51 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:36:51.249 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:36:51 compute-1 ceph-mon[9795]: [09/Oct/2025:09:36:50] ENGINE Bus STARTING
Oct  9 09:36:51 compute-1 ceph-mon[9795]: [09/Oct/2025:09:36:50] ENGINE Serving on http://192.168.122.100:8765
Oct  9 09:36:51 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:36:51 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:36:51 compute-1 ceph-mon[9795]: [09/Oct/2025:09:36:51] ENGINE Serving on https://192.168.122.100:7150
Oct  9 09:36:51 compute-1 ceph-mon[9795]: [09/Oct/2025:09:36:51] ENGINE Bus STARTED
Oct  9 09:36:51 compute-1 ceph-mon[9795]: [09/Oct/2025:09:36:51] ENGINE Client ('192.168.122.100', 39912) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Oct  9 09:36:51 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:36:51 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:36:52 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:36:52 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 09:36:52 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:36:52.437 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 09:36:52 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:36:52 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:36:52 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Oct  9 09:36:52 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]': finished
Oct  9 09:36:52 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:36:52 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:36:52 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:36:52 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:36:53 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:36:53 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:36:53 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:36:53.251 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:36:53 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:36:53 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:36:53 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Oct  9 09:36:53 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:36:53 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:36:53 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Oct  9 09:36:53 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  9 09:36:54 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:36:54 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:36:54 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:36:54.440 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:36:54 compute-1 ceph-mon[9795]: Updating compute-0:/etc/ceph/ceph.conf
Oct  9 09:36:54 compute-1 ceph-mon[9795]: Updating compute-1:/etc/ceph/ceph.conf
Oct  9 09:36:54 compute-1 ceph-mon[9795]: Updating compute-2:/etc/ceph/ceph.conf
Oct  9 09:36:54 compute-1 ceph-mon[9795]: Updating compute-2:/var/lib/ceph/286f8bf0-da72-5823-9a4e-ac4457d9e609/config/ceph.conf
Oct  9 09:36:54 compute-1 ceph-mon[9795]: Updating compute-0:/var/lib/ceph/286f8bf0-da72-5823-9a4e-ac4457d9e609/config/ceph.conf
Oct  9 09:36:54 compute-1 ceph-mon[9795]: Updating compute-1:/var/lib/ceph/286f8bf0-da72-5823-9a4e-ac4457d9e609/config/ceph.conf
Oct  9 09:36:54 compute-1 ceph-mon[9795]: Updating compute-2:/etc/ceph/ceph.client.admin.keyring
Oct  9 09:36:54 compute-1 ceph-mon[9795]: Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Oct  9 09:36:54 compute-1 ceph-mon[9795]: Updating compute-1:/etc/ceph/ceph.client.admin.keyring
Oct  9 09:36:54 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:36:54 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:36:54 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:36:54 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:36:54 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:36:54 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:36:54 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:36:54 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:36:54 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:36:54 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:36:54 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.0.0.compute-1.douegr", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=cephfs"]}]: dispatch
Oct  9 09:36:54 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.0.0.compute-1.douegr", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=cephfs"]}]': finished
Oct  9 09:36:54 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.cephfs", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]}]: dispatch
Oct  9 09:36:54 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.cephfs", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]}]': finished
Oct  9 09:36:55 compute-1 podman[15561]: 2025-10-09 09:36:55.217022868 +0000 UTC m=+0.026768445 container create f229624d541024804a0b0d39ceadd433efb68ea3ca8421bc6f0b77d887a451d3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ecstatic_heisenberg, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  9 09:36:55 compute-1 systemd[1]: Started libpod-conmon-f229624d541024804a0b0d39ceadd433efb68ea3ca8421bc6f0b77d887a451d3.scope.
Oct  9 09:36:55 compute-1 systemd[1]: Started libcrun container.
Oct  9 09:36:55 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:36:55 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000011s ======
Oct  9 09:36:55 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:36:55.253 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct  9 09:36:55 compute-1 podman[15561]: 2025-10-09 09:36:55.261537669 +0000 UTC m=+0.071283256 container init f229624d541024804a0b0d39ceadd433efb68ea3ca8421bc6f0b77d887a451d3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ecstatic_heisenberg, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  9 09:36:55 compute-1 podman[15561]: 2025-10-09 09:36:55.266172236 +0000 UTC m=+0.075917814 container start f229624d541024804a0b0d39ceadd433efb68ea3ca8421bc6f0b77d887a451d3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ecstatic_heisenberg, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  9 09:36:55 compute-1 podman[15561]: 2025-10-09 09:36:55.267189934 +0000 UTC m=+0.076935511 container attach f229624d541024804a0b0d39ceadd433efb68ea3ca8421bc6f0b77d887a451d3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ecstatic_heisenberg, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_REF=squid)
Oct  9 09:36:55 compute-1 ecstatic_heisenberg[15574]: 167 167
Oct  9 09:36:55 compute-1 systemd[1]: libpod-f229624d541024804a0b0d39ceadd433efb68ea3ca8421bc6f0b77d887a451d3.scope: Deactivated successfully.
Oct  9 09:36:55 compute-1 conmon[15574]: conmon f229624d541024804a0b <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-f229624d541024804a0b0d39ceadd433efb68ea3ca8421bc6f0b77d887a451d3.scope/container/memory.events
Oct  9 09:36:55 compute-1 podman[15561]: 2025-10-09 09:36:55.269736014 +0000 UTC m=+0.079481592 container died f229624d541024804a0b0d39ceadd433efb68ea3ca8421bc6f0b77d887a451d3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ecstatic_heisenberg, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Oct  9 09:36:55 compute-1 systemd[1]: var-lib-containers-storage-overlay-ae7113e6fa95e0ab257ec017615da95b684af17ae82a5181128d7cd4cc5f503c-merged.mount: Deactivated successfully.
Oct  9 09:36:55 compute-1 podman[15561]: 2025-10-09 09:36:55.293616394 +0000 UTC m=+0.103361971 container remove f229624d541024804a0b0d39ceadd433efb68ea3ca8421bc6f0b77d887a451d3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ecstatic_heisenberg, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Oct  9 09:36:55 compute-1 podman[15561]: 2025-10-09 09:36:55.206069739 +0000 UTC m=+0.015815316 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  9 09:36:55 compute-1 systemd[1]: libpod-conmon-f229624d541024804a0b0d39ceadd433efb68ea3ca8421bc6f0b77d887a451d3.scope: Deactivated successfully.
Oct  9 09:36:55 compute-1 systemd[1]: Reloading.
Oct  9 09:36:55 compute-1 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  9 09:36:55 compute-1 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  9 09:36:55 compute-1 systemd[1]: Reloading.
Oct  9 09:36:55 compute-1 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  9 09:36:55 compute-1 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  9 09:36:55 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e41 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  9 09:36:55 compute-1 systemd[1]: Starting Ceph nfs.cephfs.0.0.compute-1.douegr for 286f8bf0-da72-5823-9a4e-ac4457d9e609...
Oct  9 09:36:55 compute-1 ceph-mon[9795]: Updating compute-2:/var/lib/ceph/286f8bf0-da72-5823-9a4e-ac4457d9e609/config/ceph.client.admin.keyring
Oct  9 09:36:55 compute-1 ceph-mon[9795]: Updating compute-0:/var/lib/ceph/286f8bf0-da72-5823-9a4e-ac4457d9e609/config/ceph.client.admin.keyring
Oct  9 09:36:55 compute-1 ceph-mon[9795]: Updating compute-1:/var/lib/ceph/286f8bf0-da72-5823-9a4e-ac4457d9e609/config/ceph.client.admin.keyring
Oct  9 09:36:55 compute-1 ceph-mon[9795]: Failed to apply ingress.nfs.cephfs spec IngressSpec.from_json(yaml.safe_load('''service_type: ingress#012service_id: nfs.cephfs#012service_name: ingress.nfs.cephfs#012placement:#012  hosts:#012  - compute-0#012  - compute-1#012  - compute-2#012spec:#012  backend_service: nfs.cephfs#012  enable_haproxy_protocol: true#012  first_virtual_router_id: 50#012  frontend_port: 2049#012  monitor_port: 9049#012  virtual_ip: 192.168.122.2/24#012''')): max() arg is an empty sequence#012Traceback (most recent call last):#012  File "/usr/share/ceph/mgr/cephadm/serve.py", line 602, in _apply_all_services#012    if self._apply_service(spec):#012  File "/usr/share/ceph/mgr/cephadm/serve.py", line 947, in _apply_service#012    daemon_spec = svc.prepare_create(daemon_spec)#012  File "/usr/share/ceph/mgr/cephadm/services/ingress.py", line 46, in prepare_create#012    return self.haproxy_prepare_create(daemon_spec)#012  File "/usr/share/ceph/mgr/cephadm/services/ingress.py", line 74, in haproxy_prepare_create#012    daemon_spec.final_config, daemon_spec.deps = self.haproxy_generate_config(daemon_spec)#012  File "/usr/share/ceph/mgr/cephadm/services/ingress.py", line 139, in haproxy_generate_config#012    num_ranks = 1 + max(by_rank.keys())#012ValueError: max() arg is an empty sequence
Oct  9 09:36:55 compute-1 ceph-mon[9795]: Creating key for client.nfs.cephfs.0.0.compute-1.douegr
Oct  9 09:36:55 compute-1 ceph-mon[9795]: Ensuring nfs.cephfs.0 is in the ganesha grace table
Oct  9 09:36:55 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.cephfs"}]: dispatch
Oct  9 09:36:55 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.cephfs"}]': finished
Oct  9 09:36:55 compute-1 ceph-mon[9795]: Rados config object exists: conf-nfs.cephfs
Oct  9 09:36:55 compute-1 ceph-mon[9795]: Creating key for client.nfs.cephfs.0.0.compute-1.douegr-rgw
Oct  9 09:36:55 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.0.0.compute-1.douegr-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Oct  9 09:36:55 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.0.0.compute-1.douegr-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]}]': finished
Oct  9 09:36:55 compute-1 ceph-mon[9795]: Bind address in nfs.cephfs.0.0.compute-1.douegr's ganesha conf is defaulting to empty
Oct  9 09:36:55 compute-1 ceph-mon[9795]: Deploying daemon nfs.cephfs.0.0.compute-1.douegr on compute-1
Oct  9 09:36:55 compute-1 ceph-mon[9795]: Health check failed: Failed to apply 1 service(s): ingress.nfs.cephfs (CEPHADM_APPLY_SPEC_FAIL)
Oct  9 09:36:55 compute-1 podman[15703]: 2025-10-09 09:36:55.880421867 +0000 UTC m=+0.026015516 container create 30db809f2ac03989f0e244b2e414c9d250c8954c2dcdcba3127a2c041812d793 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Oct  9 09:36:55 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5da54272e3cb2b611f21fd5875326bda1bac5460ce62e8b705d930ee72235bd8/merged/etc/ganesha supports timestamps until 2038 (0x7fffffff)
Oct  9 09:36:55 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5da54272e3cb2b611f21fd5875326bda1bac5460ce62e8b705d930ee72235bd8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  9 09:36:55 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5da54272e3cb2b611f21fd5875326bda1bac5460ce62e8b705d930ee72235bd8/merged/etc/ceph/keyring supports timestamps until 2038 (0x7fffffff)
Oct  9 09:36:55 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5da54272e3cb2b611f21fd5875326bda1bac5460ce62e8b705d930ee72235bd8/merged/var/lib/ceph/radosgw/ceph-nfs.cephfs.0.0.compute-1.douegr-rgw/keyring supports timestamps until 2038 (0x7fffffff)
Oct  9 09:36:55 compute-1 podman[15703]: 2025-10-09 09:36:55.924869311 +0000 UTC m=+0.070462980 container init 30db809f2ac03989f0e244b2e414c9d250c8954c2dcdcba3127a2c041812d793 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Oct  9 09:36:55 compute-1 podman[15703]: 2025-10-09 09:36:55.928570008 +0000 UTC m=+0.074163657 container start 30db809f2ac03989f0e244b2e414c9d250c8954c2dcdcba3127a2c041812d793 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct  9 09:36:55 compute-1 bash[15703]: 30db809f2ac03989f0e244b2e414c9d250c8954c2dcdcba3127a2c041812d793
Oct  9 09:36:55 compute-1 podman[15703]: 2025-10-09 09:36:55.869995491 +0000 UTC m=+0.015589160 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  9 09:36:55 compute-1 systemd[1]: Started Ceph nfs.cephfs.0.0.compute-1.douegr for 286f8bf0-da72-5823-9a4e-ac4457d9e609.
Oct  9 09:36:55 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[15715]: 09/10/2025 09:36:55 : epoch 68e78237 : compute-1 : ganesha.nfsd-2[main] init_logging :LOG :NULL :LOG: Setting log level for all components to NIV_EVENT
Oct  9 09:36:55 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[15715]: 09/10/2025 09:36:55 : epoch 68e78237 : compute-1 : ganesha.nfsd-2[main] main :MAIN :EVENT :ganesha.nfsd Starting: Ganesha Version 5.9
Oct  9 09:36:55 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[15715]: 09/10/2025 09:36:55 : epoch 68e78237 : compute-1 : ganesha.nfsd-2[main] nfs_set_param_from_conf :NFS STARTUP :EVENT :Configuration file successfully parsed
Oct  9 09:36:55 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[15715]: 09/10/2025 09:36:55 : epoch 68e78237 : compute-1 : ganesha.nfsd-2[main] monitoring_init :NFS STARTUP :EVENT :Init monitoring at 0.0.0.0:9587
Oct  9 09:36:55 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[15715]: 09/10/2025 09:36:55 : epoch 68e78237 : compute-1 : ganesha.nfsd-2[main] fsal_init_fds_limit :MDCACHE LRU :EVENT :Setting the system-imposed limit on FDs to 1048576.
Oct  9 09:36:55 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[15715]: 09/10/2025 09:36:55 : epoch 68e78237 : compute-1 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :Initializing ID Mapper.
Oct  9 09:36:56 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[15715]: 09/10/2025 09:36:56 : epoch 68e78237 : compute-1 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :ID Mapper successfully initialized.
Oct  9 09:36:56 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[15715]: 09/10/2025 09:36:56 : epoch 68e78237 : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 09:36:56 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[15715]: 09/10/2025 09:36:56 : epoch 68e78237 : compute-1 : ganesha.nfsd-2[main] rados_kv_traverse :CLIENT ID :EVENT :Failed to lst kv ret=-2
Oct  9 09:36:56 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[15715]: 09/10/2025 09:36:56 : epoch 68e78237 : compute-1 : ganesha.nfsd-2[main] rados_cluster_read_clids :CLIENT ID :EVENT :Failed to traverse recovery db: -2
Oct  9 09:36:56 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[15715]: 09/10/2025 09:36:56 : epoch 68e78237 : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 09:36:56 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[15715]: 09/10/2025 09:36:56 : epoch 68e78237 : compute-1 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 09:36:56 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:36:56 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:36:56 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:36:56.442 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:36:56 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:36:56 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:36:56 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:36:56 compute-1 ceph-mon[9795]: Creating key for client.nfs.cephfs.1.0.compute-2.cpioam
Oct  9 09:36:56 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.1.0.compute-2.cpioam", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=cephfs"]}]: dispatch
Oct  9 09:36:56 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.1.0.compute-2.cpioam", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=cephfs"]}]': finished
Oct  9 09:36:56 compute-1 ceph-mon[9795]: Ensuring nfs.cephfs.1 is in the ganesha grace table
Oct  9 09:36:56 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.cephfs", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]}]: dispatch
Oct  9 09:36:56 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.cephfs", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]}]': finished
Oct  9 09:36:56 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e42 e42: 3 total, 3 up, 3 in
Oct  9 09:36:57 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:36:57 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 09:36:57 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:36:57.255 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 09:36:58 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e43 e43: 3 total, 3 up, 3 in
Oct  9 09:36:58 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:36:58 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:36:58 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:36:58.444 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:36:59 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[15715]: 09/10/2025 09:36:59 : epoch 68e78237 : compute-1 : ganesha.nfsd-2[main] rados_cluster_end_grace :CLIENT ID :EVENT :Failed to remove rec-0000000000000001:nfs.cephfs.0: -2
Oct  9 09:36:59 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[15715]: 09/10/2025 09:36:59 : epoch 68e78237 : compute-1 : ganesha.nfsd-2[main] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Oct  9 09:36:59 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[15715]: 09/10/2025 09:36:59 : epoch 68e78237 : compute-1 : ganesha.nfsd-2[main] main :NFS STARTUP :WARN :No export entries found in configuration file !!!
Oct  9 09:36:59 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[15715]: 09/10/2025 09:36:59 : epoch 68e78237 : compute-1 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:24): Unknown block (RADOS_URLS)
Oct  9 09:36:59 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[15715]: 09/10/2025 09:36:59 : epoch 68e78237 : compute-1 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:29): Unknown block (RGW)
Oct  9 09:36:59 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[15715]: 09/10/2025 09:36:59 : epoch 68e78237 : compute-1 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :CAP_SYS_RESOURCE was successfully removed for proper quota management in FSAL
Oct  9 09:36:59 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[15715]: 09/10/2025 09:36:59 : epoch 68e78237 : compute-1 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :currently set capabilities are: cap_chown,cap_dac_override,cap_fowner,cap_fsetid,cap_kill,cap_setgid,cap_setuid,cap_setpcap,cap_net_bind_service,cap_sys_chroot,cap_setfcap=ep
Oct  9 09:36:59 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[15715]: 09/10/2025 09:36:59 : epoch 68e78237 : compute-1 : ganesha.nfsd-2[main] gsh_dbus_pkginit :DBUS :CRIT :dbus_bus_get failed (Failed to connect to socket /run/dbus/system_bus_socket: No such file or directory)
Oct  9 09:36:59 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[15715]: 09/10/2025 09:36:59 : epoch 68e78237 : compute-1 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Oct  9 09:36:59 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[15715]: 09/10/2025 09:36:59 : epoch 68e78237 : compute-1 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Oct  9 09:36:59 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[15715]: 09/10/2025 09:36:59 : epoch 68e78237 : compute-1 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Oct  9 09:36:59 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[15715]: 09/10/2025 09:36:59 : epoch 68e78237 : compute-1 : ganesha.nfsd-2[main] nfs_Init_svc :DISP :CRIT :Cannot acquire credentials for principal nfs
Oct  9 09:36:59 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[15715]: 09/10/2025 09:36:59 : epoch 68e78237 : compute-1 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Oct  9 09:36:59 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[15715]: 09/10/2025 09:36:59 : epoch 68e78237 : compute-1 : ganesha.nfsd-2[main] nfs_Init_admin_thread :NFS CB :EVENT :Admin thread initialized
Oct  9 09:36:59 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[15715]: 09/10/2025 09:36:59 : epoch 68e78237 : compute-1 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :EVENT :Callback creds directory (/var/run/ganesha) already exists
Oct  9 09:36:59 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[15715]: 09/10/2025 09:36:59 : epoch 68e78237 : compute-1 : ganesha.nfsd-2[main] find_keytab_entry :NFS CB :WARN :Configuration file does not specify default realm while getting default realm name
Oct  9 09:36:59 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[15715]: 09/10/2025 09:36:59 : epoch 68e78237 : compute-1 : ganesha.nfsd-2[main] gssd_refresh_krb5_machine_credential :NFS CB :CRIT :ERROR: gssd_refresh_krb5_machine_credential: no usable keytab entry found in keytab /etc/krb5.keytab for connection with host localhost
Oct  9 09:36:59 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[15715]: 09/10/2025 09:36:59 : epoch 68e78237 : compute-1 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :WARN :gssd_refresh_krb5_machine_credential failed (-1765328160:0)
Oct  9 09:36:59 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[15715]: 09/10/2025 09:36:59 : epoch 68e78237 : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :Starting delayed executor.
Oct  9 09:36:59 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[15715]: 09/10/2025 09:36:59 : epoch 68e78237 : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :gsh_dbusthread was started successfully
Oct  9 09:36:59 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[15715]: 09/10/2025 09:36:59 : epoch 68e78237 : compute-1 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :CRIT :DBUS not initialized, service thread exiting
Oct  9 09:36:59 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[15715]: 09/10/2025 09:36:59 : epoch 68e78237 : compute-1 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :EVENT :shutdown
Oct  9 09:36:59 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[15715]: 09/10/2025 09:36:59 : epoch 68e78237 : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :admin thread was started successfully
Oct  9 09:36:59 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[15715]: 09/10/2025 09:36:59 : epoch 68e78237 : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :reaper thread was started successfully
Oct  9 09:36:59 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[15715]: 09/10/2025 09:36:59 : epoch 68e78237 : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :General fridge was started successfully
Oct  9 09:36:59 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[15715]: 09/10/2025 09:36:59 : epoch 68e78237 : compute-1 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Oct  9 09:36:59 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[15715]: 09/10/2025 09:36:59 : epoch 68e78237 : compute-1 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :             NFS SERVER INITIALIZED
Oct  9 09:36:59 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[15715]: 09/10/2025 09:36:59 : epoch 68e78237 : compute-1 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Oct  9 09:36:59 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:36:59 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:36:59 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:36:59.258 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:37:00 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.cephfs"}]: dispatch
Oct  9 09:37:00 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.cephfs"}]': finished
Oct  9 09:37:00 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.1.0.compute-2.cpioam-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Oct  9 09:37:00 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.1.0.compute-2.cpioam-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]}]': finished
Oct  9 09:37:00 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:37:00 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:37:00 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:37:00 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:37:00.447 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:37:00 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[15715]: 09/10/2025 09:37:00 : epoch 68e78237 : compute-1 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 09:37:00 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[15715]: 09/10/2025 09:37:00 : epoch 68e78237 : compute-1 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 09:37:00 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[15715]: 09/10/2025 09:37:00 : epoch 68e78237 : compute-1 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 09:37:00 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e43 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  9 09:37:01 compute-1 ceph-mon[9795]: Rados config object exists: conf-nfs.cephfs
Oct  9 09:37:01 compute-1 ceph-mon[9795]: Creating key for client.nfs.cephfs.1.0.compute-2.cpioam-rgw
Oct  9 09:37:01 compute-1 ceph-mon[9795]: Bind address in nfs.cephfs.1.0.compute-2.cpioam's ganesha conf is defaulting to empty
Oct  9 09:37:01 compute-1 ceph-mon[9795]: Deploying daemon nfs.cephfs.1.0.compute-2.cpioam on compute-2
Oct  9 09:37:01 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:37:01 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:37:01 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:37:01 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.2.0.compute-0.rlqbpy", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=cephfs"]}]: dispatch
Oct  9 09:37:01 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.2.0.compute-0.rlqbpy", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=cephfs"]}]': finished
Oct  9 09:37:01 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.cephfs", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]}]: dispatch
Oct  9 09:37:01 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.cephfs", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]}]': finished
Oct  9 09:37:01 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:37:01 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:37:01 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:37:01.260 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:37:02 compute-1 ceph-mon[9795]: Creating key for client.nfs.cephfs.2.0.compute-0.rlqbpy
Oct  9 09:37:02 compute-1 ceph-mon[9795]: Ensuring nfs.cephfs.2 is in the ganesha grace table
Oct  9 09:37:02 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:37:02 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 09:37:02 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:37:02.449 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 09:37:03 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:37:03 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:37:03 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:37:03.264 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:37:03 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[15715]: 09/10/2025 09:37:03 : epoch 68e78237 : compute-1 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Oct  9 09:37:04 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.cephfs"}]: dispatch
Oct  9 09:37:04 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.cephfs"}]': finished
Oct  9 09:37:04 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.2.0.compute-0.rlqbpy-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Oct  9 09:37:04 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.2.0.compute-0.rlqbpy-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]}]': finished
Oct  9 09:37:04 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:37:04 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 09:37:04 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:37:04.452 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 09:37:05 compute-1 ceph-mon[9795]: Rados config object exists: conf-nfs.cephfs
Oct  9 09:37:05 compute-1 ceph-mon[9795]: Creating key for client.nfs.cephfs.2.0.compute-0.rlqbpy-rgw
Oct  9 09:37:05 compute-1 ceph-mon[9795]: Bind address in nfs.cephfs.2.0.compute-0.rlqbpy's ganesha conf is defaulting to empty
Oct  9 09:37:05 compute-1 ceph-mon[9795]: Deploying daemon nfs.cephfs.2.0.compute-0.rlqbpy on compute-0
Oct  9 09:37:05 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:37:05 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:37:05 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:37:05 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:37:05 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:37:05 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  9 09:37:05 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:37:05 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:37:05 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:37:05.267 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:37:05 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e43 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  9 09:37:06 compute-1 ceph-mon[9795]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #13. Immutable memtables: 0.
Oct  9 09:37:06 compute-1 ceph-mon[9795]: rocksdb: (Original Log Time 2025/10/09-09:37:06.044986) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct  9 09:37:06 compute-1 ceph-mon[9795]: rocksdb: [db/flush_job.cc:856] [default] [JOB 3] Flushing memtable with next log file: 13
Oct  9 09:37:06 compute-1 ceph-mon[9795]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760002626045051, "job": 3, "event": "flush_started", "num_memtables": 1, "num_entries": 5696, "num_deletes": 258, "total_data_size": 19262450, "memory_usage": 20425240, "flush_reason": "Manual Compaction"}
Oct  9 09:37:06 compute-1 ceph-mon[9795]: rocksdb: [db/flush_job.cc:885] [default] [JOB 3] Level-0 flush table #14: started
Oct  9 09:37:06 compute-1 ceph-mon[9795]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760002626065727, "cf_name": "default", "job": 3, "event": "table_file_creation", "file_number": 14, "file_size": 12329900, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 6, "largest_seqno": 5701, "table_properties": {"data_size": 12308297, "index_size": 13617, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 6917, "raw_key_size": 66672, "raw_average_key_size": 24, "raw_value_size": 12254607, "raw_average_value_size": 4451, "num_data_blocks": 604, "num_entries": 2753, "num_filter_entries": 2753, "num_deletions": 258, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760002515, "oldest_key_time": 1760002515, "file_creation_time": 1760002626, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "94a5d839-0858-4e7b-94a4-0a54b15338db", "db_session_id": "M9CZJU0HKVV71NP1SGV8", "orig_file_number": 14, "seqno_to_time_mapping": "N/A"}}
Oct  9 09:37:06 compute-1 ceph-mon[9795]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 3] Flush lasted 20764 microseconds, and 14365 cpu microseconds.
Oct  9 09:37:06 compute-1 ceph-mon[9795]: rocksdb: (Original Log Time 2025/10/09-09:37:06.065756) [db/flush_job.cc:967] [default] [JOB 3] Level-0 flush table #14: 12329900 bytes OK
Oct  9 09:37:06 compute-1 ceph-mon[9795]: rocksdb: (Original Log Time 2025/10/09-09:37:06.065770) [db/memtable_list.cc:519] [default] Level-0 commit table #14 started
Oct  9 09:37:06 compute-1 ceph-mon[9795]: rocksdb: (Original Log Time 2025/10/09-09:37:06.067561) [db/memtable_list.cc:722] [default] Level-0 commit table #14: memtable #1 done
Oct  9 09:37:06 compute-1 ceph-mon[9795]: rocksdb: (Original Log Time 2025/10/09-09:37:06.067572) EVENT_LOG_v1 {"time_micros": 1760002626067569, "job": 3, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [2, 0, 0, 0, 0, 0, 0], "immutable_memtables": 0}
Oct  9 09:37:06 compute-1 ceph-mon[9795]: rocksdb: (Original Log Time 2025/10/09-09:37:06.067582) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: files[2 0 0 0 0 0 0] max score 0.50
Oct  9 09:37:06 compute-1 ceph-mon[9795]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 3] Try to delete WAL files size 19231344, prev total WAL file size 19233248, number of live WAL files 2.
Oct  9 09:37:06 compute-1 ceph-mon[9795]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000009.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  9 09:37:06 compute-1 ceph-mon[9795]: rocksdb: (Original Log Time 2025/10/09-09:37:06.070000) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6B760030' seq:72057594037927935, type:22 .. '6B7600323534' seq:0, type:0; will stop at (end)
Oct  9 09:37:06 compute-1 ceph-mon[9795]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 4] Compacting 2@0 files to L6, score -1.00
Oct  9 09:37:06 compute-1 ceph-mon[9795]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 3 Base level 0, inputs: [14(11MB) 8(1648B)]
Oct  9 09:37:06 compute-1 ceph-mon[9795]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760002626070072, "job": 4, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [14, 8], "score": -1, "input_data_size": 12331548, "oldest_snapshot_seqno": -1}
Oct  9 09:37:06 compute-1 ceph-mon[9795]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 4] Generated table #15: 2499 keys, 12326269 bytes, temperature: kUnknown
Oct  9 09:37:06 compute-1 ceph-mon[9795]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760002626088753, "cf_name": "default", "job": 4, "event": "table_file_creation", "file_number": 15, "file_size": 12326269, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 12305323, "index_size": 13605, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 6277, "raw_key_size": 63153, "raw_average_key_size": 25, "raw_value_size": 12254887, "raw_average_value_size": 4903, "num_data_blocks": 602, "num_entries": 2499, "num_filter_entries": 2499, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760002515, "oldest_key_time": 0, "file_creation_time": 1760002626, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "94a5d839-0858-4e7b-94a4-0a54b15338db", "db_session_id": "M9CZJU0HKVV71NP1SGV8", "orig_file_number": 15, "seqno_to_time_mapping": "N/A"}}
Oct  9 09:37:06 compute-1 ceph-mon[9795]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  9 09:37:06 compute-1 ceph-mon[9795]: rocksdb: (Original Log Time 2025/10/09-09:37:06.088863) [db/compaction/compaction_job.cc:1663] [default] [JOB 4] Compacted 2@0 files to L6 => 12326269 bytes
Oct  9 09:37:06 compute-1 ceph-mon[9795]: rocksdb: (Original Log Time 2025/10/09-09:37:06.089146) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 659.1 rd, 658.8 wr, level 6, files in(2, 0) out(1 +0 blob) MB in(11.8, 0.0 +0.0 blob) out(11.8 +0.0 blob), read-write-amplify(2.0) write-amplify(1.0) OK, records in: 2758, records dropped: 259 output_compression: NoCompression
Oct  9 09:37:06 compute-1 ceph-mon[9795]: rocksdb: (Original Log Time 2025/10/09-09:37:06.089159) EVENT_LOG_v1 {"time_micros": 1760002626089154, "job": 4, "event": "compaction_finished", "compaction_time_micros": 18711, "compaction_time_cpu_micros": 14935, "output_level": 6, "num_output_files": 1, "total_output_size": 12326269, "num_input_records": 2758, "num_output_records": 2499, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct  9 09:37:06 compute-1 ceph-mon[9795]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000014.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  9 09:37:06 compute-1 ceph-mon[9795]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760002626090588, "job": 4, "event": "table_file_deletion", "file_number": 14}
Oct  9 09:37:06 compute-1 ceph-mon[9795]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000008.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  9 09:37:06 compute-1 ceph-mon[9795]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760002626090622, "job": 4, "event": "table_file_deletion", "file_number": 8}
Oct  9 09:37:06 compute-1 ceph-mon[9795]: rocksdb: (Original Log Time 2025/10/09-09:37:06.069926) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  9 09:37:06 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:37:06 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.002000020s ======
Oct  9 09:37:06 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:37:06.455 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000020s
Oct  9 09:37:06 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[15715]: 09/10/2025 09:37:06 : epoch 68e78237 : compute-1 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 09:37:06 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[15715]: 09/10/2025 09:37:06 : epoch 68e78237 : compute-1 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 09:37:06 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[15715]: 09/10/2025 09:37:06 : epoch 68e78237 : compute-1 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 09:37:06 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[15715]: 09/10/2025 09:37:06 : epoch 68e78237 : compute-1 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 09:37:06 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[15715]: 09/10/2025 09:37:06 : epoch 68e78237 : compute-1 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 09:37:06 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[15715]: 09/10/2025 09:37:06 : epoch 68e78237 : compute-1 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Oct  9 09:37:07 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:37:07 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 09:37:07 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:37:07.269 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 09:37:08 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:37:08 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:37:08 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:37:08 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:37:08 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:37:08 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:37:08.460 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:37:08 compute-1 podman[15928]: 2025-10-09 09:37:08.650011015 +0000 UTC m=+0.043002220 container exec cafaadfcff4fbda41fcfa94e93876b61b17048007232ab1b066948d6a6dac74a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-crash-compute-1, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, ceph=True)
Oct  9 09:37:08 compute-1 podman[15928]: 2025-10-09 09:37:08.730870965 +0000 UTC m=+0.123862180 container exec_died cafaadfcff4fbda41fcfa94e93876b61b17048007232ab1b066948d6a6dac74a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-crash-compute-1, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_REF=squid)
Oct  9 09:37:09 compute-1 podman[16023]: 2025-10-09 09:37:09.01320039 +0000 UTC m=+0.034662717 container exec 3deba343924d32000597864cdf8400e1a37662cac2bb278add92b15d21a42d33 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-node-exporter-compute-1, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct  9 09:37:09 compute-1 podman[16023]: 2025-10-09 09:37:09.020847928 +0000 UTC m=+0.042310255 container exec_died 3deba343924d32000597864cdf8400e1a37662cac2bb278add92b15d21a42d33 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-node-exporter-compute-1, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct  9 09:37:09 compute-1 podman[16098]: 2025-10-09 09:37:09.218978529 +0000 UTC m=+0.032755161 container exec 30db809f2ac03989f0e244b2e414c9d250c8954c2dcdcba3127a2c041812d793 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr, org.label-schema.schema-version=1.0, CEPH_REF=squid, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  9 09:37:09 compute-1 podman[16098]: 2025-10-09 09:37:09.228873093 +0000 UTC m=+0.042649726 container exec_died 30db809f2ac03989f0e244b2e414c9d250c8954c2dcdcba3127a2c041812d793 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct  9 09:37:09 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:37:09 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 09:37:09 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:37:09.270 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 09:37:10 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:37:10 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:37:10 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:37:10 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:37:10 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:37:10 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:37:10 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:37:10 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:37:10.462 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:37:10 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e43 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  9 09:37:11 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:37:11 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:37:11 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  9 09:37:11 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:37:11 compute-1 ceph-mon[9795]: Deploying daemon haproxy.nfs.cephfs.compute-1.oqhtjo on compute-1
Oct  9 09:37:11 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:37:11 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 09:37:11 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:37:11.272 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 09:37:12 compute-1 ceph-mon[9795]: Health check cleared: CEPHADM_APPLY_SPEC_FAIL (was: Failed to apply 1 service(s): ingress.nfs.cephfs)
Oct  9 09:37:12 compute-1 ceph-mon[9795]: Cluster is now healthy
Oct  9 09:37:12 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:37:12 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:37:12 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:37:12.464 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:37:12 compute-1 podman[16207]: 2025-10-09 09:37:12.83672523 +0000 UTC m=+2.241452121 container create 382c0a787733d387613296f0f79b236168b9fa8a0b4bb01522e38bcf2c579600 (image=quay.io/ceph/haproxy:2.3, name=exciting_lehmann)
Oct  9 09:37:12 compute-1 systemd[1]: Started libpod-conmon-382c0a787733d387613296f0f79b236168b9fa8a0b4bb01522e38bcf2c579600.scope.
Oct  9 09:37:12 compute-1 systemd[1]: Started libcrun container.
Oct  9 09:37:12 compute-1 podman[16207]: 2025-10-09 09:37:12.885413499 +0000 UTC m=+2.290140409 container init 382c0a787733d387613296f0f79b236168b9fa8a0b4bb01522e38bcf2c579600 (image=quay.io/ceph/haproxy:2.3, name=exciting_lehmann)
Oct  9 09:37:12 compute-1 podman[16207]: 2025-10-09 09:37:12.89023063 +0000 UTC m=+2.294957521 container start 382c0a787733d387613296f0f79b236168b9fa8a0b4bb01522e38bcf2c579600 (image=quay.io/ceph/haproxy:2.3, name=exciting_lehmann)
Oct  9 09:37:12 compute-1 podman[16207]: 2025-10-09 09:37:12.891342115 +0000 UTC m=+2.296069016 container attach 382c0a787733d387613296f0f79b236168b9fa8a0b4bb01522e38bcf2c579600 (image=quay.io/ceph/haproxy:2.3, name=exciting_lehmann)
Oct  9 09:37:12 compute-1 exciting_lehmann[16305]: 0 0
Oct  9 09:37:12 compute-1 podman[16207]: 2025-10-09 09:37:12.893893916 +0000 UTC m=+2.298620807 container died 382c0a787733d387613296f0f79b236168b9fa8a0b4bb01522e38bcf2c579600 (image=quay.io/ceph/haproxy:2.3, name=exciting_lehmann)
Oct  9 09:37:12 compute-1 systemd[1]: libpod-382c0a787733d387613296f0f79b236168b9fa8a0b4bb01522e38bcf2c579600.scope: Deactivated successfully.
Oct  9 09:37:12 compute-1 systemd[1]: var-lib-containers-storage-overlay-b67043b0ab48d77a326e5afd1e602e3b4cbff68e49c4777c651176fb8a7accc3-merged.mount: Deactivated successfully.
Oct  9 09:37:12 compute-1 podman[16207]: 2025-10-09 09:37:12.912310335 +0000 UTC m=+2.317037226 container remove 382c0a787733d387613296f0f79b236168b9fa8a0b4bb01522e38bcf2c579600 (image=quay.io/ceph/haproxy:2.3, name=exciting_lehmann)
Oct  9 09:37:12 compute-1 podman[16207]: 2025-10-09 09:37:12.82702356 +0000 UTC m=+2.231750472 image pull e85424b0d443f37ddd2dd8a3bb2ef6f18dd352b987723a921b64289023af2914 quay.io/ceph/haproxy:2.3
Oct  9 09:37:12 compute-1 systemd[1]: libpod-conmon-382c0a787733d387613296f0f79b236168b9fa8a0b4bb01522e38bcf2c579600.scope: Deactivated successfully.
Oct  9 09:37:12 compute-1 systemd[1]: Reloading.
Oct  9 09:37:13 compute-1 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  9 09:37:13 compute-1 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  9 09:37:13 compute-1 systemd[1]: Reloading.
Oct  9 09:37:13 compute-1 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  9 09:37:13 compute-1 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  9 09:37:13 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:37:13 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:37:13 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:37:13.275 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:37:13 compute-1 systemd[1]: Starting Ceph haproxy.nfs.cephfs.compute-1.oqhtjo for 286f8bf0-da72-5823-9a4e-ac4457d9e609...
Oct  9 09:37:13 compute-1 podman[16439]: 2025-10-09 09:37:13.508246952 +0000 UTC m=+0.027367334 container create 67720561c1a99e6e32e330a552a62a5ac4d38a51a1e32f841b990e11458d61b3 (image=quay.io/ceph/haproxy:2.3, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-haproxy-nfs-cephfs-compute-1-oqhtjo)
Oct  9 09:37:13 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/743fa8842ec3c099644aaec4ffea3c5649147df8feaf40b47accfee2667036f6/merged/var/lib/haproxy supports timestamps until 2038 (0x7fffffff)
Oct  9 09:37:13 compute-1 podman[16439]: 2025-10-09 09:37:13.551168269 +0000 UTC m=+0.070288671 container init 67720561c1a99e6e32e330a552a62a5ac4d38a51a1e32f841b990e11458d61b3 (image=quay.io/ceph/haproxy:2.3, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-haproxy-nfs-cephfs-compute-1-oqhtjo)
Oct  9 09:37:13 compute-1 podman[16439]: 2025-10-09 09:37:13.555011694 +0000 UTC m=+0.074132077 container start 67720561c1a99e6e32e330a552a62a5ac4d38a51a1e32f841b990e11458d61b3 (image=quay.io/ceph/haproxy:2.3, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-haproxy-nfs-cephfs-compute-1-oqhtjo)
Oct  9 09:37:13 compute-1 bash[16439]: 67720561c1a99e6e32e330a552a62a5ac4d38a51a1e32f841b990e11458d61b3
Oct  9 09:37:13 compute-1 podman[16439]: 2025-10-09 09:37:13.496590137 +0000 UTC m=+0.015710538 image pull e85424b0d443f37ddd2dd8a3bb2ef6f18dd352b987723a921b64289023af2914 quay.io/ceph/haproxy:2.3
Oct  9 09:37:13 compute-1 systemd[1]: Started Ceph haproxy.nfs.cephfs.compute-1.oqhtjo for 286f8bf0-da72-5823-9a4e-ac4457d9e609.
Oct  9 09:37:13 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-haproxy-nfs-cephfs-compute-1-oqhtjo[16451]: [NOTICE] 281/093713 (2) : New worker #1 (4) forked
Oct  9 09:37:13 compute-1 kernel: ganesha.nfsd[16461]: segfault at 50 ip 00007f0c6750332e sp 00007f0c25ffa210 error 4 likely on CPU 3 (core 0, socket 3)
Oct  9 09:37:13 compute-1 kernel: Code: 47 20 66 41 89 86 f2 00 00 00 41 bf 01 00 00 00 b9 40 00 00 00 e9 af fd ff ff 66 90 48 8b 85 f8 00 00 00 48 8b 40 08 4c 8b 28 <45> 8b 65 50 49 8b 75 68 41 8b be 28 02 00 00 b9 40 00 00 00 e8 29
Oct  9 09:37:13 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[15715]: 09/10/2025 09:37:13 : epoch 68e78237 : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0bbc000df0 fd 37 proxy ignored for local
Oct  9 09:37:13 compute-1 systemd[1]: Created slice Slice /system/systemd-coredump.
Oct  9 09:37:13 compute-1 systemd[1]: Started Process Core Dump (PID 16464/UID 0).
Oct  9 09:37:14 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:37:14 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:37:14 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:37:14.466 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:37:14 compute-1 systemd-coredump[16465]: Process 15719 (ganesha.nfsd) of user 0 dumped core.#012#012Stack trace of thread 52:#012#0  0x00007f0c6750332e n/a (/usr/lib64/libntirpc.so.5.8 + 0x2232e)#012ELF object binary architecture: AMD x86-64
Oct  9 09:37:14 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:37:14 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:37:14 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:37:14 compute-1 ceph-mon[9795]: Deploying daemon haproxy.nfs.cephfs.compute-0.ujrhwc on compute-0
Oct  9 09:37:14 compute-1 systemd[1]: systemd-coredump@0-16464-0.service: Deactivated successfully.
Oct  9 09:37:14 compute-1 podman[16472]: 2025-10-09 09:37:14.660380308 +0000 UTC m=+0.017571207 container died 30db809f2ac03989f0e244b2e414c9d250c8954c2dcdcba3127a2c041812d793 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  9 09:37:14 compute-1 systemd[1]: var-lib-containers-storage-overlay-5da54272e3cb2b611f21fd5875326bda1bac5460ce62e8b705d930ee72235bd8-merged.mount: Deactivated successfully.
Oct  9 09:37:14 compute-1 podman[16472]: 2025-10-09 09:37:14.677341993 +0000 UTC m=+0.034532883 container remove 30db809f2ac03989f0e244b2e414c9d250c8954c2dcdcba3127a2c041812d793 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2)
Oct  9 09:37:14 compute-1 systemd[1]: ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609@nfs.cephfs.0.0.compute-1.douegr.service: Main process exited, code=exited, status=139/n/a
Oct  9 09:37:14 compute-1 systemd[1]: ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609@nfs.cephfs.0.0.compute-1.douegr.service: Failed with result 'exit-code'.
Oct  9 09:37:15 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:37:15 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:37:15 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:37:15.278 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:37:15 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e43 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  9 09:37:15 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:37:15 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:37:15 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:37:15 compute-1 ceph-mon[9795]: Deploying daemon haproxy.nfs.cephfs.compute-2.iyubhq on compute-2
Oct  9 09:37:16 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:37:16 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:37:16 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:37:16.468 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:37:17 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:37:17 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:37:17 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:37:17 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:37:17 compute-1 ceph-mon[9795]: 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Oct  9 09:37:17 compute-1 ceph-mon[9795]: 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Oct  9 09:37:17 compute-1 ceph-mon[9795]: 192.168.122.2 is in 192.168.122.0/24 on compute-1 interface br-ex
Oct  9 09:37:17 compute-1 ceph-mon[9795]: Deploying daemon keepalived.nfs.cephfs.compute-2.dgxvnq on compute-2
Oct  9 09:37:17 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:37:17 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:37:17 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:37:17.281 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:37:18 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:37:18 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:37:18 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:37:18.470 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:37:18 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:37:18 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:37:18 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:37:18 compute-1 ceph-mon[9795]: 192.168.122.2 is in 192.168.122.0/24 on compute-1 interface br-ex
Oct  9 09:37:18 compute-1 ceph-mon[9795]: 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Oct  9 09:37:18 compute-1 ceph-mon[9795]: 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Oct  9 09:37:18 compute-1 ceph-mon[9795]: Deploying daemon keepalived.nfs.cephfs.compute-1.zabdum on compute-1
Oct  9 09:37:19 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:37:19 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:37:19 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:37:19.283 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:37:19 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-haproxy-nfs-cephfs-compute-1-oqhtjo[16451]: [WARNING] 281/093719 (4) : Server backend/nfs.cephfs.0 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Oct  9 09:37:20 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:37:20 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:37:20 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:37:20.472 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:37:20 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:37:20 compute-1 podman[16588]: 2025-10-09 09:37:20.598812742 +0000 UTC m=+2.760373948 container create 90e74662896b689d666b43b37e59ecb37ac0efed905d3da6c23020356422fc39 (image=quay.io/ceph/keepalived:2.2.4, name=funny_rhodes, vcs-type=git, release=1793, architecture=x86_64, summary=Provides keepalived on RHEL 9 for Ceph., version=2.2.4, io.k8s.display-name=Keepalived on RHEL 9, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, io.openshift.tags=Ceph keepalived, build-date=2023-02-22T09:23:20, com.redhat.component=keepalived-container, io.buildah.version=1.28.2, io.openshift.expose-services=, name=keepalived, description=keepalived for Ceph, vendor=Red Hat, Inc., maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Oct  9 09:37:20 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e43 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  9 09:37:20 compute-1 systemd[1]: Started libpod-conmon-90e74662896b689d666b43b37e59ecb37ac0efed905d3da6c23020356422fc39.scope.
Oct  9 09:37:20 compute-1 systemd[1]: Started libcrun container.
Oct  9 09:37:20 compute-1 podman[16588]: 2025-10-09 09:37:20.655351601 +0000 UTC m=+2.816912827 container init 90e74662896b689d666b43b37e59ecb37ac0efed905d3da6c23020356422fc39 (image=quay.io/ceph/keepalived:2.2.4, name=funny_rhodes, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.openshift.expose-services=, name=keepalived, description=keepalived for Ceph, io.buildah.version=1.28.2, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, summary=Provides keepalived on RHEL 9 for Ceph., io.k8s.display-name=Keepalived on RHEL 9, build-date=2023-02-22T09:23:20, vendor=Red Hat, Inc., com.redhat.component=keepalived-container, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=Ceph keepalived, version=2.2.4, release=1793)
Oct  9 09:37:20 compute-1 podman[16588]: 2025-10-09 09:37:20.587554007 +0000 UTC m=+2.749115232 image pull 4a3a1ff181d97c6dcfa9138ad76eb99fa2c1e840298461d5a7a56133bc05b9a2 quay.io/ceph/keepalived:2.2.4
Oct  9 09:37:20 compute-1 podman[16588]: 2025-10-09 09:37:20.660794953 +0000 UTC m=+2.822356148 container start 90e74662896b689d666b43b37e59ecb37ac0efed905d3da6c23020356422fc39 (image=quay.io/ceph/keepalived:2.2.4, name=funny_rhodes, io.openshift.expose-services=, name=keepalived, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, io.k8s.display-name=Keepalived on RHEL 9, vcs-type=git, distribution-scope=public, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=2.2.4, build-date=2023-02-22T09:23:20, com.redhat.component=keepalived-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, vendor=Red Hat, Inc., io.openshift.tags=Ceph keepalived, summary=Provides keepalived on RHEL 9 for Ceph., description=keepalived for Ceph, release=1793, io.buildah.version=1.28.2)
Oct  9 09:37:20 compute-1 podman[16588]: 2025-10-09 09:37:20.66272994 +0000 UTC m=+2.824291147 container attach 90e74662896b689d666b43b37e59ecb37ac0efed905d3da6c23020356422fc39 (image=quay.io/ceph/keepalived:2.2.4, name=funny_rhodes, vendor=Red Hat, Inc., com.redhat.component=keepalived-container, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, io.k8s.display-name=Keepalived on RHEL 9, vcs-type=git, release=1793, name=keepalived, version=2.2.4, io.buildah.version=1.28.2, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=Ceph keepalived, summary=Provides keepalived on RHEL 9 for Ceph., distribution-scope=public, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.openshift.expose-services=, architecture=x86_64, description=keepalived for Ceph, build-date=2023-02-22T09:23:20)
Oct  9 09:37:20 compute-1 funny_rhodes[16672]: 0 0
Oct  9 09:37:20 compute-1 systemd[1]: libpod-90e74662896b689d666b43b37e59ecb37ac0efed905d3da6c23020356422fc39.scope: Deactivated successfully.
Oct  9 09:37:20 compute-1 podman[16588]: 2025-10-09 09:37:20.665956674 +0000 UTC m=+2.827517880 container died 90e74662896b689d666b43b37e59ecb37ac0efed905d3da6c23020356422fc39 (image=quay.io/ceph/keepalived:2.2.4, name=funny_rhodes, io.k8s.display-name=Keepalived on RHEL 9, build-date=2023-02-22T09:23:20, distribution-scope=public, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, architecture=x86_64, name=keepalived, io.openshift.tags=Ceph keepalived, version=2.2.4, description=keepalived for Ceph, vcs-type=git, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides keepalived on RHEL 9 for Ceph., io.buildah.version=1.28.2, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, release=1793, vendor=Red Hat, Inc., com.redhat.component=keepalived-container, io.openshift.expose-services=)
Oct  9 09:37:20 compute-1 systemd[1]: var-lib-containers-storage-overlay-4b46b2208a2f8c074f67b954b7996bb4d1053979c20e54507f1dba52105dc27b-merged.mount: Deactivated successfully.
Oct  9 09:37:20 compute-1 podman[16588]: 2025-10-09 09:37:20.683858954 +0000 UTC m=+2.845420160 container remove 90e74662896b689d666b43b37e59ecb37ac0efed905d3da6c23020356422fc39 (image=quay.io/ceph/keepalived:2.2.4, name=funny_rhodes, distribution-scope=public, io.openshift.tags=Ceph keepalived, version=2.2.4, vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, build-date=2023-02-22T09:23:20, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, name=keepalived, io.k8s.display-name=Keepalived on RHEL 9, release=1793, architecture=x86_64, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.28.2, com.redhat.component=keepalived-container, io.openshift.expose-services=, summary=Provides keepalived on RHEL 9 for Ceph., vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, description=keepalived for Ceph)
Oct  9 09:37:20 compute-1 systemd[1]: libpod-conmon-90e74662896b689d666b43b37e59ecb37ac0efed905d3da6c23020356422fc39.scope: Deactivated successfully.
Oct  9 09:37:20 compute-1 systemd[1]: Reloading.
Oct  9 09:37:20 compute-1 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  9 09:37:20 compute-1 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  9 09:37:20 compute-1 systemd[1]: Reloading.
Oct  9 09:37:20 compute-1 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  9 09:37:20 compute-1 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  9 09:37:21 compute-1 systemd[1]: Starting Ceph keepalived.nfs.cephfs.compute-1.zabdum for 286f8bf0-da72-5823-9a4e-ac4457d9e609...
Oct  9 09:37:21 compute-1 podman[16805]: 2025-10-09 09:37:21.267289504 +0000 UTC m=+0.026296045 container create 85d12475a64c9e2afed72ac4ba3c0ca1148840282b925163459f0ea258aea5a3 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-1-zabdum, version=2.2.4, io.buildah.version=1.28.2, summary=Provides keepalived on RHEL 9 for Ceph., vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, build-date=2023-02-22T09:23:20, vendor=Red Hat, Inc., com.redhat.component=keepalived-container, io.openshift.expose-services=, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, distribution-scope=public, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.openshift.tags=Ceph keepalived, description=keepalived for Ceph, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, name=keepalived, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1793, io.k8s.display-name=Keepalived on RHEL 9)
Oct  9 09:37:21 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:37:21 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:37:21 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:37:21.287 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:37:21 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7290c0f81c2587cecd84864f4e16047095eaae794dc2e21771419b98ef5168d2/merged/etc/keepalived/keepalived.conf supports timestamps until 2038 (0x7fffffff)
Oct  9 09:37:21 compute-1 podman[16805]: 2025-10-09 09:37:21.304254511 +0000 UTC m=+0.063261052 container init 85d12475a64c9e2afed72ac4ba3c0ca1148840282b925163459f0ea258aea5a3 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-1-zabdum, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, description=keepalived for Ceph, vcs-type=git, vendor=Red Hat, Inc., io.openshift.expose-services=, architecture=x86_64, version=2.2.4, io.k8s.display-name=Keepalived on RHEL 9, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, io.openshift.tags=Ceph keepalived, build-date=2023-02-22T09:23:20, release=1793, com.redhat.component=keepalived-container, io.buildah.version=1.28.2, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, name=keepalived, summary=Provides keepalived on RHEL 9 for Ceph., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Oct  9 09:37:21 compute-1 podman[16805]: 2025-10-09 09:37:21.307719654 +0000 UTC m=+0.066726185 container start 85d12475a64c9e2afed72ac4ba3c0ca1148840282b925163459f0ea258aea5a3 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-1-zabdum, io.k8s.display-name=Keepalived on RHEL 9, vcs-type=git, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides keepalived on RHEL 9 for Ceph., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.28.2, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, com.redhat.component=keepalived-container, io.openshift.expose-services=, distribution-scope=public, build-date=2023-02-22T09:23:20, version=2.2.4, name=keepalived, vendor=Red Hat, Inc., architecture=x86_64, io.openshift.tags=Ceph keepalived, description=keepalived for Ceph, release=1793, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793)
Oct  9 09:37:21 compute-1 bash[16805]: 85d12475a64c9e2afed72ac4ba3c0ca1148840282b925163459f0ea258aea5a3
Oct  9 09:37:21 compute-1 podman[16805]: 2025-10-09 09:37:21.256502487 +0000 UTC m=+0.015509038 image pull 4a3a1ff181d97c6dcfa9138ad76eb99fa2c1e840298461d5a7a56133bc05b9a2 quay.io/ceph/keepalived:2.2.4
Oct  9 09:37:21 compute-1 systemd[1]: Started Ceph keepalived.nfs.cephfs.compute-1.zabdum for 286f8bf0-da72-5823-9a4e-ac4457d9e609.
Oct  9 09:37:21 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-1-zabdum[16817]: Thu Oct  9 09:37:21 2025: Starting Keepalived v2.2.4 (08/21,2021)
Oct  9 09:37:21 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-1-zabdum[16817]: Thu Oct  9 09:37:21 2025: Running on Linux 5.14.0-620.el9.x86_64 #1 SMP PREEMPT_DYNAMIC Fri Sep 26 01:13:23 UTC 2025 (built for Linux 5.14.0)
Oct  9 09:37:21 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-1-zabdum[16817]: Thu Oct  9 09:37:21 2025: Command line: '/usr/sbin/keepalived' '-n' '-l' '-f' '/etc/keepalived/keepalived.conf'
Oct  9 09:37:21 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-1-zabdum[16817]: Thu Oct  9 09:37:21 2025: Configuration file /etc/keepalived/keepalived.conf
Oct  9 09:37:21 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-1-zabdum[16817]: Thu Oct  9 09:37:21 2025: NOTICE: setting config option max_auto_priority should result in better keepalived performance
Oct  9 09:37:21 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-1-zabdum[16817]: Thu Oct  9 09:37:21 2025: Starting VRRP child process, pid=4
Oct  9 09:37:21 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-1-zabdum[16817]: Thu Oct  9 09:37:21 2025: Startup complete
Oct  9 09:37:21 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-1-zabdum[16817]: Thu Oct  9 09:37:21 2025: (VI_0) Entering BACKUP STATE (init)
Oct  9 09:37:21 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-1-zabdum[16817]: Thu Oct  9 09:37:21 2025: VRRP_Script(check_backend) succeeded
Oct  9 09:37:21 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:37:21 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:37:21 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:37:22 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:37:22 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:37:22 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:37:22.474 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:37:22 compute-1 ceph-mon[9795]: 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Oct  9 09:37:22 compute-1 ceph-mon[9795]: 192.168.122.2 is in 192.168.122.0/24 on compute-1 interface br-ex
Oct  9 09:37:22 compute-1 ceph-mon[9795]: 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Oct  9 09:37:22 compute-1 ceph-mon[9795]: Deploying daemon keepalived.nfs.cephfs.compute-0.qjivil on compute-0
Oct  9 09:37:22 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:37:22 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:37:22 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:37:22 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:37:22 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:37:22 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  9 09:37:23 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:37:23 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 09:37:23 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:37:23.289 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 09:37:24 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:37:24 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:37:24 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:37:24.476 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:37:24 compute-1 systemd[1]: ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609@nfs.cephfs.0.0.compute-1.douegr.service: Scheduled restart job, restart counter is at 1.
Oct  9 09:37:24 compute-1 systemd[1]: Stopped Ceph nfs.cephfs.0.0.compute-1.douegr for 286f8bf0-da72-5823-9a4e-ac4457d9e609.
Oct  9 09:37:24 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-1-zabdum[16817]: Thu Oct  9 09:37:24 2025: (VI_0) Entering MASTER STATE
Oct  9 09:37:24 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-1-zabdum[16817]: Thu Oct  9 09:37:24 2025: (VI_0) Master received advert from 192.168.122.102 with same priority 90 but higher IP address than ours
Oct  9 09:37:24 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-1-zabdum[16817]: Thu Oct  9 09:37:24 2025: (VI_0) Entering BACKUP STATE
Oct  9 09:37:24 compute-1 systemd[1]: Starting Ceph nfs.cephfs.0.0.compute-1.douegr for 286f8bf0-da72-5823-9a4e-ac4457d9e609...
Oct  9 09:37:25 compute-1 podman[16864]: 2025-10-09 09:37:25.116194741 +0000 UTC m=+0.025445963 container create 7f982dba46c09e32374946c3615304057f1afed02125098457e6bcb96418ba91 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325)
Oct  9 09:37:25 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f865182eb2db54c494aef1da904f162734e7f32df2284cc048bfd6f178c04dd7/merged/etc/ganesha supports timestamps until 2038 (0x7fffffff)
Oct  9 09:37:25 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f865182eb2db54c494aef1da904f162734e7f32df2284cc048bfd6f178c04dd7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  9 09:37:25 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f865182eb2db54c494aef1da904f162734e7f32df2284cc048bfd6f178c04dd7/merged/etc/ceph/keyring supports timestamps until 2038 (0x7fffffff)
Oct  9 09:37:25 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f865182eb2db54c494aef1da904f162734e7f32df2284cc048bfd6f178c04dd7/merged/var/lib/ceph/radosgw/ceph-nfs.cephfs.0.0.compute-1.douegr-rgw/keyring supports timestamps until 2038 (0x7fffffff)
Oct  9 09:37:25 compute-1 podman[16864]: 2025-10-09 09:37:25.150625802 +0000 UTC m=+0.059877044 container init 7f982dba46c09e32374946c3615304057f1afed02125098457e6bcb96418ba91 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  9 09:37:25 compute-1 podman[16864]: 2025-10-09 09:37:25.155224671 +0000 UTC m=+0.064475894 container start 7f982dba46c09e32374946c3615304057f1afed02125098457e6bcb96418ba91 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  9 09:37:25 compute-1 bash[16864]: 7f982dba46c09e32374946c3615304057f1afed02125098457e6bcb96418ba91
Oct  9 09:37:25 compute-1 podman[16864]: 2025-10-09 09:37:25.105783273 +0000 UTC m=+0.015034516 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  9 09:37:25 compute-1 systemd[1]: Started Ceph nfs.cephfs.0.0.compute-1.douegr for 286f8bf0-da72-5823-9a4e-ac4457d9e609.
Oct  9 09:37:25 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[16876]: 09/10/2025 09:37:25 : epoch 68e78255 : compute-1 : ganesha.nfsd-2[main] init_logging :LOG :NULL :LOG: Setting log level for all components to NIV_EVENT
Oct  9 09:37:25 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[16876]: 09/10/2025 09:37:25 : epoch 68e78255 : compute-1 : ganesha.nfsd-2[main] main :MAIN :EVENT :ganesha.nfsd Starting: Ganesha Version 5.9
Oct  9 09:37:25 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[16876]: 09/10/2025 09:37:25 : epoch 68e78255 : compute-1 : ganesha.nfsd-2[main] nfs_set_param_from_conf :NFS STARTUP :EVENT :Configuration file successfully parsed
Oct  9 09:37:25 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[16876]: 09/10/2025 09:37:25 : epoch 68e78255 : compute-1 : ganesha.nfsd-2[main] monitoring_init :NFS STARTUP :EVENT :Init monitoring at 0.0.0.0:9587
Oct  9 09:37:25 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[16876]: 09/10/2025 09:37:25 : epoch 68e78255 : compute-1 : ganesha.nfsd-2[main] fsal_init_fds_limit :MDCACHE LRU :EVENT :Setting the system-imposed limit on FDs to 1048576.
Oct  9 09:37:25 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[16876]: 09/10/2025 09:37:25 : epoch 68e78255 : compute-1 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :Initializing ID Mapper.
Oct  9 09:37:25 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[16876]: 09/10/2025 09:37:25 : epoch 68e78255 : compute-1 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :ID Mapper successfully initialized.
Oct  9 09:37:25 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[16876]: 09/10/2025 09:37:25 : epoch 68e78255 : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 09:37:25 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:37:25 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:37:25 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:37:25.290 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:37:25 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:37:25 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e43 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  9 09:37:25 compute-1 systemd-logind[798]: New session 21 of user zuul.
Oct  9 09:37:25 compute-1 systemd[1]: Started Session 21 of User zuul.
Oct  9 09:37:26 compute-1 podman[17106]: 2025-10-09 09:37:26.267671538 +0000 UTC m=+0.040062547 container exec cafaadfcff4fbda41fcfa94e93876b61b17048007232ab1b066948d6a6dac74a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-crash-compute-1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Oct  9 09:37:26 compute-1 podman[17106]: 2025-10-09 09:37:26.342070057 +0000 UTC m=+0.114461066 container exec_died cafaadfcff4fbda41fcfa94e93876b61b17048007232ab1b066948d6a6dac74a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-crash-compute-1, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_REF=squid, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  9 09:37:26 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:37:26 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:37:26 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:37:26.479 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:37:26 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:37:26 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:37:26 compute-1 podman[17301]: 2025-10-09 09:37:26.651903194 +0000 UTC m=+0.036937137 container exec 3deba343924d32000597864cdf8400e1a37662cac2bb278add92b15d21a42d33 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-node-exporter-compute-1, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct  9 09:37:26 compute-1 podman[17301]: 2025-10-09 09:37:26.658880256 +0000 UTC m=+0.043914189 container exec_died 3deba343924d32000597864cdf8400e1a37662cac2bb278add92b15d21a42d33 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-node-exporter-compute-1, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct  9 09:37:26 compute-1 python3.9[17268]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  9 09:37:26 compute-1 podman[17382]: 2025-10-09 09:37:26.879013959 +0000 UTC m=+0.037600978 container exec 7f982dba46c09e32374946c3615304057f1afed02125098457e6bcb96418ba91 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325)
Oct  9 09:37:26 compute-1 podman[17382]: 2025-10-09 09:37:26.889875645 +0000 UTC m=+0.048462665 container exec_died 7f982dba46c09e32374946c3615304057f1afed02125098457e6bcb96418ba91 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Oct  9 09:37:27 compute-1 podman[17449]: 2025-10-09 09:37:27.030357325 +0000 UTC m=+0.034439196 container exec 67720561c1a99e6e32e330a552a62a5ac4d38a51a1e32f841b990e11458d61b3 (image=quay.io/ceph/haproxy:2.3, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-haproxy-nfs-cephfs-compute-1-oqhtjo)
Oct  9 09:37:27 compute-1 podman[17467]: 2025-10-09 09:37:27.087798305 +0000 UTC m=+0.045960928 container exec_died 67720561c1a99e6e32e330a552a62a5ac4d38a51a1e32f841b990e11458d61b3 (image=quay.io/ceph/haproxy:2.3, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-haproxy-nfs-cephfs-compute-1-oqhtjo)
Oct  9 09:37:27 compute-1 podman[17449]: 2025-10-09 09:37:27.090859115 +0000 UTC m=+0.094940976 container exec_died 67720561c1a99e6e32e330a552a62a5ac4d38a51a1e32f841b990e11458d61b3 (image=quay.io/ceph/haproxy:2.3, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-haproxy-nfs-cephfs-compute-1-oqhtjo)
Oct  9 09:37:27 compute-1 podman[17501]: 2025-10-09 09:37:27.240156465 +0000 UTC m=+0.050158301 container exec 85d12475a64c9e2afed72ac4ba3c0ca1148840282b925163459f0ea258aea5a3 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-1-zabdum, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides keepalived on RHEL 9 for Ceph., io.buildah.version=1.28.2, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.k8s.display-name=Keepalived on RHEL 9, name=keepalived, description=keepalived for Ceph, build-date=2023-02-22T09:23:20, release=1793, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, vcs-type=git, io.openshift.expose-services=, version=2.2.4, com.redhat.component=keepalived-container, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=Ceph keepalived, architecture=x86_64)
Oct  9 09:37:27 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:37:27 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:37:27 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:37:27.293 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:37:27 compute-1 podman[17533]: 2025-10-09 09:37:27.300769304 +0000 UTC m=+0.045004175 container exec_died 85d12475a64c9e2afed72ac4ba3c0ca1148840282b925163459f0ea258aea5a3 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-1-zabdum, io.k8s.display-name=Keepalived on RHEL 9, name=keepalived, vendor=Red Hat, Inc., build-date=2023-02-22T09:23:20, architecture=x86_64, description=keepalived for Ceph, io.openshift.tags=Ceph keepalived, summary=Provides keepalived on RHEL 9 for Ceph., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, release=1793, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, version=2.2.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, vcs-type=git, com.redhat.component=keepalived-container, io.buildah.version=1.28.2, distribution-scope=public)
Oct  9 09:37:27 compute-1 podman[17501]: 2025-10-09 09:37:27.303019657 +0000 UTC m=+0.113021483 container exec_died 85d12475a64c9e2afed72ac4ba3c0ca1148840282b925163459f0ea258aea5a3 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-1-zabdum, version=2.2.4, vcs-type=git, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, release=1793, io.openshift.tags=Ceph keepalived, io.buildah.version=1.28.2, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=keepalived for Ceph, build-date=2023-02-22T09:23:20, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, com.redhat.component=keepalived-container, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides keepalived on RHEL 9 for Ceph., io.k8s.display-name=Keepalived on RHEL 9, architecture=x86_64, io.openshift.expose-services=, name=keepalived)
Oct  9 09:37:27 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:37:27 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:37:28 compute-1 python3.9[17718]: ansible-ansible.legacy.command Invoked with _raw_params=set -euxo pipefail#012pushd /var/tmp#012curl -sL https://github.com/openstack-k8s-operators/repo-setup/archive/refs/heads/main.tar.gz | tar -xz#012pushd repo-setup-main#012python3 -m venv ./venv#012PBR_VERSION=0.0.0 ./venv/bin/pip install ./#012./venv/bin/repo-setup current-podified -b antelope#012popd#012rm -rf repo-setup-main#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  9 09:37:28 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:37:28 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:37:28 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:37:28.481 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:37:28 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:37:28 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:37:28 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:37:28 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:37:28 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  9 09:37:28 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:37:28 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:37:28 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  9 09:37:29 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:37:29 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:37:29 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:37:29.295 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:37:30 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:37:30 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:37:30 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:37:30.483 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:37:30 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e43 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  9 09:37:31 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[16876]: 09/10/2025 09:37:31 : epoch 68e78255 : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 09:37:31 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[16876]: 09/10/2025 09:37:31 : epoch 68e78255 : compute-1 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 09:37:31 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:37:31 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:37:31 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:37:31.296 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:37:32 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:37:32 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:37:32 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Oct  9 09:37:32 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:37:32 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:37:32 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-0.lwqgfy", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Oct  9 09:37:32 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:37:32 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:37:32 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:37:32.485 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:37:33 compute-1 ceph-mon[9795]: Reconfiguring mon.compute-0 (monmap changed)...
Oct  9 09:37:33 compute-1 ceph-mon[9795]: Reconfiguring daemon mon.compute-0 on compute-0
Oct  9 09:37:33 compute-1 ceph-mon[9795]: Reconfiguring mgr.compute-0.lwqgfy (monmap changed)...
Oct  9 09:37:33 compute-1 ceph-mon[9795]: Reconfiguring daemon mgr.compute-0.lwqgfy on compute-0
Oct  9 09:37:33 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:37:33 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:37:33 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Oct  9 09:37:33 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:37:33 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:37:33 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get", "entity": "osd.1"}]: dispatch
Oct  9 09:37:33 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:37:33 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:37:33 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:37:33.299 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:37:34 compute-1 ceph-mon[9795]: Reconfiguring crash.compute-0 (monmap changed)...
Oct  9 09:37:34 compute-1 ceph-mon[9795]: Reconfiguring daemon crash.compute-0 on compute-0
Oct  9 09:37:34 compute-1 ceph-mon[9795]: Reconfiguring osd.1 (monmap changed)...
Oct  9 09:37:34 compute-1 ceph-mon[9795]: Reconfiguring daemon osd.1 on compute-0
Oct  9 09:37:34 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:37:34 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:37:34 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:37:34 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:37:34 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:37:34 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:37:34 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:37:34.487 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:37:34 compute-1 systemd[1]: session-21.scope: Deactivated successfully.
Oct  9 09:37:34 compute-1 systemd[1]: session-21.scope: Consumed 6.344s CPU time.
Oct  9 09:37:34 compute-1 systemd-logind[798]: Session 21 logged out. Waiting for processes to exit.
Oct  9 09:37:34 compute-1 systemd-logind[798]: Removed session 21.
Oct  9 09:37:35 compute-1 ceph-mon[9795]: Reconfiguring node-exporter.compute-0 (unknown last config time)...
Oct  9 09:37:35 compute-1 ceph-mon[9795]: Reconfiguring daemon node-exporter.compute-0 on compute-0
Oct  9 09:37:35 compute-1 ceph-mon[9795]: Reconfiguring alertmanager.compute-0 (dependencies changed)...
Oct  9 09:37:35 compute-1 ceph-mon[9795]: Reconfiguring daemon alertmanager.compute-0 on compute-0
Oct  9 09:37:35 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:37:35 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:37:35 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:37:35 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:37:35 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:37:35.300 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:37:35 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e43 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  9 09:37:36 compute-1 ceph-mon[9795]: Reconfiguring grafana.compute-0 (dependencies changed)...
Oct  9 09:37:36 compute-1 ceph-mon[9795]: Reconfiguring daemon grafana.compute-0 on compute-0
Oct  9 09:37:36 compute-1 podman[17896]: 2025-10-09 09:37:36.474335179 +0000 UTC m=+0.025915363 container create 94dbe69ac568adcd3a32cebe53f8c5f84fdde713798aee5d7f1cf78cb770b9c5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=frosty_shtern, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid)
Oct  9 09:37:36 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:37:36 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:37:36 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:37:36.489 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:37:36 compute-1 systemd[1]: Started libpod-conmon-94dbe69ac568adcd3a32cebe53f8c5f84fdde713798aee5d7f1cf78cb770b9c5.scope.
Oct  9 09:37:36 compute-1 systemd[1]: Started libcrun container.
Oct  9 09:37:36 compute-1 podman[17896]: 2025-10-09 09:37:36.525843784 +0000 UTC m=+0.077423969 container init 94dbe69ac568adcd3a32cebe53f8c5f84fdde713798aee5d7f1cf78cb770b9c5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=frosty_shtern, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid)
Oct  9 09:37:36 compute-1 podman[17896]: 2025-10-09 09:37:36.531017994 +0000 UTC m=+0.082598178 container start 94dbe69ac568adcd3a32cebe53f8c5f84fdde713798aee5d7f1cf78cb770b9c5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=frosty_shtern, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Oct  9 09:37:36 compute-1 podman[17896]: 2025-10-09 09:37:36.532119407 +0000 UTC m=+0.083699591 container attach 94dbe69ac568adcd3a32cebe53f8c5f84fdde713798aee5d7f1cf78cb770b9c5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=frosty_shtern, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct  9 09:37:36 compute-1 frosty_shtern[17910]: 167 167
Oct  9 09:37:36 compute-1 systemd[1]: libpod-94dbe69ac568adcd3a32cebe53f8c5f84fdde713798aee5d7f1cf78cb770b9c5.scope: Deactivated successfully.
Oct  9 09:37:36 compute-1 podman[17896]: 2025-10-09 09:37:36.535309293 +0000 UTC m=+0.086889476 container died 94dbe69ac568adcd3a32cebe53f8c5f84fdde713798aee5d7f1cf78cb770b9c5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=frosty_shtern, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, io.buildah.version=1.40.1, ceph=True, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  9 09:37:36 compute-1 systemd[1]: var-lib-containers-storage-overlay-b653a4a6b71d2881fec17506b17a906a83bfb100db98b19196a8f1544af97ae3-merged.mount: Deactivated successfully.
Oct  9 09:37:36 compute-1 podman[17896]: 2025-10-09 09:37:36.552010099 +0000 UTC m=+0.103590273 container remove 94dbe69ac568adcd3a32cebe53f8c5f84fdde713798aee5d7f1cf78cb770b9c5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=frosty_shtern, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0)
Oct  9 09:37:36 compute-1 podman[17896]: 2025-10-09 09:37:36.463473934 +0000 UTC m=+0.015054128 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  9 09:37:36 compute-1 systemd[1]: libpod-conmon-94dbe69ac568adcd3a32cebe53f8c5f84fdde713798aee5d7f1cf78cb770b9c5.scope: Deactivated successfully.
Oct  9 09:37:36 compute-1 podman[17989]: 2025-10-09 09:37:36.911720224 +0000 UTC m=+0.031179841 container create 8d956999deae9782f6f0fee06de75cd6ced1b6e600af9870896d8d601836f7fb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=friendly_ritchie, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  9 09:37:36 compute-1 systemd[1]: Started libpod-conmon-8d956999deae9782f6f0fee06de75cd6ced1b6e600af9870896d8d601836f7fb.scope.
Oct  9 09:37:36 compute-1 systemd[1]: Started libcrun container.
Oct  9 09:37:36 compute-1 podman[17989]: 2025-10-09 09:37:36.960248029 +0000 UTC m=+0.079707646 container init 8d956999deae9782f6f0fee06de75cd6ced1b6e600af9870896d8d601836f7fb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=friendly_ritchie, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct  9 09:37:36 compute-1 podman[17989]: 2025-10-09 09:37:36.964065976 +0000 UTC m=+0.083525593 container start 8d956999deae9782f6f0fee06de75cd6ced1b6e600af9870896d8d601836f7fb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=friendly_ritchie, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct  9 09:37:36 compute-1 podman[17989]: 2025-10-09 09:37:36.965233123 +0000 UTC m=+0.084692740 container attach 8d956999deae9782f6f0fee06de75cd6ced1b6e600af9870896d8d601836f7fb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=friendly_ritchie, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS)
Oct  9 09:37:36 compute-1 friendly_ritchie[18003]: 167 167
Oct  9 09:37:36 compute-1 systemd[1]: libpod-8d956999deae9782f6f0fee06de75cd6ced1b6e600af9870896d8d601836f7fb.scope: Deactivated successfully.
Oct  9 09:37:36 compute-1 podman[17989]: 2025-10-09 09:37:36.967029153 +0000 UTC m=+0.086488770 container died 8d956999deae9782f6f0fee06de75cd6ced1b6e600af9870896d8d601836f7fb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=friendly_ritchie, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  9 09:37:36 compute-1 systemd[1]: var-lib-containers-storage-overlay-a5aae6e8f51bc76d955cf7132007c269f3aa32ffef0bc1f3dd9df84bd66c950d-merged.mount: Deactivated successfully.
Oct  9 09:37:36 compute-1 podman[17989]: 2025-10-09 09:37:36.984703793 +0000 UTC m=+0.104163411 container remove 8d956999deae9782f6f0fee06de75cd6ced1b6e600af9870896d8d601836f7fb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=friendly_ritchie, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Oct  9 09:37:36 compute-1 podman[17989]: 2025-10-09 09:37:36.898667695 +0000 UTC m=+0.018127311 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  9 09:37:36 compute-1 systemd[1]: libpod-conmon-8d956999deae9782f6f0fee06de75cd6ced1b6e600af9870896d8d601836f7fb.scope: Deactivated successfully.
Oct  9 09:37:37 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:37:37 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:37:37 compute-1 ceph-mon[9795]: Reconfiguring crash.compute-1 (monmap changed)...
Oct  9 09:37:37 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-1", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Oct  9 09:37:37 compute-1 ceph-mon[9795]: Reconfiguring daemon crash.compute-1 on compute-1
Oct  9 09:37:37 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:37:37 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:37:37 compute-1 ceph-mon[9795]: Reconfiguring osd.0 (monmap changed)...
Oct  9 09:37:37 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get", "entity": "osd.0"}]: dispatch
Oct  9 09:37:37 compute-1 ceph-mon[9795]: Reconfiguring daemon osd.0 on compute-1
Oct  9 09:37:37 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:37:37 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:37:37 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Oct  9 09:37:37 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[16876]: 09/10/2025 09:37:37 : epoch 68e78255 : compute-1 : ganesha.nfsd-2[main] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Oct  9 09:37:37 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[16876]: 09/10/2025 09:37:37 : epoch 68e78255 : compute-1 : ganesha.nfsd-2[main] main :NFS STARTUP :WARN :No export entries found in configuration file !!!
Oct  9 09:37:37 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[16876]: 09/10/2025 09:37:37 : epoch 68e78255 : compute-1 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:24): Unknown block (RADOS_URLS)
Oct  9 09:37:37 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[16876]: 09/10/2025 09:37:37 : epoch 68e78255 : compute-1 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:29): Unknown block (RGW)
Oct  9 09:37:37 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[16876]: 09/10/2025 09:37:37 : epoch 68e78255 : compute-1 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :CAP_SYS_RESOURCE was successfully removed for proper quota management in FSAL
Oct  9 09:37:37 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[16876]: 09/10/2025 09:37:37 : epoch 68e78255 : compute-1 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :currently set capabilities are: cap_chown,cap_dac_override,cap_fowner,cap_fsetid,cap_kill,cap_setgid,cap_setuid,cap_setpcap,cap_net_bind_service,cap_sys_chroot,cap_setfcap=ep
Oct  9 09:37:37 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[16876]: 09/10/2025 09:37:37 : epoch 68e78255 : compute-1 : ganesha.nfsd-2[main] gsh_dbus_pkginit :DBUS :CRIT :dbus_bus_get failed (Failed to connect to socket /run/dbus/system_bus_socket: No such file or directory)
Oct  9 09:37:37 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[16876]: 09/10/2025 09:37:37 : epoch 68e78255 : compute-1 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Oct  9 09:37:37 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[16876]: 09/10/2025 09:37:37 : epoch 68e78255 : compute-1 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Oct  9 09:37:37 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[16876]: 09/10/2025 09:37:37 : epoch 68e78255 : compute-1 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Oct  9 09:37:37 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[16876]: 09/10/2025 09:37:37 : epoch 68e78255 : compute-1 : ganesha.nfsd-2[main] nfs_Init_svc :DISP :CRIT :Cannot acquire credentials for principal nfs
Oct  9 09:37:37 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[16876]: 09/10/2025 09:37:37 : epoch 68e78255 : compute-1 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Oct  9 09:37:37 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[16876]: 09/10/2025 09:37:37 : epoch 68e78255 : compute-1 : ganesha.nfsd-2[main] nfs_Init_admin_thread :NFS CB :EVENT :Admin thread initialized
Oct  9 09:37:37 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[16876]: 09/10/2025 09:37:37 : epoch 68e78255 : compute-1 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :EVENT :Callback creds directory (/var/run/ganesha) already exists
Oct  9 09:37:37 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[16876]: 09/10/2025 09:37:37 : epoch 68e78255 : compute-1 : ganesha.nfsd-2[main] find_keytab_entry :NFS CB :WARN :Configuration file does not specify default realm while getting default realm name
Oct  9 09:37:37 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[16876]: 09/10/2025 09:37:37 : epoch 68e78255 : compute-1 : ganesha.nfsd-2[main] gssd_refresh_krb5_machine_credential :NFS CB :CRIT :ERROR: gssd_refresh_krb5_machine_credential: no usable keytab entry found in keytab /etc/krb5.keytab for connection with host localhost
Oct  9 09:37:37 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[16876]: 09/10/2025 09:37:37 : epoch 68e78255 : compute-1 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :WARN :gssd_refresh_krb5_machine_credential failed (-1765328160:0)
Oct  9 09:37:37 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[16876]: 09/10/2025 09:37:37 : epoch 68e78255 : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :Starting delayed executor.
Oct  9 09:37:37 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[16876]: 09/10/2025 09:37:37 : epoch 68e78255 : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :gsh_dbusthread was started successfully
Oct  9 09:37:37 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[16876]: 09/10/2025 09:37:37 : epoch 68e78255 : compute-1 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :CRIT :DBUS not initialized, service thread exiting
Oct  9 09:37:37 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[16876]: 09/10/2025 09:37:37 : epoch 68e78255 : compute-1 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :EVENT :shutdown
Oct  9 09:37:37 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[16876]: 09/10/2025 09:37:37 : epoch 68e78255 : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :admin thread was started successfully
Oct  9 09:37:37 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[16876]: 09/10/2025 09:37:37 : epoch 68e78255 : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :reaper thread was started successfully
Oct  9 09:37:37 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[16876]: 09/10/2025 09:37:37 : epoch 68e78255 : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :General fridge was started successfully
Oct  9 09:37:37 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[16876]: 09/10/2025 09:37:37 : epoch 68e78255 : compute-1 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Oct  9 09:37:37 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[16876]: 09/10/2025 09:37:37 : epoch 68e78255 : compute-1 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :             NFS SERVER INITIALIZED
Oct  9 09:37:37 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[16876]: 09/10/2025 09:37:37 : epoch 68e78255 : compute-1 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Oct  9 09:37:37 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:37:37 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:37:37 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:37:37.302 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:37:37 compute-1 podman[18102]: 2025-10-09 09:37:37.410740647 +0000 UTC m=+0.026506806 container create 787a9eb15ab4d24dffe6547f4a9086fe74456430be1ffc9416fa84e22f6a2d04 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=inspiring_lalande, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1)
Oct  9 09:37:37 compute-1 systemd[1]: Started libpod-conmon-787a9eb15ab4d24dffe6547f4a9086fe74456430be1ffc9416fa84e22f6a2d04.scope.
Oct  9 09:37:37 compute-1 systemd[1]: Started libcrun container.
Oct  9 09:37:37 compute-1 podman[18102]: 2025-10-09 09:37:37.44661226 +0000 UTC m=+0.062378428 container init 787a9eb15ab4d24dffe6547f4a9086fe74456430be1ffc9416fa84e22f6a2d04 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=inspiring_lalande, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, io.buildah.version=1.40.1)
Oct  9 09:37:37 compute-1 podman[18102]: 2025-10-09 09:37:37.45047453 +0000 UTC m=+0.066240688 container start 787a9eb15ab4d24dffe6547f4a9086fe74456430be1ffc9416fa84e22f6a2d04 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=inspiring_lalande, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct  9 09:37:37 compute-1 podman[18102]: 2025-10-09 09:37:37.451604697 +0000 UTC m=+0.067370865 container attach 787a9eb15ab4d24dffe6547f4a9086fe74456430be1ffc9416fa84e22f6a2d04 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=inspiring_lalande, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  9 09:37:37 compute-1 inspiring_lalande[18116]: 167 167
Oct  9 09:37:37 compute-1 systemd[1]: libpod-787a9eb15ab4d24dffe6547f4a9086fe74456430be1ffc9416fa84e22f6a2d04.scope: Deactivated successfully.
Oct  9 09:37:37 compute-1 podman[18102]: 2025-10-09 09:37:37.454307235 +0000 UTC m=+0.070073392 container died 787a9eb15ab4d24dffe6547f4a9086fe74456430be1ffc9416fa84e22f6a2d04 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=inspiring_lalande, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  9 09:37:37 compute-1 podman[18102]: 2025-10-09 09:37:37.472238688 +0000 UTC m=+0.088004846 container remove 787a9eb15ab4d24dffe6547f4a9086fe74456430be1ffc9416fa84e22f6a2d04 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=inspiring_lalande, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=squid)
Oct  9 09:37:37 compute-1 podman[18102]: 2025-10-09 09:37:37.399441348 +0000 UTC m=+0.015207527 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  9 09:37:37 compute-1 systemd[1]: var-lib-containers-storage-overlay-ec81c338c7eadd649349f9a69b40910688d15fe287d3bb55d8ef180f42839dbe-merged.mount: Deactivated successfully.
Oct  9 09:37:37 compute-1 systemd[1]: libpod-conmon-787a9eb15ab4d24dffe6547f4a9086fe74456430be1ffc9416fa84e22f6a2d04.scope: Deactivated successfully.
Oct  9 09:37:37 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[16876]: 09/10/2025 09:37:37 : epoch 68e78255 : compute-1 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fda30000df0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:37:38 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[16876]: 09/10/2025 09:37:38 : epoch 68e78255 : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fda20001950 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:37:38 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:37:38 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:37:38 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:37:38.492 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:37:38 compute-1 ceph-mon[9795]: Reconfiguring mon.compute-1 (monmap changed)...
Oct  9 09:37:38 compute-1 ceph-mon[9795]: Reconfiguring daemon mon.compute-1 on compute-1
Oct  9 09:37:38 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:37:38 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:37:38 compute-1 ceph-mon[9795]: Reconfiguring mon.compute-2 (monmap changed)...
Oct  9 09:37:38 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Oct  9 09:37:38 compute-1 ceph-mon[9795]: Reconfiguring daemon mon.compute-2 on compute-2
Oct  9 09:37:38 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:37:38 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:37:38 compute-1 ceph-mon[9795]: Reconfiguring mgr.compute-2.takdnm (monmap changed)...
Oct  9 09:37:38 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-2.takdnm", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Oct  9 09:37:38 compute-1 ceph-mon[9795]: Reconfiguring daemon mgr.compute-2.takdnm on compute-2
Oct  9 09:37:38 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[16876]: 09/10/2025 09:37:38 : epoch 68e78255 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fda24001e90 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:37:39 compute-1 podman[18236]: 2025-10-09 09:37:39.048847464 +0000 UTC m=+0.038556878 container exec cafaadfcff4fbda41fcfa94e93876b61b17048007232ab1b066948d6a6dac74a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-crash-compute-1, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_REF=squid)
Oct  9 09:37:39 compute-1 podman[18236]: 2025-10-09 09:37:39.124388367 +0000 UTC m=+0.114097771 container exec_died cafaadfcff4fbda41fcfa94e93876b61b17048007232ab1b066948d6a6dac74a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-crash-compute-1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Oct  9 09:37:39 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:37:39 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:37:39 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:37:39.304 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:37:39 compute-1 podman[18331]: 2025-10-09 09:37:39.421802981 +0000 UTC m=+0.034931414 container exec 3deba343924d32000597864cdf8400e1a37662cac2bb278add92b15d21a42d33 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-node-exporter-compute-1, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct  9 09:37:39 compute-1 podman[18331]: 2025-10-09 09:37:39.429904671 +0000 UTC m=+0.043033104 container exec_died 3deba343924d32000597864cdf8400e1a37662cac2bb278add92b15d21a42d33 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-node-exporter-compute-1, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct  9 09:37:39 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:37:39 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:37:39 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "dashboard set-grafana-api-url", "value": "https://192.168.122.100:3000"}]: dispatch
Oct  9 09:37:39 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:37:39 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-haproxy-nfs-cephfs-compute-1-oqhtjo[16451]: [WARNING] 281/093739 (4) : Server backend/nfs.cephfs.0 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Oct  9 09:37:39 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[16876]: 09/10/2025 09:37:39 : epoch 68e78255 : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fda20002270 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:37:39 compute-1 podman[18405]: 2025-10-09 09:37:39.648433717 +0000 UTC m=+0.037481463 container exec 7f982dba46c09e32374946c3615304057f1afed02125098457e6bcb96418ba91 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  9 09:37:39 compute-1 podman[18405]: 2025-10-09 09:37:39.654985999 +0000 UTC m=+0.044033767 container exec_died 7f982dba46c09e32374946c3615304057f1afed02125098457e6bcb96418ba91 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1)
Oct  9 09:37:39 compute-1 podman[18456]: 2025-10-09 09:37:39.808735488 +0000 UTC m=+0.048128014 container exec 67720561c1a99e6e32e330a552a62a5ac4d38a51a1e32f841b990e11458d61b3 (image=quay.io/ceph/haproxy:2.3, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-haproxy-nfs-cephfs-compute-1-oqhtjo)
Oct  9 09:37:39 compute-1 podman[18456]: 2025-10-09 09:37:39.818879051 +0000 UTC m=+0.058271578 container exec_died 67720561c1a99e6e32e330a552a62a5ac4d38a51a1e32f841b990e11458d61b3 (image=quay.io/ceph/haproxy:2.3, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-haproxy-nfs-cephfs-compute-1-oqhtjo)
Oct  9 09:37:39 compute-1 podman[18506]: 2025-10-09 09:37:39.953064825 +0000 UTC m=+0.032993045 container exec 85d12475a64c9e2afed72ac4ba3c0ca1148840282b925163459f0ea258aea5a3 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-1-zabdum, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=Ceph keepalived, summary=Provides keepalived on RHEL 9 for Ceph., maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vendor=Red Hat, Inc., io.buildah.version=1.28.2, io.openshift.expose-services=, name=keepalived, build-date=2023-02-22T09:23:20, com.redhat.component=keepalived-container, release=1793, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, io.k8s.display-name=Keepalived on RHEL 9, vcs-type=git, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, version=2.2.4, description=keepalived for Ceph)
Oct  9 09:37:39 compute-1 podman[18506]: 2025-10-09 09:37:39.963885142 +0000 UTC m=+0.043813351 container exec_died 85d12475a64c9e2afed72ac4ba3c0ca1148840282b925163459f0ea258aea5a3 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-1-zabdum, com.redhat.component=keepalived-container, architecture=x86_64, name=keepalived, release=1793, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, build-date=2023-02-22T09:23:20, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, vcs-type=git, version=2.2.4, description=keepalived for Ceph, io.buildah.version=1.28.2, io.openshift.expose-services=, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides keepalived on RHEL 9 for Ceph., io.k8s.display-name=Keepalived on RHEL 9, io.openshift.tags=Ceph keepalived, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9)
Oct  9 09:37:40 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[16876]: 09/10/2025 09:37:40 : epoch 68e78255 : compute-1 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fda2c001ac0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:37:40 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:37:40 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:37:40 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:37:40.494 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:37:40 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e43 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  9 09:37:40 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[16876]: 09/10/2025 09:37:40 : epoch 68e78255 : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fda20002270 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:37:41 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:37:41 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:37:41 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:37:41 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:37:41 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:37:41 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:37:41 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  9 09:37:41 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:37:41 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:37:41 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  9 09:37:41 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:37:41 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:37:41 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:37:41.307 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:37:41 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[16876]: 09/10/2025 09:37:41 : epoch 68e78255 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fda24002990 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:37:42 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[16876]: 09/10/2025 09:37:42 : epoch 68e78255 : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fda20002f80 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:37:42 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:37:42 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:37:42 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:37:42.497 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:37:42 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[16876]: 09/10/2025 09:37:42 : epoch 68e78255 : compute-1 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fda2c0025c0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:37:43 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:37:43 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:37:43 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:37:43.309 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:37:43 compute-1 kernel: ganesha.nfsd[18078]: segfault at 50 ip 00007fdadf75232e sp 00007fdaad7f9210 error 4 in libntirpc.so.5.8[7fdadf737000+2c000] likely on CPU 2 (core 0, socket 2)
Oct  9 09:37:43 compute-1 kernel: Code: 47 20 66 41 89 86 f2 00 00 00 41 bf 01 00 00 00 b9 40 00 00 00 e9 af fd ff ff 66 90 48 8b 85 f8 00 00 00 48 8b 40 08 4c 8b 28 <45> 8b 65 50 49 8b 75 68 41 8b be 28 02 00 00 b9 40 00 00 00 e8 29
Oct  9 09:37:43 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[16876]: 09/10/2025 09:37:43 : epoch 68e78255 : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fda20002f80 fd 37 proxy ignored for local
Oct  9 09:37:43 compute-1 systemd[1]: Started Process Core Dump (PID 18535/UID 0).
Oct  9 09:37:43 compute-1 ceph-mon[9795]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #16. Immutable memtables: 0.
Oct  9 09:37:43 compute-1 ceph-mon[9795]: rocksdb: (Original Log Time 2025/10/09-09:37:43.816778) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct  9 09:37:43 compute-1 ceph-mon[9795]: rocksdb: [db/flush_job.cc:856] [default] [JOB 5] Flushing memtable with next log file: 16
Oct  9 09:37:43 compute-1 ceph-mon[9795]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760002663816818, "job": 5, "event": "flush_started", "num_memtables": 1, "num_entries": 1462, "num_deletes": 251, "total_data_size": 4339487, "memory_usage": 4409664, "flush_reason": "Manual Compaction"}
Oct  9 09:37:43 compute-1 ceph-mon[9795]: rocksdb: [db/flush_job.cc:885] [default] [JOB 5] Level-0 flush table #17: started
Oct  9 09:37:43 compute-1 ceph-mon[9795]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760002663822808, "cf_name": "default", "job": 5, "event": "table_file_creation", "file_number": 17, "file_size": 2428875, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 5706, "largest_seqno": 7163, "table_properties": {"data_size": 2422768, "index_size": 3178, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1861, "raw_key_size": 14731, "raw_average_key_size": 20, "raw_value_size": 2409516, "raw_average_value_size": 3318, "num_data_blocks": 147, "num_entries": 726, "num_filter_entries": 726, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760002626, "oldest_key_time": 1760002626, "file_creation_time": 1760002663, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "94a5d839-0858-4e7b-94a4-0a54b15338db", "db_session_id": "M9CZJU0HKVV71NP1SGV8", "orig_file_number": 17, "seqno_to_time_mapping": "N/A"}}
Oct  9 09:37:43 compute-1 ceph-mon[9795]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 5] Flush lasted 6324 microseconds, and 3908 cpu microseconds.
Oct  9 09:37:43 compute-1 ceph-mon[9795]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  9 09:37:43 compute-1 ceph-mon[9795]: rocksdb: (Original Log Time 2025/10/09-09:37:43.823106) [db/flush_job.cc:967] [default] [JOB 5] Level-0 flush table #17: 2428875 bytes OK
Oct  9 09:37:43 compute-1 ceph-mon[9795]: rocksdb: (Original Log Time 2025/10/09-09:37:43.823202) [db/memtable_list.cc:519] [default] Level-0 commit table #17 started
Oct  9 09:37:43 compute-1 ceph-mon[9795]: rocksdb: (Original Log Time 2025/10/09-09:37:43.823719) [db/memtable_list.cc:722] [default] Level-0 commit table #17: memtable #1 done
Oct  9 09:37:43 compute-1 ceph-mon[9795]: rocksdb: (Original Log Time 2025/10/09-09:37:43.823730) EVENT_LOG_v1 {"time_micros": 1760002663823727, "job": 5, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct  9 09:37:43 compute-1 ceph-mon[9795]: rocksdb: (Original Log Time 2025/10/09-09:37:43.823742) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct  9 09:37:43 compute-1 ceph-mon[9795]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 5] Try to delete WAL files size 4332194, prev total WAL file size 4332194, number of live WAL files 2.
Oct  9 09:37:43 compute-1 ceph-mon[9795]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000013.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  9 09:37:43 compute-1 ceph-mon[9795]: rocksdb: (Original Log Time 2025/10/09-09:37:43.824948) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730030' seq:72057594037927935, type:22 .. '7061786F7300323532' seq:0, type:0; will stop at (end)
Oct  9 09:37:43 compute-1 ceph-mon[9795]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 6] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct  9 09:37:43 compute-1 ceph-mon[9795]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 5 Base level 0, inputs: [17(2371KB)], [15(11MB)]
Oct  9 09:37:43 compute-1 ceph-mon[9795]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760002663824969, "job": 6, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [17], "files_L6": [15], "score": -1, "input_data_size": 14755144, "oldest_snapshot_seqno": -1}
Oct  9 09:37:43 compute-1 ceph-mon[9795]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 6] Generated table #18: 2699 keys, 13382293 bytes, temperature: kUnknown
Oct  9 09:37:43 compute-1 ceph-mon[9795]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760002663857326, "cf_name": "default", "job": 6, "event": "table_file_creation", "file_number": 18, "file_size": 13382293, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 13360127, "index_size": 14313, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 6789, "raw_key_size": 68577, "raw_average_key_size": 25, "raw_value_size": 13305993, "raw_average_value_size": 4929, "num_data_blocks": 634, "num_entries": 2699, "num_filter_entries": 2699, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760002515, "oldest_key_time": 0, "file_creation_time": 1760002663, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "94a5d839-0858-4e7b-94a4-0a54b15338db", "db_session_id": "M9CZJU0HKVV71NP1SGV8", "orig_file_number": 18, "seqno_to_time_mapping": "N/A"}}
Oct  9 09:37:43 compute-1 ceph-mon[9795]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  9 09:37:43 compute-1 ceph-mon[9795]: rocksdb: (Original Log Time 2025/10/09-09:37:43.857643) [db/compaction/compaction_job.cc:1663] [default] [JOB 6] Compacted 1@0 + 1@6 files to L6 => 13382293 bytes
Oct  9 09:37:43 compute-1 ceph-mon[9795]: rocksdb: (Original Log Time 2025/10/09-09:37:43.858173) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 453.2 rd, 411.0 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.3, 11.8 +0.0 blob) out(12.8 +0.0 blob), read-write-amplify(11.6) write-amplify(5.5) OK, records in: 3225, records dropped: 526 output_compression: NoCompression
Oct  9 09:37:43 compute-1 ceph-mon[9795]: rocksdb: (Original Log Time 2025/10/09-09:37:43.858189) EVENT_LOG_v1 {"time_micros": 1760002663858182, "job": 6, "event": "compaction_finished", "compaction_time_micros": 32560, "compaction_time_cpu_micros": 16979, "output_level": 6, "num_output_files": 1, "total_output_size": 13382293, "num_input_records": 3225, "num_output_records": 2699, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct  9 09:37:43 compute-1 ceph-mon[9795]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000017.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  9 09:37:43 compute-1 ceph-mon[9795]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760002663858924, "job": 6, "event": "table_file_deletion", "file_number": 17}
Oct  9 09:37:43 compute-1 ceph-mon[9795]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000015.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  9 09:37:43 compute-1 ceph-mon[9795]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760002663860354, "job": 6, "event": "table_file_deletion", "file_number": 15}
Oct  9 09:37:43 compute-1 ceph-mon[9795]: rocksdb: (Original Log Time 2025/10/09-09:37:43.824909) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  9 09:37:43 compute-1 ceph-mon[9795]: rocksdb: (Original Log Time 2025/10/09-09:37:43.860468) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  9 09:37:43 compute-1 ceph-mon[9795]: rocksdb: (Original Log Time 2025/10/09-09:37:43.860472) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  9 09:37:43 compute-1 ceph-mon[9795]: rocksdb: (Original Log Time 2025/10/09-09:37:43.860474) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  9 09:37:43 compute-1 ceph-mon[9795]: rocksdb: (Original Log Time 2025/10/09-09:37:43.860475) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  9 09:37:43 compute-1 ceph-mon[9795]: rocksdb: (Original Log Time 2025/10/09-09:37:43.860476) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  9 09:37:44 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:37:44 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:37:44 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:37:44 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:37:44 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:37:44.499 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:37:44 compute-1 systemd-coredump[18536]: Process 16880 (ganesha.nfsd) of user 0 dumped core.#012#012Stack trace of thread 45:#012#0  0x00007fdadf75232e n/a (/usr/lib64/libntirpc.so.5.8 + 0x2232e)#012ELF object binary architecture: AMD x86-64
Oct  9 09:37:44 compute-1 systemd[1]: systemd-coredump@1-18535-0.service: Deactivated successfully.
Oct  9 09:37:44 compute-1 podman[18569]: 2025-10-09 09:37:44.672989095 +0000 UTC m=+0.020896104 container died 7f982dba46c09e32374946c3615304057f1afed02125098457e6bcb96418ba91 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid)
Oct  9 09:37:44 compute-1 systemd[1]: var-lib-containers-storage-overlay-f865182eb2db54c494aef1da904f162734e7f32df2284cc048bfd6f178c04dd7-merged.mount: Deactivated successfully.
Oct  9 09:37:44 compute-1 podman[18569]: 2025-10-09 09:37:44.689516505 +0000 UTC m=+0.037423515 container remove 7f982dba46c09e32374946c3615304057f1afed02125098457e6bcb96418ba91 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  9 09:37:44 compute-1 systemd[1]: ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609@nfs.cephfs.0.0.compute-1.douegr.service: Main process exited, code=exited, status=139/n/a
Oct  9 09:37:44 compute-1 systemd[1]: ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609@nfs.cephfs.0.0.compute-1.douegr.service: Failed with result 'exit-code'.
Oct  9 09:37:45 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:37:45 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:37:45 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:37:45.313 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:37:45 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e43 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  9 09:37:46 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:37:46 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:37:46 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:37:46.502 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:37:47 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:37:47 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:37:47 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:37:47.316 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:37:48 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:37:48 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:37:48 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:37:48.505 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:37:49 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:37:49 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:37:49 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:37:49.319 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:37:49 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-haproxy-nfs-cephfs-compute-1-oqhtjo[16451]: [WARNING] 281/093749 (4) : Server backend/nfs.cephfs.0 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Oct  9 09:37:50 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e44 e44: 3 total, 3 up, 3 in
Oct  9 09:37:50 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"}]: dispatch
Oct  9 09:37:50 compute-1 systemd-logind[798]: New session 22 of user zuul.
Oct  9 09:37:50 compute-1 systemd[1]: Started Session 22 of User zuul.
Oct  9 09:37:50 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:37:50 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:37:50 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:37:50.506 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:37:50 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e44 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  9 09:37:51 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e45 e45: 3 total, 3 up, 3 in
Oct  9 09:37:51 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"}]': finished
Oct  9 09:37:51 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"}]: dispatch
Oct  9 09:37:51 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"}]: dispatch
Oct  9 09:37:51 compute-1 python3.9[18783]: ansible-ansible.legacy.ping Invoked with data=pong
Oct  9 09:37:51 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:37:51 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:37:51 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:37:51.322 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:37:52 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e46 e46: 3 total, 3 up, 3 in
Oct  9 09:37:52 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"}]': finished
Oct  9 09:37:52 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"}]': finished
Oct  9 09:37:52 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"}]: dispatch
Oct  9 09:37:52 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"}]': finished
Oct  9 09:37:52 compute-1 python3.9[18958]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  9 09:37:52 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:37:52 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:37:52 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:37:52.509 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:37:53 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e47 e47: 3 total, 3 up, 3 in
Oct  9 09:37:53 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "16"}]: dispatch
Oct  9 09:37:53 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"}]: dispatch
Oct  9 09:37:53 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"}]: dispatch
Oct  9 09:37:53 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "16"}]': finished
Oct  9 09:37:53 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"}]': finished
Oct  9 09:37:53 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"}]': finished
Oct  9 09:37:53 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"}]: dispatch
Oct  9 09:37:53 compute-1 python3.9[19114]: ansible-ansible.legacy.command Invoked with _raw_params=PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin which growvols#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  9 09:37:53 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:37:53 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:37:53 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:37:53.324 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:37:54 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e48 e48: 3 total, 3 up, 3 in
Oct  9 09:37:54 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"}]': finished
Oct  9 09:37:54 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd pool set", "pool": ".nfs", "var": "pg_num", "val": "32"}]: dispatch
Oct  9 09:37:54 compute-1 python3.9[19268]: ansible-ansible.builtin.stat Invoked with path=/etc/ansible/facts.d/bootc.fact follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  9 09:37:54 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-haproxy-nfs-cephfs-compute-1-oqhtjo[16451]: [WARNING] 281/093754 (4) : Server backend/nfs.cephfs.1 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 1 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Oct  9 09:37:54 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:37:54 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:37:54 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:37:54.512 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:37:54 compute-1 python3.9[19422]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/log/journal setype=var_log_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  9 09:37:54 compute-1 systemd[1]: ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609@nfs.cephfs.0.0.compute-1.douegr.service: Scheduled restart job, restart counter is at 2.
Oct  9 09:37:54 compute-1 systemd[1]: Stopped Ceph nfs.cephfs.0.0.compute-1.douegr for 286f8bf0-da72-5823-9a4e-ac4457d9e609.
Oct  9 09:37:54 compute-1 systemd[1]: Starting Ceph nfs.cephfs.0.0.compute-1.douegr for 286f8bf0-da72-5823-9a4e-ac4457d9e609...
Oct  9 09:37:55 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e49 e49: 3 total, 3 up, 3 in
Oct  9 09:37:55 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 49 pg[7.0( empty local-lis/les=15/16 n=0 ec=15/15 lis/c=15/15 les/c/f=16/16/0 sis=49 pruub=10.453458786s) [0] r=0 lpr=49 pi=[15,49)/1 crt=0'0 mlcod 0'0 active pruub 193.927215576s@ mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [0], acting_primary 0 -> 0, up_primary 0 -> 0, role 0 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 09:37:55 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 49 pg[7.0( empty local-lis/les=15/16 n=0 ec=15/15 lis/c=15/15 les/c/f=16/16/0 sis=49 pruub=10.453458786s) [0] r=0 lpr=49 pi=[15,49)/1 crt=0'0 mlcod 0'0 unknown pruub 193.927215576s@ mbc={}] state<Start>: transitioning to Primary
Oct  9 09:37:55 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"}]: dispatch
Oct  9 09:37:55 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "16"}]: dispatch
Oct  9 09:37:55 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "osd pool set", "pool": ".nfs", "var": "pg_num", "val": "32"}]': finished
Oct  9 09:37:55 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"}]': finished
Oct  9 09:37:55 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "16"}]': finished
Oct  9 09:37:55 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"}]: dispatch
Oct  9 09:37:55 compute-1 podman[19537]: 2025-10-09 09:37:55.142752122 +0000 UTC m=+0.027612848 container create 92d8510d7f5eeffd250cb678b79fb60f427cdb1189e6d98348855aa647b7ea4d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  9 09:37:55 compute-1 systemd[1268]: Created slice User Background Tasks Slice.
Oct  9 09:37:55 compute-1 systemd[1268]: Starting Cleanup of User's Temporary Files and Directories...
Oct  9 09:37:55 compute-1 systemd[11486]: Starting Mark boot as successful...
Oct  9 09:37:55 compute-1 systemd[11486]: Finished Mark boot as successful.
Oct  9 09:37:55 compute-1 systemd[1268]: Finished Cleanup of User's Temporary Files and Directories.
Oct  9 09:37:55 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/18a7a6480f3b33833b923989e2bfc3794283acd4108411bc5cfd7528ec19f604/merged/etc/ganesha supports timestamps until 2038 (0x7fffffff)
Oct  9 09:37:55 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/18a7a6480f3b33833b923989e2bfc3794283acd4108411bc5cfd7528ec19f604/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  9 09:37:55 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/18a7a6480f3b33833b923989e2bfc3794283acd4108411bc5cfd7528ec19f604/merged/etc/ceph/keyring supports timestamps until 2038 (0x7fffffff)
Oct  9 09:37:55 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/18a7a6480f3b33833b923989e2bfc3794283acd4108411bc5cfd7528ec19f604/merged/var/lib/ceph/radosgw/ceph-nfs.cephfs.0.0.compute-1.douegr-rgw/keyring supports timestamps until 2038 (0x7fffffff)
Oct  9 09:37:55 compute-1 podman[19537]: 2025-10-09 09:37:55.179227982 +0000 UTC m=+0.064088718 container init 92d8510d7f5eeffd250cb678b79fb60f427cdb1189e6d98348855aa647b7ea4d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, ceph=True, CEPH_REF=squid)
Oct  9 09:37:55 compute-1 podman[19537]: 2025-10-09 09:37:55.184045459 +0000 UTC m=+0.068906185 container start 92d8510d7f5eeffd250cb678b79fb60f427cdb1189e6d98348855aa647b7ea4d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, ceph=True, CEPH_REF=squid, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct  9 09:37:55 compute-1 bash[19537]: 92d8510d7f5eeffd250cb678b79fb60f427cdb1189e6d98348855aa647b7ea4d
Oct  9 09:37:55 compute-1 podman[19537]: 2025-10-09 09:37:55.131570735 +0000 UTC m=+0.016431481 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  9 09:37:55 compute-1 systemd[1]: Started Ceph nfs.cephfs.0.0.compute-1.douegr for 286f8bf0-da72-5823-9a4e-ac4457d9e609.
Oct  9 09:37:55 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[19552]: 09/10/2025 09:37:55 : epoch 68e78273 : compute-1 : ganesha.nfsd-2[main] init_logging :LOG :NULL :LOG: Setting log level for all components to NIV_EVENT
Oct  9 09:37:55 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[19552]: 09/10/2025 09:37:55 : epoch 68e78273 : compute-1 : ganesha.nfsd-2[main] main :MAIN :EVENT :ganesha.nfsd Starting: Ganesha Version 5.9
Oct  9 09:37:55 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[19552]: 09/10/2025 09:37:55 : epoch 68e78273 : compute-1 : ganesha.nfsd-2[main] nfs_set_param_from_conf :NFS STARTUP :EVENT :Configuration file successfully parsed
Oct  9 09:37:55 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[19552]: 09/10/2025 09:37:55 : epoch 68e78273 : compute-1 : ganesha.nfsd-2[main] monitoring_init :NFS STARTUP :EVENT :Init monitoring at 0.0.0.0:9587
Oct  9 09:37:55 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[19552]: 09/10/2025 09:37:55 : epoch 68e78273 : compute-1 : ganesha.nfsd-2[main] fsal_init_fds_limit :MDCACHE LRU :EVENT :Setting the system-imposed limit on FDs to 1048576.
Oct  9 09:37:55 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[19552]: 09/10/2025 09:37:55 : epoch 68e78273 : compute-1 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :Initializing ID Mapper.
Oct  9 09:37:55 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[19552]: 09/10/2025 09:37:55 : epoch 68e78273 : compute-1 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :ID Mapper successfully initialized.
Oct  9 09:37:55 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[19552]: 09/10/2025 09:37:55 : epoch 68e78273 : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 09:37:55 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:37:55 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000007s ======
Oct  9 09:37:55 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:37:55.326 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000007s
Oct  9 09:37:55 compute-1 python3.9[19667]: ansible-ansible.builtin.service_facts Invoked
Oct  9 09:37:55 compute-1 network[19685]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Oct  9 09:37:55 compute-1 network[19686]: 'network-scripts' will be removed from distribution in near future.
Oct  9 09:37:55 compute-1 network[19687]: It is advised to switch to 'NetworkManager' instead for network management.
Oct  9 09:37:55 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e49 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  9 09:37:56 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e50 e50: 3 total, 3 up, 3 in
Oct  9 09:37:56 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 50 pg[7.1c( empty local-lis/les=15/16 n=0 ec=49/15 lis/c=15/15 les/c/f=16/16/0 sis=49) [0] r=0 lpr=49 pi=[15,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:37:56 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 50 pg[7.1b( empty local-lis/les=15/16 n=0 ec=49/15 lis/c=15/15 les/c/f=16/16/0 sis=49) [0] r=0 lpr=49 pi=[15,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:37:56 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 50 pg[7.1a( empty local-lis/les=15/16 n=0 ec=49/15 lis/c=15/15 les/c/f=16/16/0 sis=49) [0] r=0 lpr=49 pi=[15,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:37:56 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 50 pg[7.19( empty local-lis/les=15/16 n=0 ec=49/15 lis/c=15/15 les/c/f=16/16/0 sis=49) [0] r=0 lpr=49 pi=[15,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:37:56 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 50 pg[7.18( empty local-lis/les=15/16 n=0 ec=49/15 lis/c=15/15 les/c/f=16/16/0 sis=49) [0] r=0 lpr=49 pi=[15,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:37:56 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 50 pg[7.17( empty local-lis/les=15/16 n=0 ec=49/15 lis/c=15/15 les/c/f=16/16/0 sis=49) [0] r=0 lpr=49 pi=[15,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:37:56 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 50 pg[7.15( empty local-lis/les=15/16 n=0 ec=49/15 lis/c=15/15 les/c/f=16/16/0 sis=49) [0] r=0 lpr=49 pi=[15,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:37:56 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 50 pg[7.16( empty local-lis/les=15/16 n=0 ec=49/15 lis/c=15/15 les/c/f=16/16/0 sis=49) [0] r=0 lpr=49 pi=[15,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:37:56 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 50 pg[7.10( empty local-lis/les=15/16 n=0 ec=49/15 lis/c=15/15 les/c/f=16/16/0 sis=49) [0] r=0 lpr=49 pi=[15,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:37:56 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 50 pg[7.1e( empty local-lis/les=15/16 n=0 ec=49/15 lis/c=15/15 les/c/f=16/16/0 sis=49) [0] r=0 lpr=49 pi=[15,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:37:56 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 50 pg[7.f( empty local-lis/les=15/16 n=0 ec=49/15 lis/c=15/15 les/c/f=16/16/0 sis=49) [0] r=0 lpr=49 pi=[15,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:37:56 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 50 pg[7.e( empty local-lis/les=15/16 n=0 ec=49/15 lis/c=15/15 les/c/f=16/16/0 sis=49) [0] r=0 lpr=49 pi=[15,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:37:56 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 50 pg[7.c( empty local-lis/les=15/16 n=0 ec=49/15 lis/c=15/15 les/c/f=16/16/0 sis=49) [0] r=0 lpr=49 pi=[15,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:37:56 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 50 pg[7.a( empty local-lis/les=15/16 n=0 ec=49/15 lis/c=15/15 les/c/f=16/16/0 sis=49) [0] r=0 lpr=49 pi=[15,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:37:56 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 50 pg[7.9( empty local-lis/les=15/16 n=0 ec=49/15 lis/c=15/15 les/c/f=16/16/0 sis=49) [0] r=0 lpr=49 pi=[15,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:37:56 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 50 pg[7.8( empty local-lis/les=15/16 n=0 ec=49/15 lis/c=15/15 les/c/f=16/16/0 sis=49) [0] r=0 lpr=49 pi=[15,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:37:56 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 50 pg[7.4( empty local-lis/les=15/16 n=0 ec=49/15 lis/c=15/15 les/c/f=16/16/0 sis=49) [0] r=0 lpr=49 pi=[15,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:37:56 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 50 pg[7.3( empty local-lis/les=15/16 n=0 ec=49/15 lis/c=15/15 les/c/f=16/16/0 sis=49) [0] r=0 lpr=49 pi=[15,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:37:56 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 50 pg[7.1( empty local-lis/les=15/16 n=0 ec=49/15 lis/c=15/15 les/c/f=16/16/0 sis=49) [0] r=0 lpr=49 pi=[15,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:37:56 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 50 pg[7.b( empty local-lis/les=15/16 n=0 ec=49/15 lis/c=15/15 les/c/f=16/16/0 sis=49) [0] r=0 lpr=49 pi=[15,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:37:56 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 50 pg[7.d( empty local-lis/les=15/16 n=0 ec=49/15 lis/c=15/15 les/c/f=16/16/0 sis=49) [0] r=0 lpr=49 pi=[15,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:37:56 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 50 pg[7.2( empty local-lis/les=15/16 n=0 ec=49/15 lis/c=15/15 les/c/f=16/16/0 sis=49) [0] r=0 lpr=49 pi=[15,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:37:56 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 50 pg[7.7( empty local-lis/les=15/16 n=0 ec=49/15 lis/c=15/15 les/c/f=16/16/0 sis=49) [0] r=0 lpr=49 pi=[15,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:37:56 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 50 pg[7.5( empty local-lis/les=15/16 n=0 ec=49/15 lis/c=15/15 les/c/f=16/16/0 sis=49) [0] r=0 lpr=49 pi=[15,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:37:56 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 50 pg[7.6( empty local-lis/les=15/16 n=0 ec=49/15 lis/c=15/15 les/c/f=16/16/0 sis=49) [0] r=0 lpr=49 pi=[15,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:37:56 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 50 pg[7.14( empty local-lis/les=15/16 n=0 ec=49/15 lis/c=15/15 les/c/f=16/16/0 sis=49) [0] r=0 lpr=49 pi=[15,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:37:56 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 50 pg[7.11( empty local-lis/les=15/16 n=0 ec=49/15 lis/c=15/15 les/c/f=16/16/0 sis=49) [0] r=0 lpr=49 pi=[15,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:37:56 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 50 pg[7.13( empty local-lis/les=15/16 n=0 ec=49/15 lis/c=15/15 les/c/f=16/16/0 sis=49) [0] r=0 lpr=49 pi=[15,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:37:56 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 50 pg[7.12( empty local-lis/les=15/16 n=0 ec=49/15 lis/c=15/15 les/c/f=16/16/0 sis=49) [0] r=0 lpr=49 pi=[15,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:37:56 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 50 pg[7.1f( empty local-lis/les=15/16 n=0 ec=49/15 lis/c=15/15 les/c/f=16/16/0 sis=49) [0] r=0 lpr=49 pi=[15,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:37:56 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 50 pg[7.1d( empty local-lis/les=15/16 n=0 ec=49/15 lis/c=15/15 les/c/f=16/16/0 sis=49) [0] r=0 lpr=49 pi=[15,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:37:56 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 50 pg[7.1c( empty local-lis/les=49/50 n=0 ec=49/15 lis/c=15/15 les/c/f=16/16/0 sis=49) [0] r=0 lpr=49 pi=[15,49)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:37:56 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 50 pg[7.1b( empty local-lis/les=49/50 n=0 ec=49/15 lis/c=15/15 les/c/f=16/16/0 sis=49) [0] r=0 lpr=49 pi=[15,49)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:37:56 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 50 pg[7.1a( empty local-lis/les=49/50 n=0 ec=49/15 lis/c=15/15 les/c/f=16/16/0 sis=49) [0] r=0 lpr=49 pi=[15,49)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:37:56 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 50 pg[7.18( empty local-lis/les=49/50 n=0 ec=49/15 lis/c=15/15 les/c/f=16/16/0 sis=49) [0] r=0 lpr=49 pi=[15,49)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:37:56 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 50 pg[7.19( empty local-lis/les=49/50 n=0 ec=49/15 lis/c=15/15 les/c/f=16/16/0 sis=49) [0] r=0 lpr=49 pi=[15,49)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:37:56 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 50 pg[7.17( empty local-lis/les=49/50 n=0 ec=49/15 lis/c=15/15 les/c/f=16/16/0 sis=49) [0] r=0 lpr=49 pi=[15,49)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:37:56 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 50 pg[7.10( empty local-lis/les=49/50 n=0 ec=49/15 lis/c=15/15 les/c/f=16/16/0 sis=49) [0] r=0 lpr=49 pi=[15,49)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:37:56 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 50 pg[7.16( empty local-lis/les=49/50 n=0 ec=49/15 lis/c=15/15 les/c/f=16/16/0 sis=49) [0] r=0 lpr=49 pi=[15,49)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:37:56 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 50 pg[7.1e( empty local-lis/les=49/50 n=0 ec=49/15 lis/c=15/15 les/c/f=16/16/0 sis=49) [0] r=0 lpr=49 pi=[15,49)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:37:56 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 50 pg[7.f( empty local-lis/les=49/50 n=0 ec=49/15 lis/c=15/15 les/c/f=16/16/0 sis=49) [0] r=0 lpr=49 pi=[15,49)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:37:56 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 50 pg[7.e( empty local-lis/les=49/50 n=0 ec=49/15 lis/c=15/15 les/c/f=16/16/0 sis=49) [0] r=0 lpr=49 pi=[15,49)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:37:56 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 50 pg[7.c( empty local-lis/les=49/50 n=0 ec=49/15 lis/c=15/15 les/c/f=16/16/0 sis=49) [0] r=0 lpr=49 pi=[15,49)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:37:56 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 50 pg[7.a( empty local-lis/les=49/50 n=0 ec=49/15 lis/c=15/15 les/c/f=16/16/0 sis=49) [0] r=0 lpr=49 pi=[15,49)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:37:56 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 50 pg[7.9( empty local-lis/les=49/50 n=0 ec=49/15 lis/c=15/15 les/c/f=16/16/0 sis=49) [0] r=0 lpr=49 pi=[15,49)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:37:56 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 50 pg[7.8( empty local-lis/les=49/50 n=0 ec=49/15 lis/c=15/15 les/c/f=16/16/0 sis=49) [0] r=0 lpr=49 pi=[15,49)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:37:56 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 50 pg[7.3( empty local-lis/les=49/50 n=0 ec=49/15 lis/c=15/15 les/c/f=16/16/0 sis=49) [0] r=0 lpr=49 pi=[15,49)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:37:56 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 50 pg[7.15( empty local-lis/les=49/50 n=0 ec=49/15 lis/c=15/15 les/c/f=16/16/0 sis=49) [0] r=0 lpr=49 pi=[15,49)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:37:56 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 50 pg[7.b( empty local-lis/les=49/50 n=0 ec=49/15 lis/c=15/15 les/c/f=16/16/0 sis=49) [0] r=0 lpr=49 pi=[15,49)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:37:56 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 50 pg[7.4( empty local-lis/les=49/50 n=0 ec=49/15 lis/c=15/15 les/c/f=16/16/0 sis=49) [0] r=0 lpr=49 pi=[15,49)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:37:56 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 50 pg[7.2( empty local-lis/les=49/50 n=0 ec=49/15 lis/c=15/15 les/c/f=16/16/0 sis=49) [0] r=0 lpr=49 pi=[15,49)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:37:56 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 50 pg[7.7( empty local-lis/les=49/50 n=0 ec=49/15 lis/c=15/15 les/c/f=16/16/0 sis=49) [0] r=0 lpr=49 pi=[15,49)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:37:56 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 50 pg[7.d( empty local-lis/les=49/50 n=0 ec=49/15 lis/c=15/15 les/c/f=16/16/0 sis=49) [0] r=0 lpr=49 pi=[15,49)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:37:56 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 50 pg[7.5( empty local-lis/les=49/50 n=0 ec=49/15 lis/c=15/15 les/c/f=16/16/0 sis=49) [0] r=0 lpr=49 pi=[15,49)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:37:56 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 50 pg[7.6( empty local-lis/les=49/50 n=0 ec=49/15 lis/c=15/15 les/c/f=16/16/0 sis=49) [0] r=0 lpr=49 pi=[15,49)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:37:56 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 50 pg[7.1( empty local-lis/les=49/50 n=0 ec=49/15 lis/c=15/15 les/c/f=16/16/0 sis=49) [0] r=0 lpr=49 pi=[15,49)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:37:56 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 50 pg[7.0( empty local-lis/les=49/50 n=0 ec=15/15 lis/c=15/15 les/c/f=16/16/0 sis=49) [0] r=0 lpr=49 pi=[15,49)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:37:56 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 50 pg[7.11( empty local-lis/les=49/50 n=0 ec=49/15 lis/c=15/15 les/c/f=16/16/0 sis=49) [0] r=0 lpr=49 pi=[15,49)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:37:56 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 50 pg[7.13( empty local-lis/les=49/50 n=0 ec=49/15 lis/c=15/15 les/c/f=16/16/0 sis=49) [0] r=0 lpr=49 pi=[15,49)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:37:56 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 50 pg[7.14( empty local-lis/les=49/50 n=0 ec=49/15 lis/c=15/15 les/c/f=16/16/0 sis=49) [0] r=0 lpr=49 pi=[15,49)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:37:56 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 50 pg[7.12( empty local-lis/les=49/50 n=0 ec=49/15 lis/c=15/15 les/c/f=16/16/0 sis=49) [0] r=0 lpr=49 pi=[15,49)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:37:56 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 50 pg[7.1f( empty local-lis/les=49/50 n=0 ec=49/15 lis/c=15/15 les/c/f=16/16/0 sis=49) [0] r=0 lpr=49 pi=[15,49)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:37:56 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 50 pg[7.1d( empty local-lis/les=49/50 n=0 ec=49/15 lis/c=15/15 les/c/f=16/16/0 sis=49) [0] r=0 lpr=49 pi=[15,49)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:37:56 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"}]': finished
Oct  9 09:37:56 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"}]: dispatch
Oct  9 09:37:56 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 7.1c deep-scrub starts
Oct  9 09:37:56 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 7.1c deep-scrub ok
Oct  9 09:37:56 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:37:56 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:37:56 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:37:56.515 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:37:57 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e51 e51: 3 total, 3 up, 3 in
Oct  9 09:37:57 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd pool set", "pool": ".nfs", "var": "pg_num_actual", "val": "32"}]: dispatch
Oct  9 09:37:57 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"}]: dispatch
Oct  9 09:37:57 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"}]': finished
Oct  9 09:37:57 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "osd pool set", "pool": ".nfs", "var": "pg_num_actual", "val": "32"}]': finished
Oct  9 09:37:57 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"}]': finished
Oct  9 09:37:57 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"}]: dispatch
Oct  9 09:37:57 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 7.1b scrub starts
Oct  9 09:37:57 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 7.1b scrub ok
Oct  9 09:37:57 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:37:57 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:37:57 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:37:57.328 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:37:58 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e52 e52: 3 total, 3 up, 3 in
Oct  9 09:37:58 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"}]': finished
Oct  9 09:37:58 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"}]: dispatch
Oct  9 09:37:58 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 7.1a scrub starts
Oct  9 09:37:58 compute-1 python3.9[19951]: ansible-ansible.builtin.lineinfile Invoked with line=cloud-init=disabled path=/proc/cmdline state=present backrefs=False create=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:37:58 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 7.1a scrub ok
Oct  9 09:37:58 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:37:58 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:37:58 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:37:58.517 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:37:58 compute-1 python3.9[20101]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  9 09:37:59 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e53 e53: 3 total, 3 up, 3 in
Oct  9 09:37:59 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"}]: dispatch
Oct  9 09:37:59 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"}]: dispatch
Oct  9 09:37:59 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"}]': finished
Oct  9 09:37:59 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"}]': finished
Oct  9 09:37:59 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"}]': finished
Oct  9 09:37:59 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 7.18 scrub starts
Oct  9 09:37:59 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 7.18 scrub ok
Oct  9 09:37:59 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:37:59 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:37:59 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:37:59.330 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:38:00 compute-1 python3.9[20256]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  9 09:38:00 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e54 e54: 3 total, 3 up, 3 in
Oct  9 09:38:00 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 7.19 scrub starts
Oct  9 09:38:00 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 53 pg[10.0( v 40'1059 (0'0,40'1059] local-lis/les=34/35 n=178 ec=34/34 lis/c=34/34 les/c/f=35/35/0 sis=53 pruub=10.394917488s) [0] r=0 lpr=53 pi=[34,53)/1 crt=40'1059 lcod 40'1058 mlcod 40'1058 active pruub 198.924484253s@ mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [0], acting_primary 0 -> 0, up_primary 0 -> 0, role 0 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 09:38:00 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 7.19 scrub ok
Oct  9 09:38:00 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 54 pg[10.0( v 40'1059 lc 0'0 (0'0,40'1059] local-lis/les=34/35 n=5 ec=34/34 lis/c=34/34 les/c/f=35/35/0 sis=53 pruub=10.394917488s) [0] r=0 lpr=53 pi=[34,53)/1 crt=40'1059 lcod 40'1058 mlcod 0'0 unknown pruub 198.924484253s@ mbc={}] state<Start>: transitioning to Primary
Oct  9 09:38:00 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:38:00 compute-1 ceph-osd[7514]: bluestore(/var/lib/ceph/osd/ceph-0).collection(10.0_head 0x560c9bc81b00) operator()   moving buffer(0x560c9b5087a8 space 0x560c9b2c89d0 0x0~1000 clean)
Oct  9 09:38:00 compute-1 ceph-osd[7514]: bluestore(/var/lib/ceph/osd/ceph-0).collection(10.0_head 0x560c9bc81b00) operator()   moving buffer(0x560c9b29b7e8 space 0x560c9b3b5d50 0x0~1000 clean)
Oct  9 09:38:00 compute-1 ceph-osd[7514]: bluestore(/var/lib/ceph/osd/ceph-0).collection(10.0_head 0x560c9bc81b00) operator()   moving buffer(0x560c9b4f82a8 space 0x560c9b25caa0 0x0~1000 clean)
Oct  9 09:38:00 compute-1 ceph-osd[7514]: bluestore(/var/lib/ceph/osd/ceph-0).collection(10.0_head 0x560c9bc81b00) operator()   moving buffer(0x560c9b4dc8e8 space 0x560c9ae1b050 0x0~1000 clean)
Oct  9 09:38:00 compute-1 ceph-osd[7514]: bluestore(/var/lib/ceph/osd/ceph-0).collection(10.0_head 0x560c9bc81b00) operator()   moving buffer(0x560c9b4f9ba8 space 0x560c9b2cbd50 0x0~1000 clean)
Oct  9 09:38:00 compute-1 ceph-osd[7514]: bluestore(/var/lib/ceph/osd/ceph-0).collection(10.0_head 0x560c9bc81b00) operator()   moving buffer(0x560c9b4dc3e8 space 0x560c9b314b70 0x0~1000 clean)
Oct  9 09:38:00 compute-1 ceph-osd[7514]: bluestore(/var/lib/ceph/osd/ceph-0).collection(10.0_head 0x560c9bc81b00) operator()   moving buffer(0x560c9b4dd608 space 0x560c9b3a3390 0x0~1000 clean)
Oct  9 09:38:00 compute-1 ceph-osd[7514]: bluestore(/var/lib/ceph/osd/ceph-0).collection(10.0_head 0x560c9bc81b00) operator()   moving buffer(0x560c9b4cb1a8 space 0x560c9b3a89d0 0x0~1000 clean)
Oct  9 09:38:00 compute-1 ceph-osd[7514]: bluestore(/var/lib/ceph/osd/ceph-0).collection(10.0_head 0x560c9bc81b00) operator()   moving buffer(0x560c9b4e4528 space 0x560c9b29e760 0x0~1000 clean)
Oct  9 09:38:00 compute-1 ceph-osd[7514]: bluestore(/var/lib/ceph/osd/ceph-0).collection(10.0_head 0x560c9bc81b00) operator()   moving buffer(0x560c9b2422a8 space 0x560c9b281530 0x0~1000 clean)
Oct  9 09:38:00 compute-1 ceph-osd[7514]: bluestore(/var/lib/ceph/osd/ceph-0).collection(10.0_head 0x560c9bc81b00) operator()   moving buffer(0x560c9b4e5ba8 space 0x560c9b3b7460 0x0~1000 clean)
Oct  9 09:38:00 compute-1 ceph-osd[7514]: bluestore(/var/lib/ceph/osd/ceph-0).collection(10.0_head 0x560c9bc81b00) operator()   moving buffer(0x560c9b508fc8 space 0x560c9b3de690 0x0~1000 clean)
Oct  9 09:38:00 compute-1 ceph-osd[7514]: bluestore(/var/lib/ceph/osd/ceph-0).collection(10.0_head 0x560c9bc81b00) operator()   moving buffer(0x560c9b4ca028 space 0x560c9b2c4830 0x0~1000 clean)
Oct  9 09:38:00 compute-1 ceph-osd[7514]: bluestore(/var/lib/ceph/osd/ceph-0).collection(10.0_head 0x560c9bc81b00) operator()   moving buffer(0x560c9b4e4f28 space 0x560c9b4712c0 0x0~1000 clean)
Oct  9 09:38:00 compute-1 ceph-osd[7514]: bluestore(/var/lib/ceph/osd/ceph-0).collection(10.0_head 0x560c9bc81b00) operator()   moving buffer(0x560c9b508de8 space 0x560c9b33dc80 0x0~1000 clean)
Oct  9 09:38:00 compute-1 ceph-osd[7514]: bluestore(/var/lib/ceph/osd/ceph-0).collection(10.0_head 0x560c9bc81b00) operator()   moving buffer(0x560c9b4f88e8 space 0x560c9b2df940 0x0~1000 clean)
Oct  9 09:38:00 compute-1 ceph-osd[7514]: bluestore(/var/lib/ceph/osd/ceph-0).collection(10.0_head 0x560c9bc81b00) operator()   moving buffer(0x560c9b4f8208 space 0x560c9b361600 0x0~1000 clean)
Oct  9 09:38:00 compute-1 ceph-osd[7514]: bluestore(/var/lib/ceph/osd/ceph-0).collection(10.0_head 0x560c9bc81b00) operator()   moving buffer(0x560c9b4cbf68 space 0x560c9b4717a0 0x0~1000 clean)
Oct  9 09:38:00 compute-1 ceph-osd[7514]: bluestore(/var/lib/ceph/osd/ceph-0).collection(10.0_head 0x560c9bc81b00) operator()   moving buffer(0x560c9b4e5568 space 0x560c9b471e20 0x0~1000 clean)
Oct  9 09:38:00 compute-1 ceph-osd[7514]: bluestore(/var/lib/ceph/osd/ceph-0).collection(10.0_head 0x560c9bc81b00) operator()   moving buffer(0x560c9b4f9388 space 0x560c9b33d050 0x0~1000 clean)
Oct  9 09:38:00 compute-1 ceph-osd[7514]: bluestore(/var/lib/ceph/osd/ceph-0).collection(10.0_head 0x560c9bc81b00) operator()   moving buffer(0x560c9b4f97e8 space 0x560c9b3160e0 0x0~1000 clean)
Oct  9 09:38:00 compute-1 ceph-osd[7514]: bluestore(/var/lib/ceph/osd/ceph-0).collection(10.0_head 0x560c9bc81b00) operator()   moving buffer(0x560c9b4e5928 space 0x560c9b3dfef0 0x0~1000 clean)
Oct  9 09:38:00 compute-1 ceph-osd[7514]: bluestore(/var/lib/ceph/osd/ceph-0).collection(10.0_head 0x560c9bc81b00) operator()   moving buffer(0x560c9b4dd568 space 0x560c9b345ae0 0x0~1000 clean)
Oct  9 09:38:00 compute-1 ceph-osd[7514]: bluestore(/var/lib/ceph/osd/ceph-0).collection(10.0_head 0x560c9bc81b00) operator()   moving buffer(0x560c9b4e0528 space 0x560c9b361390 0x0~1000 clean)
Oct  9 09:38:00 compute-1 ceph-osd[7514]: bluestore(/var/lib/ceph/osd/ceph-0).collection(10.0_head 0x560c9bc81b00) operator()   moving buffer(0x560c9b509b08 space 0x560c9b4709d0 0x0~1000 clean)
Oct  9 09:38:00 compute-1 ceph-osd[7514]: bluestore(/var/lib/ceph/osd/ceph-0).collection(10.0_head 0x560c9bc81b00) operator()   moving buffer(0x560c9b4e45c8 space 0x560c9b361050 0x0~1000 clean)
Oct  9 09:38:00 compute-1 ceph-osd[7514]: bluestore(/var/lib/ceph/osd/ceph-0).collection(10.0_head 0x560c9bc81b00) operator()   moving buffer(0x560c9b4e4988 space 0x560c9b275940 0x0~1000 clean)
Oct  9 09:38:00 compute-1 ceph-osd[7514]: bluestore(/var/lib/ceph/osd/ceph-0).collection(10.0_head 0x560c9bc81b00) operator()   moving buffer(0x560c9b4f8f28 space 0x560c9b122010 0x0~1000 clean)
Oct  9 09:38:00 compute-1 ceph-osd[7514]: bluestore(/var/lib/ceph/osd/ceph-0).collection(10.0_head 0x560c9bc81b00) operator()   moving buffer(0x560c9b4caca8 space 0x560c9b3a9e20 0x0~1000 clean)
Oct  9 09:38:00 compute-1 ceph-osd[7514]: bluestore(/var/lib/ceph/osd/ceph-0).collection(10.0_head 0x560c9bc81b00) operator()   moving buffer(0x560c9b509428 space 0x560c9b3df050 0x0~1000 clean)
Oct  9 09:38:00 compute-1 ceph-osd[7514]: bluestore(/var/lib/ceph/osd/ceph-0).collection(10.0_head 0x560c9bc81b00) operator()   moving buffer(0x560c9b4ca7a8 space 0x560c9b376830 0x0~1000 clean)
Oct  9 09:38:00 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 54 pg[10.2( v 40'1059 lc 0'0 (0'0,40'1059] local-lis/les=34/35 n=6 ec=53/34 lis/c=34/34 les/c/f=35/35/0 sis=53) [0] r=0 lpr=53 pi=[34,53)/1 crt=40'1059 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:38:00 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 54 pg[10.3( v 40'1059 lc 0'0 (0'0,40'1059] local-lis/les=34/35 n=6 ec=53/34 lis/c=34/34 les/c/f=35/35/0 sis=53) [0] r=0 lpr=53 pi=[34,53)/1 crt=40'1059 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:38:00 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 54 pg[10.4( v 40'1059 lc 0'0 (0'0,40'1059] local-lis/les=34/35 n=6 ec=53/34 lis/c=34/34 les/c/f=35/35/0 sis=53) [0] r=0 lpr=53 pi=[34,53)/1 crt=40'1059 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:38:00 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 54 pg[10.1( v 40'1059 lc 0'0 (0'0,40'1059] local-lis/les=34/35 n=6 ec=53/34 lis/c=34/34 les/c/f=35/35/0 sis=53) [0] r=0 lpr=53 pi=[34,53)/1 crt=40'1059 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:38:00 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 54 pg[10.5( v 40'1059 lc 0'0 (0'0,40'1059] local-lis/les=34/35 n=6 ec=53/34 lis/c=34/34 les/c/f=35/35/0 sis=53) [0] r=0 lpr=53 pi=[34,53)/1 crt=40'1059 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:38:00 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 54 pg[10.6( v 40'1059 lc 0'0 (0'0,40'1059] local-lis/les=34/35 n=6 ec=53/34 lis/c=34/34 les/c/f=35/35/0 sis=53) [0] r=0 lpr=53 pi=[34,53)/1 crt=40'1059 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:38:00 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 54 pg[10.7( v 40'1059 lc 0'0 (0'0,40'1059] local-lis/les=34/35 n=6 ec=53/34 lis/c=34/34 les/c/f=35/35/0 sis=53) [0] r=0 lpr=53 pi=[34,53)/1 crt=40'1059 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:38:00 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 54 pg[10.8( v 40'1059 lc 0'0 (0'0,40'1059] local-lis/les=34/35 n=6 ec=53/34 lis/c=34/34 les/c/f=35/35/0 sis=53) [0] r=0 lpr=53 pi=[34,53)/1 crt=40'1059 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:38:00 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 54 pg[10.9( v 40'1059 lc 0'0 (0'0,40'1059] local-lis/les=34/35 n=6 ec=53/34 lis/c=34/34 les/c/f=35/35/0 sis=53) [0] r=0 lpr=53 pi=[34,53)/1 crt=40'1059 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:38:00 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 54 pg[10.a( v 40'1059 lc 0'0 (0'0,40'1059] local-lis/les=34/35 n=6 ec=53/34 lis/c=34/34 les/c/f=35/35/0 sis=53) [0] r=0 lpr=53 pi=[34,53)/1 crt=40'1059 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:38:00 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 54 pg[10.b( v 40'1059 lc 0'0 (0'0,40'1059] local-lis/les=34/35 n=6 ec=53/34 lis/c=34/34 les/c/f=35/35/0 sis=53) [0] r=0 lpr=53 pi=[34,53)/1 crt=40'1059 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:38:00 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 54 pg[10.c( v 40'1059 lc 0'0 (0'0,40'1059] local-lis/les=34/35 n=6 ec=53/34 lis/c=34/34 les/c/f=35/35/0 sis=53) [0] r=0 lpr=53 pi=[34,53)/1 crt=40'1059 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:38:00 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 54 pg[10.d( v 40'1059 lc 0'0 (0'0,40'1059] local-lis/les=34/35 n=6 ec=53/34 lis/c=34/34 les/c/f=35/35/0 sis=53) [0] r=0 lpr=53 pi=[34,53)/1 crt=40'1059 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:38:00 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 54 pg[10.e( v 40'1059 lc 0'0 (0'0,40'1059] local-lis/les=34/35 n=6 ec=53/34 lis/c=34/34 les/c/f=35/35/0 sis=53) [0] r=0 lpr=53 pi=[34,53)/1 crt=40'1059 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:38:00 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 54 pg[10.f( v 40'1059 lc 0'0 (0'0,40'1059] local-lis/les=34/35 n=6 ec=53/34 lis/c=34/34 les/c/f=35/35/0 sis=53) [0] r=0 lpr=53 pi=[34,53)/1 crt=40'1059 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:38:00 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 54 pg[10.10( v 40'1059 lc 0'0 (0'0,40'1059] local-lis/les=34/35 n=6 ec=53/34 lis/c=34/34 les/c/f=35/35/0 sis=53) [0] r=0 lpr=53 pi=[34,53)/1 crt=40'1059 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:38:00 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 54 pg[10.11( v 40'1059 lc 0'0 (0'0,40'1059] local-lis/les=34/35 n=6 ec=53/34 lis/c=34/34 les/c/f=35/35/0 sis=53) [0] r=0 lpr=53 pi=[34,53)/1 crt=40'1059 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:38:00 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 54 pg[10.12( v 40'1059 lc 0'0 (0'0,40'1059] local-lis/les=34/35 n=6 ec=53/34 lis/c=34/34 les/c/f=35/35/0 sis=53) [0] r=0 lpr=53 pi=[34,53)/1 crt=40'1059 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:38:00 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 54 pg[10.13( v 40'1059 lc 0'0 (0'0,40'1059] local-lis/les=34/35 n=5 ec=53/34 lis/c=34/34 les/c/f=35/35/0 sis=53) [0] r=0 lpr=53 pi=[34,53)/1 crt=40'1059 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:38:00 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 54 pg[10.14( v 40'1059 lc 0'0 (0'0,40'1059] local-lis/les=34/35 n=5 ec=53/34 lis/c=34/34 les/c/f=35/35/0 sis=53) [0] r=0 lpr=53 pi=[34,53)/1 crt=40'1059 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:38:00 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 54 pg[10.15( v 40'1059 lc 0'0 (0'0,40'1059] local-lis/les=34/35 n=5 ec=53/34 lis/c=34/34 les/c/f=35/35/0 sis=53) [0] r=0 lpr=53 pi=[34,53)/1 crt=40'1059 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:38:00 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 54 pg[10.16( v 40'1059 lc 0'0 (0'0,40'1059] local-lis/les=34/35 n=5 ec=53/34 lis/c=34/34 les/c/f=35/35/0 sis=53) [0] r=0 lpr=53 pi=[34,53)/1 crt=40'1059 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:38:00 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 54 pg[10.17( v 40'1059 lc 0'0 (0'0,40'1059] local-lis/les=34/35 n=5 ec=53/34 lis/c=34/34 les/c/f=35/35/0 sis=53) [0] r=0 lpr=53 pi=[34,53)/1 crt=40'1059 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:38:00 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 54 pg[10.18( v 40'1059 lc 0'0 (0'0,40'1059] local-lis/les=34/35 n=5 ec=53/34 lis/c=34/34 les/c/f=35/35/0 sis=53) [0] r=0 lpr=53 pi=[34,53)/1 crt=40'1059 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:38:00 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 54 pg[10.19( v 40'1059 lc 0'0 (0'0,40'1059] local-lis/les=34/35 n=5 ec=53/34 lis/c=34/34 les/c/f=35/35/0 sis=53) [0] r=0 lpr=53 pi=[34,53)/1 crt=40'1059 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:38:00 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 54 pg[10.1a( v 40'1059 lc 0'0 (0'0,40'1059] local-lis/les=34/35 n=5 ec=53/34 lis/c=34/34 les/c/f=35/35/0 sis=53) [0] r=0 lpr=53 pi=[34,53)/1 crt=40'1059 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:38:00 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 54 pg[10.1b( v 40'1059 lc 0'0 (0'0,40'1059] local-lis/les=34/35 n=5 ec=53/34 lis/c=34/34 les/c/f=35/35/0 sis=53) [0] r=0 lpr=53 pi=[34,53)/1 crt=40'1059 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:38:00 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 54 pg[10.1c( v 40'1059 lc 0'0 (0'0,40'1059] local-lis/les=34/35 n=5 ec=53/34 lis/c=34/34 les/c/f=35/35/0 sis=53) [0] r=0 lpr=53 pi=[34,53)/1 crt=40'1059 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:38:00 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 54 pg[10.1d( v 40'1059 lc 0'0 (0'0,40'1059] local-lis/les=34/35 n=5 ec=53/34 lis/c=34/34 les/c/f=35/35/0 sis=53) [0] r=0 lpr=53 pi=[34,53)/1 crt=40'1059 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:38:00 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 54 pg[10.1e( v 40'1059 lc 0'0 (0'0,40'1059] local-lis/les=34/35 n=5 ec=53/34 lis/c=34/34 les/c/f=35/35/0 sis=53) [0] r=0 lpr=53 pi=[34,53)/1 crt=40'1059 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:38:00 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 54 pg[10.1f( v 40'1059 lc 0'0 (0'0,40'1059] local-lis/les=34/35 n=5 ec=53/34 lis/c=34/34 les/c/f=35/35/0 sis=53) [0] r=0 lpr=53 pi=[34,53)/1 crt=40'1059 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:38:00 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:38:00 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:38:00 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:38:00.518 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:38:00 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e54 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  9 09:38:00 compute-1 python3.9[20414]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct  9 09:38:01 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 7.10 deep-scrub starts
Oct  9 09:38:01 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e55 e55: 3 total, 3 up, 3 in
Oct  9 09:38:01 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 55 pg[12.0( v 40'2 (0'0,40'2] local-lis/les=38/39 n=2 ec=38/38 lis/c=38/38 les/c/f=39/39/0 sis=55 pruub=13.387430191s) [0] r=0 lpr=55 pi=[38,55)/1 crt=40'2 lcod 40'1 mlcod 40'1 active pruub 202.937469482s@ mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [0], acting_primary 0 -> 0, up_primary 0 -> 0, role 0 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 09:38:01 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 7.10 deep-scrub ok
Oct  9 09:38:01 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"}]: dispatch
Oct  9 09:38:01 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 55 pg[12.0( v 40'2 lc 0'0 (0'0,40'2] local-lis/les=38/39 n=0 ec=38/38 lis/c=38/38 les/c/f=39/39/0 sis=55 pruub=13.387430191s) [0] r=0 lpr=55 pi=[38,55)/1 crt=40'2 lcod 40'1 mlcod 0'0 unknown pruub 202.937469482s@ mbc={}] state<Start>: transitioning to Primary
Oct  9 09:38:01 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 55 pg[10.12( v 40'1059 (0'0,40'1059] local-lis/les=53/55 n=6 ec=53/34 lis/c=34/34 les/c/f=35/35/0 sis=53) [0] r=0 lpr=53 pi=[34,53)/1 crt=40'1059 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:38:01 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 55 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=53/55 n=5 ec=53/34 lis/c=34/34 les/c/f=35/35/0 sis=53) [0] r=0 lpr=53 pi=[34,53)/1 crt=40'1059 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:38:01 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 55 pg[10.1c( v 40'1059 (0'0,40'1059] local-lis/les=53/55 n=5 ec=53/34 lis/c=34/34 les/c/f=35/35/0 sis=53) [0] r=0 lpr=53 pi=[34,53)/1 crt=40'1059 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:38:01 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 55 pg[10.19( v 40'1059 (0'0,40'1059] local-lis/les=53/55 n=5 ec=53/34 lis/c=34/34 les/c/f=35/35/0 sis=53) [0] r=0 lpr=53 pi=[34,53)/1 crt=40'1059 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:38:01 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 55 pg[10.b( v 40'1059 (0'0,40'1059] local-lis/les=53/55 n=6 ec=53/34 lis/c=34/34 les/c/f=35/35/0 sis=53) [0] r=0 lpr=53 pi=[34,53)/1 crt=40'1059 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:38:01 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 55 pg[10.8( v 40'1059 (0'0,40'1059] local-lis/les=53/55 n=6 ec=53/34 lis/c=34/34 les/c/f=35/35/0 sis=53) [0] r=0 lpr=53 pi=[34,53)/1 crt=40'1059 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:38:01 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 55 pg[10.d( v 40'1059 (0'0,40'1059] local-lis/les=53/55 n=6 ec=53/34 lis/c=34/34 les/c/f=35/35/0 sis=53) [0] r=0 lpr=53 pi=[34,53)/1 crt=40'1059 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:38:01 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 55 pg[10.0( v 40'1059 (0'0,40'1059] local-lis/les=53/55 n=5 ec=34/34 lis/c=34/34 les/c/f=35/35/0 sis=53) [0] r=0 lpr=53 pi=[34,53)/1 crt=40'1059 lcod 40'1058 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:38:01 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 55 pg[10.6( v 40'1059 (0'0,40'1059] local-lis/les=53/55 n=6 ec=53/34 lis/c=34/34 les/c/f=35/35/0 sis=53) [0] r=0 lpr=53 pi=[34,53)/1 crt=40'1059 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:38:01 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 55 pg[10.1e( v 40'1059 (0'0,40'1059] local-lis/les=53/55 n=5 ec=53/34 lis/c=34/34 les/c/f=35/35/0 sis=53) [0] r=0 lpr=53 pi=[34,53)/1 crt=40'1059 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:38:01 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 55 pg[10.c( v 40'1059 (0'0,40'1059] local-lis/les=53/55 n=6 ec=53/34 lis/c=34/34 les/c/f=35/35/0 sis=53) [0] r=0 lpr=53 pi=[34,53)/1 crt=40'1059 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:38:01 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 55 pg[10.9( v 40'1059 (0'0,40'1059] local-lis/les=53/55 n=6 ec=53/34 lis/c=34/34 les/c/f=35/35/0 sis=53) [0] r=0 lpr=53 pi=[34,53)/1 crt=40'1059 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:38:01 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 55 pg[10.f( v 40'1059 (0'0,40'1059] local-lis/les=53/55 n=6 ec=53/34 lis/c=34/34 les/c/f=35/35/0 sis=53) [0] r=0 lpr=53 pi=[34,53)/1 crt=40'1059 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:38:01 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 55 pg[10.5( v 40'1059 (0'0,40'1059] local-lis/les=53/55 n=6 ec=53/34 lis/c=34/34 les/c/f=35/35/0 sis=53) [0] r=0 lpr=53 pi=[34,53)/1 crt=40'1059 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:38:01 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 55 pg[10.e( v 40'1059 (0'0,40'1059] local-lis/les=53/55 n=6 ec=53/34 lis/c=34/34 les/c/f=35/35/0 sis=53) [0] r=0 lpr=53 pi=[34,53)/1 crt=40'1059 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:38:01 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 55 pg[10.a( v 40'1059 (0'0,40'1059] local-lis/les=53/55 n=6 ec=53/34 lis/c=34/34 les/c/f=35/35/0 sis=53) [0] r=0 lpr=53 pi=[34,53)/1 crt=40'1059 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:38:01 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 55 pg[10.7( v 40'1059 (0'0,40'1059] local-lis/les=53/55 n=6 ec=53/34 lis/c=34/34 les/c/f=35/35/0 sis=53) [0] r=0 lpr=53 pi=[34,53)/1 crt=40'1059 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:38:01 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 55 pg[10.4( v 40'1059 (0'0,40'1059] local-lis/les=53/55 n=6 ec=53/34 lis/c=34/34 les/c/f=35/35/0 sis=53) [0] r=0 lpr=53 pi=[34,53)/1 crt=40'1059 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:38:01 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 55 pg[10.2( v 40'1059 (0'0,40'1059] local-lis/les=53/55 n=6 ec=53/34 lis/c=34/34 les/c/f=35/35/0 sis=53) [0] r=0 lpr=53 pi=[34,53)/1 crt=40'1059 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:38:01 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 55 pg[10.1d( v 40'1059 (0'0,40'1059] local-lis/les=53/55 n=5 ec=53/34 lis/c=34/34 les/c/f=35/35/0 sis=53) [0] r=0 lpr=53 pi=[34,53)/1 crt=40'1059 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:38:01 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 55 pg[10.18( v 40'1059 (0'0,40'1059] local-lis/les=53/55 n=5 ec=53/34 lis/c=34/34 les/c/f=35/35/0 sis=53) [0] r=0 lpr=53 pi=[34,53)/1 crt=40'1059 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:38:01 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 55 pg[10.1( v 40'1059 (0'0,40'1059] local-lis/les=53/55 n=6 ec=53/34 lis/c=34/34 les/c/f=35/35/0 sis=53) [0] r=0 lpr=53 pi=[34,53)/1 crt=40'1059 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:38:01 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 55 pg[10.1a( v 40'1059 (0'0,40'1059] local-lis/les=53/55 n=5 ec=53/34 lis/c=34/34 les/c/f=35/35/0 sis=53) [0] r=0 lpr=53 pi=[34,53)/1 crt=40'1059 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:38:01 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 55 pg[10.1b( v 40'1059 (0'0,40'1059] local-lis/les=53/55 n=5 ec=53/34 lis/c=34/34 les/c/f=35/35/0 sis=53) [0] r=0 lpr=53 pi=[34,53)/1 crt=40'1059 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:38:01 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 55 pg[10.17( v 40'1059 (0'0,40'1059] local-lis/les=53/55 n=5 ec=53/34 lis/c=34/34 les/c/f=35/35/0 sis=53) [0] r=0 lpr=53 pi=[34,53)/1 crt=40'1059 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:38:01 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 55 pg[10.14( v 40'1059 (0'0,40'1059] local-lis/les=53/55 n=5 ec=53/34 lis/c=34/34 les/c/f=35/35/0 sis=53) [0] r=0 lpr=53 pi=[34,53)/1 crt=40'1059 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:38:01 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 55 pg[10.11( v 40'1059 (0'0,40'1059] local-lis/les=53/55 n=6 ec=53/34 lis/c=34/34 les/c/f=35/35/0 sis=53) [0] r=0 lpr=53 pi=[34,53)/1 crt=40'1059 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:38:01 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 55 pg[10.16( v 40'1059 (0'0,40'1059] local-lis/les=53/55 n=5 ec=53/34 lis/c=34/34 les/c/f=35/35/0 sis=53) [0] r=0 lpr=53 pi=[34,53)/1 crt=40'1059 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:38:01 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 55 pg[10.10( v 40'1059 (0'0,40'1059] local-lis/les=53/55 n=6 ec=53/34 lis/c=34/34 les/c/f=35/35/0 sis=53) [0] r=0 lpr=53 pi=[34,53)/1 crt=40'1059 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:38:01 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 55 pg[10.13( v 40'1059 (0'0,40'1059] local-lis/les=53/55 n=5 ec=53/34 lis/c=34/34 les/c/f=35/35/0 sis=53) [0] r=0 lpr=53 pi=[34,53)/1 crt=40'1059 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:38:01 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 55 pg[10.15( v 40'1059 (0'0,40'1059] local-lis/les=53/55 n=5 ec=53/34 lis/c=34/34 les/c/f=35/35/0 sis=53) [0] r=0 lpr=53 pi=[34,53)/1 crt=40'1059 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:38:01 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 55 pg[10.3( v 40'1059 (0'0,40'1059] local-lis/les=53/55 n=6 ec=53/34 lis/c=34/34 les/c/f=35/35/0 sis=53) [0] r=0 lpr=53 pi=[34,53)/1 crt=40'1059 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:38:01 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:38:01 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:38:01 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:38:01.331 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:38:01 compute-1 python3.9[20499]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct  9 09:38:02 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"}]': finished
Oct  9 09:38:02 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 7.16 deep-scrub starts
Oct  9 09:38:02 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e56 e56: 3 total, 3 up, 3 in
Oct  9 09:38:02 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 56 pg[12.14( v 40'2 lc 0'0 (0'0,40'2] local-lis/les=38/39 n=0 ec=55/38 lis/c=38/38 les/c/f=39/39/0 sis=55) [0] r=0 lpr=55 pi=[38,55)/1 crt=40'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:38:02 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 56 pg[12.19( v 40'2 lc 0'0 (0'0,40'2] local-lis/les=38/39 n=0 ec=55/38 lis/c=38/38 les/c/f=39/39/0 sis=55) [0] r=0 lpr=55 pi=[38,55)/1 crt=40'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:38:02 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 56 pg[12.18( v 40'2 lc 0'0 (0'0,40'2] local-lis/les=38/39 n=0 ec=55/38 lis/c=38/38 les/c/f=39/39/0 sis=55) [0] r=0 lpr=55 pi=[38,55)/1 crt=40'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:38:02 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 56 pg[12.1a( v 40'2 lc 0'0 (0'0,40'2] local-lis/les=38/39 n=0 ec=55/38 lis/c=38/38 les/c/f=39/39/0 sis=55) [0] r=0 lpr=55 pi=[38,55)/1 crt=40'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:38:02 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 56 pg[12.d( v 40'2 lc 0'0 (0'0,40'2] local-lis/les=38/39 n=0 ec=55/38 lis/c=38/38 les/c/f=39/39/0 sis=55) [0] r=0 lpr=55 pi=[38,55)/1 crt=40'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:38:02 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 56 pg[12.1f( v 40'2 lc 0'0 (0'0,40'2] local-lis/les=38/39 n=0 ec=55/38 lis/c=38/38 les/c/f=39/39/0 sis=55) [0] r=0 lpr=55 pi=[38,55)/1 crt=40'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:38:02 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 56 pg[12.e( v 40'2 lc 0'0 (0'0,40'2] local-lis/les=38/39 n=0 ec=55/38 lis/c=38/38 les/c/f=39/39/0 sis=55) [0] r=0 lpr=55 pi=[38,55)/1 crt=40'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:38:02 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 56 pg[12.b( v 40'2 lc 0'0 (0'0,40'2] local-lis/les=38/39 n=0 ec=55/38 lis/c=38/38 les/c/f=39/39/0 sis=55) [0] r=0 lpr=55 pi=[38,55)/1 crt=40'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:38:02 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 56 pg[12.c( v 40'2 lc 0'0 (0'0,40'2] local-lis/les=38/39 n=0 ec=55/38 lis/c=38/38 les/c/f=39/39/0 sis=55) [0] r=0 lpr=55 pi=[38,55)/1 crt=40'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:38:02 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 56 pg[12.9( v 40'2 lc 0'0 (0'0,40'2] local-lis/les=38/39 n=0 ec=55/38 lis/c=38/38 les/c/f=39/39/0 sis=55) [0] r=0 lpr=55 pi=[38,55)/1 crt=40'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:38:02 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 56 pg[12.a( v 40'2 lc 0'0 (0'0,40'2] local-lis/les=38/39 n=0 ec=55/38 lis/c=38/38 les/c/f=39/39/0 sis=55) [0] r=0 lpr=55 pi=[38,55)/1 crt=40'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:38:02 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 56 pg[12.6( v 40'2 lc 0'0 (0'0,40'2] local-lis/les=38/39 n=0 ec=55/38 lis/c=38/38 les/c/f=39/39/0 sis=55) [0] r=0 lpr=55 pi=[38,55)/1 crt=40'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:38:02 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 56 pg[12.8( v 40'2 lc 0'0 (0'0,40'2] local-lis/les=38/39 n=0 ec=55/38 lis/c=38/38 les/c/f=39/39/0 sis=55) [0] r=0 lpr=55 pi=[38,55)/1 crt=40'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:38:02 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 56 pg[12.f( v 40'2 lc 0'0 (0'0,40'2] local-lis/les=38/39 n=0 ec=55/38 lis/c=38/38 les/c/f=39/39/0 sis=55) [0] r=0 lpr=55 pi=[38,55)/1 crt=40'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:38:02 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 56 pg[12.3( v 40'2 lc 0'0 (0'0,40'2] local-lis/les=38/39 n=0 ec=55/38 lis/c=38/38 les/c/f=39/39/0 sis=55) [0] r=0 lpr=55 pi=[38,55)/1 crt=40'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:38:02 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 56 pg[12.2( v 40'2 lc 0'0 (0'0,40'2] local-lis/les=38/39 n=1 ec=55/38 lis/c=38/38 les/c/f=39/39/0 sis=55) [0] r=0 lpr=55 pi=[38,55)/1 crt=40'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:38:02 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 56 pg[12.1( v 40'2 lc 0'0 (0'0,40'2] local-lis/les=38/39 n=1 ec=55/38 lis/c=38/38 les/c/f=39/39/0 sis=55) [0] r=0 lpr=55 pi=[38,55)/1 crt=40'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:38:02 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 56 pg[12.7( v 40'2 lc 0'0 (0'0,40'2] local-lis/les=38/39 n=0 ec=55/38 lis/c=38/38 les/c/f=39/39/0 sis=55) [0] r=0 lpr=55 pi=[38,55)/1 crt=40'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:38:02 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 56 pg[12.4( v 40'2 lc 0'0 (0'0,40'2] local-lis/les=38/39 n=0 ec=55/38 lis/c=38/38 les/c/f=39/39/0 sis=55) [0] r=0 lpr=55 pi=[38,55)/1 crt=40'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:38:02 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 56 pg[12.1b( v 40'2 lc 0'0 (0'0,40'2] local-lis/les=38/39 n=0 ec=55/38 lis/c=38/38 les/c/f=39/39/0 sis=55) [0] r=0 lpr=55 pi=[38,55)/1 crt=40'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:38:02 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 56 pg[12.5( v 40'2 lc 0'0 (0'0,40'2] local-lis/les=38/39 n=0 ec=55/38 lis/c=38/38 les/c/f=39/39/0 sis=55) [0] r=0 lpr=55 pi=[38,55)/1 crt=40'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:38:02 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 56 pg[12.1e( v 40'2 lc 0'0 (0'0,40'2] local-lis/les=38/39 n=0 ec=55/38 lis/c=38/38 les/c/f=39/39/0 sis=55) [0] r=0 lpr=55 pi=[38,55)/1 crt=40'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:38:02 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 56 pg[12.1d( v 40'2 lc 0'0 (0'0,40'2] local-lis/les=38/39 n=0 ec=55/38 lis/c=38/38 les/c/f=39/39/0 sis=55) [0] r=0 lpr=55 pi=[38,55)/1 crt=40'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:38:02 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 56 pg[12.1c( v 40'2 lc 0'0 (0'0,40'2] local-lis/les=38/39 n=0 ec=55/38 lis/c=38/38 les/c/f=39/39/0 sis=55) [0] r=0 lpr=55 pi=[38,55)/1 crt=40'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:38:02 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 56 pg[12.13( v 40'2 lc 0'0 (0'0,40'2] local-lis/les=38/39 n=0 ec=55/38 lis/c=38/38 les/c/f=39/39/0 sis=55) [0] r=0 lpr=55 pi=[38,55)/1 crt=40'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:38:02 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 56 pg[12.12( v 40'2 lc 0'0 (0'0,40'2] local-lis/les=38/39 n=0 ec=55/38 lis/c=38/38 les/c/f=39/39/0 sis=55) [0] r=0 lpr=55 pi=[38,55)/1 crt=40'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:38:02 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 56 pg[12.17( v 40'2 lc 0'0 (0'0,40'2] local-lis/les=38/39 n=0 ec=55/38 lis/c=38/38 les/c/f=39/39/0 sis=55) [0] r=0 lpr=55 pi=[38,55)/1 crt=40'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:38:02 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 56 pg[12.16( v 40'2 lc 0'0 (0'0,40'2] local-lis/les=38/39 n=0 ec=55/38 lis/c=38/38 les/c/f=39/39/0 sis=55) [0] r=0 lpr=55 pi=[38,55)/1 crt=40'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:38:02 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 56 pg[12.15( v 40'2 lc 0'0 (0'0,40'2] local-lis/les=38/39 n=0 ec=55/38 lis/c=38/38 les/c/f=39/39/0 sis=55) [0] r=0 lpr=55 pi=[38,55)/1 crt=40'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:38:02 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 56 pg[12.14( v 40'2 (0'0,40'2] local-lis/les=55/56 n=0 ec=55/38 lis/c=38/38 les/c/f=39/39/0 sis=55) [0] r=0 lpr=55 pi=[38,55)/1 crt=40'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:38:02 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 56 pg[12.19( v 40'2 (0'0,40'2] local-lis/les=55/56 n=0 ec=55/38 lis/c=38/38 les/c/f=39/39/0 sis=55) [0] r=0 lpr=55 pi=[38,55)/1 crt=40'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:38:02 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 56 pg[12.18( v 40'2 (0'0,40'2] local-lis/les=55/56 n=0 ec=55/38 lis/c=38/38 les/c/f=39/39/0 sis=55) [0] r=0 lpr=55 pi=[38,55)/1 crt=40'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:38:02 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 56 pg[12.1a( v 40'2 (0'0,40'2] local-lis/les=55/56 n=0 ec=55/38 lis/c=38/38 les/c/f=39/39/0 sis=55) [0] r=0 lpr=55 pi=[38,55)/1 crt=40'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:38:02 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 56 pg[12.10( v 40'2 lc 0'0 (0'0,40'2] local-lis/les=38/39 n=0 ec=55/38 lis/c=38/38 les/c/f=39/39/0 sis=55) [0] r=0 lpr=55 pi=[38,55)/1 crt=40'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:38:02 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 56 pg[12.d( v 40'2 (0'0,40'2] local-lis/les=55/56 n=0 ec=55/38 lis/c=38/38 les/c/f=39/39/0 sis=55) [0] r=0 lpr=55 pi=[38,55)/1 crt=40'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:38:02 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 56 pg[12.b( v 40'2 (0'0,40'2] local-lis/les=55/56 n=0 ec=55/38 lis/c=38/38 les/c/f=39/39/0 sis=55) [0] r=0 lpr=55 pi=[38,55)/1 crt=40'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:38:02 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 56 pg[12.c( v 40'2 (0'0,40'2] local-lis/les=55/56 n=0 ec=55/38 lis/c=38/38 les/c/f=39/39/0 sis=55) [0] r=0 lpr=55 pi=[38,55)/1 crt=40'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:38:02 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 56 pg[12.1f( v 40'2 (0'0,40'2] local-lis/les=55/56 n=0 ec=55/38 lis/c=38/38 les/c/f=39/39/0 sis=55) [0] r=0 lpr=55 pi=[38,55)/1 crt=40'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:38:02 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 56 pg[12.e( v 40'2 (0'0,40'2] local-lis/les=55/56 n=0 ec=55/38 lis/c=38/38 les/c/f=39/39/0 sis=55) [0] r=0 lpr=55 pi=[38,55)/1 crt=40'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:38:02 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 56 pg[12.9( v 40'2 (0'0,40'2] local-lis/les=55/56 n=0 ec=55/38 lis/c=38/38 les/c/f=39/39/0 sis=55) [0] r=0 lpr=55 pi=[38,55)/1 crt=40'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:38:02 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 56 pg[12.0( v 40'2 (0'0,40'2] local-lis/les=55/56 n=0 ec=38/38 lis/c=38/38 les/c/f=39/39/0 sis=55) [0] r=0 lpr=55 pi=[38,55)/1 crt=40'2 lcod 40'1 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:38:02 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 56 pg[12.6( v 40'2 (0'0,40'2] local-lis/les=55/56 n=0 ec=55/38 lis/c=38/38 les/c/f=39/39/0 sis=55) [0] r=0 lpr=55 pi=[38,55)/1 crt=40'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:38:02 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 56 pg[12.a( v 40'2 (0'0,40'2] local-lis/les=55/56 n=0 ec=55/38 lis/c=38/38 les/c/f=39/39/0 sis=55) [0] r=0 lpr=55 pi=[38,55)/1 crt=40'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:38:02 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 56 pg[12.8( v 40'2 (0'0,40'2] local-lis/les=55/56 n=0 ec=55/38 lis/c=38/38 les/c/f=39/39/0 sis=55) [0] r=0 lpr=55 pi=[38,55)/1 crt=40'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:38:02 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 56 pg[12.11( v 40'2 lc 0'0 (0'0,40'2] local-lis/les=38/39 n=0 ec=55/38 lis/c=38/38 les/c/f=39/39/0 sis=55) [0] r=0 lpr=55 pi=[38,55)/1 crt=40'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:38:02 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 56 pg[12.2( v 40'2 (0'0,40'2] local-lis/les=55/56 n=1 ec=55/38 lis/c=38/38 les/c/f=39/39/0 sis=55) [0] r=0 lpr=55 pi=[38,55)/1 crt=40'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:38:02 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 56 pg[12.f( v 40'2 (0'0,40'2] local-lis/les=55/56 n=0 ec=55/38 lis/c=38/38 les/c/f=39/39/0 sis=55) [0] r=0 lpr=55 pi=[38,55)/1 crt=40'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:38:02 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 56 pg[12.1( v 40'2 (0'0,40'2] local-lis/les=55/56 n=1 ec=55/38 lis/c=38/38 les/c/f=39/39/0 sis=55) [0] r=0 lpr=55 pi=[38,55)/1 crt=40'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:38:02 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 56 pg[12.7( v 40'2 (0'0,40'2] local-lis/les=55/56 n=0 ec=55/38 lis/c=38/38 les/c/f=39/39/0 sis=55) [0] r=0 lpr=55 pi=[38,55)/1 crt=40'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:38:02 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 56 pg[12.4( v 40'2 (0'0,40'2] local-lis/les=55/56 n=0 ec=55/38 lis/c=38/38 les/c/f=39/39/0 sis=55) [0] r=0 lpr=55 pi=[38,55)/1 crt=40'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:38:02 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 56 pg[12.1b( v 40'2 (0'0,40'2] local-lis/les=55/56 n=0 ec=55/38 lis/c=38/38 les/c/f=39/39/0 sis=55) [0] r=0 lpr=55 pi=[38,55)/1 crt=40'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:38:02 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 56 pg[12.1e( v 40'2 (0'0,40'2] local-lis/les=55/56 n=0 ec=55/38 lis/c=38/38 les/c/f=39/39/0 sis=55) [0] r=0 lpr=55 pi=[38,55)/1 crt=40'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:38:02 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 56 pg[12.5( v 40'2 (0'0,40'2] local-lis/les=55/56 n=0 ec=55/38 lis/c=38/38 les/c/f=39/39/0 sis=55) [0] r=0 lpr=55 pi=[38,55)/1 crt=40'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:38:02 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 56 pg[12.1d( v 40'2 (0'0,40'2] local-lis/les=55/56 n=0 ec=55/38 lis/c=38/38 les/c/f=39/39/0 sis=55) [0] r=0 lpr=55 pi=[38,55)/1 crt=40'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:38:02 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 56 pg[12.1c( v 40'2 (0'0,40'2] local-lis/les=55/56 n=0 ec=55/38 lis/c=38/38 les/c/f=39/39/0 sis=55) [0] r=0 lpr=55 pi=[38,55)/1 crt=40'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:38:02 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 56 pg[12.13( v 40'2 (0'0,40'2] local-lis/les=55/56 n=0 ec=55/38 lis/c=38/38 les/c/f=39/39/0 sis=55) [0] r=0 lpr=55 pi=[38,55)/1 crt=40'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:38:02 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 56 pg[12.3( v 40'2 (0'0,40'2] local-lis/les=55/56 n=0 ec=55/38 lis/c=38/38 les/c/f=39/39/0 sis=55) [0] r=0 lpr=55 pi=[38,55)/1 crt=40'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:38:02 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 56 pg[12.17( v 40'2 (0'0,40'2] local-lis/les=55/56 n=0 ec=55/38 lis/c=38/38 les/c/f=39/39/0 sis=55) [0] r=0 lpr=55 pi=[38,55)/1 crt=40'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:38:02 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 56 pg[12.12( v 40'2 (0'0,40'2] local-lis/les=55/56 n=0 ec=55/38 lis/c=38/38 les/c/f=39/39/0 sis=55) [0] r=0 lpr=55 pi=[38,55)/1 crt=40'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:38:02 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 56 pg[12.16( v 40'2 (0'0,40'2] local-lis/les=55/56 n=0 ec=55/38 lis/c=38/38 les/c/f=39/39/0 sis=55) [0] r=0 lpr=55 pi=[38,55)/1 crt=40'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:38:02 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 56 pg[12.15( v 40'2 (0'0,40'2] local-lis/les=55/56 n=0 ec=55/38 lis/c=38/38 les/c/f=39/39/0 sis=55) [0] r=0 lpr=55 pi=[38,55)/1 crt=40'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:38:02 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 56 pg[12.10( v 40'2 (0'0,40'2] local-lis/les=55/56 n=0 ec=55/38 lis/c=38/38 les/c/f=39/39/0 sis=55) [0] r=0 lpr=55 pi=[38,55)/1 crt=40'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:38:02 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 56 pg[12.11( v 40'2 (0'0,40'2] local-lis/les=55/56 n=0 ec=55/38 lis/c=38/38 les/c/f=39/39/0 sis=55) [0] r=0 lpr=55 pi=[38,55)/1 crt=40'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:38:02 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 7.16 deep-scrub ok
Oct  9 09:38:02 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:38:02 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:38:02 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:38:02.520 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:38:03 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 7.17 scrub starts
Oct  9 09:38:03 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 7.17 scrub ok
Oct  9 09:38:03 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[19552]: 09/10/2025 09:38:03 : epoch 68e78273 : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 09:38:03 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[19552]: 09/10/2025 09:38:03 : epoch 68e78273 : compute-1 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 09:38:03 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:38:03 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000007s ======
Oct  9 09:38:03 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:38:03.333 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000007s
Oct  9 09:38:04 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 7.f scrub starts
Oct  9 09:38:04 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 7.f scrub ok
Oct  9 09:38:04 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:38:04 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:38:04 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:38:04.523 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:38:05 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 7.c scrub starts
Oct  9 09:38:05 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 7.c scrub ok
Oct  9 09:38:05 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:38:05 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:38:05 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:38:05.336 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:38:05 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e56 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  9 09:38:06 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 7.1e scrub starts
Oct  9 09:38:06 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 7.1e scrub ok
Oct  9 09:38:06 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:38:06 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:38:06 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:38:06.526 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:38:07 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 7.a deep-scrub starts
Oct  9 09:38:07 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 7.a deep-scrub ok
Oct  9 09:38:07 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd pool set", "pool": ".nfs", "var": "pgp_num_actual", "val": "32"}]: dispatch
Oct  9 09:38:07 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"}]: dispatch
Oct  9 09:38:07 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"}]: dispatch
Oct  9 09:38:07 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"}]: dispatch
Oct  9 09:38:07 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "2"}]: dispatch
Oct  9 09:38:07 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"}]: dispatch
Oct  9 09:38:07 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"}]: dispatch
Oct  9 09:38:07 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"}]: dispatch
Oct  9 09:38:07 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"}]: dispatch
Oct  9 09:38:07 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"}]: dispatch
Oct  9 09:38:07 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e57 e57: 3 total, 3 up, 3 in
Oct  9 09:38:07 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 57 pg[7.1f( empty local-lis/les=49/50 n=0 ec=49/15 lis/c=49/49 les/c/f=50/50/0 sis=57 pruub=12.892608643s) [2] r=-1 lpr=57 pi=[49,57)/1 crt=0'0 mlcod 0'0 active pruub 208.491363525s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 09:38:07 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 57 pg[7.1f( empty local-lis/les=49/50 n=0 ec=49/15 lis/c=49/49 les/c/f=50/50/0 sis=57 pruub=12.892586708s) [2] r=-1 lpr=57 pi=[49,57)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 208.491363525s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 09:38:07 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 57 pg[12.18( v 40'2 (0'0,40'2] local-lis/les=55/56 n=0 ec=55/38 lis/c=55/55 les/c/f=56/56/0 sis=57 pruub=10.965552330s) [2] r=-1 lpr=57 pi=[55,57)/1 crt=40'2 lcod 0'0 mlcod 0'0 active pruub 206.564682007s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 09:38:07 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 57 pg[12.18( v 40'2 (0'0,40'2] local-lis/les=55/56 n=0 ec=55/38 lis/c=55/55 les/c/f=56/56/0 sis=57 pruub=10.965538979s) [2] r=-1 lpr=57 pi=[55,57)/1 crt=40'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 206.564682007s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 09:38:07 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 57 pg[7.13( empty local-lis/les=49/50 n=0 ec=49/15 lis/c=49/49 les/c/f=50/50/0 sis=57 pruub=12.892056465s) [1] r=-1 lpr=57 pi=[49,57)/1 crt=0'0 mlcod 0'0 active pruub 208.491317749s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 09:38:07 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 57 pg[12.1a( v 40'2 (0'0,40'2] local-lis/les=55/56 n=0 ec=55/38 lis/c=55/55 les/c/f=56/56/0 sis=57 pruub=10.965418816s) [2] r=-1 lpr=57 pi=[55,57)/1 crt=40'2 lcod 0'0 mlcod 0'0 active pruub 206.564682007s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 09:38:07 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 57 pg[7.13( empty local-lis/les=49/50 n=0 ec=49/15 lis/c=49/49 les/c/f=50/50/0 sis=57 pruub=12.892033577s) [1] r=-1 lpr=57 pi=[49,57)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 208.491317749s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 09:38:07 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 57 pg[12.1a( v 40'2 (0'0,40'2] local-lis/les=55/56 n=0 ec=55/38 lis/c=55/55 les/c/f=56/56/0 sis=57 pruub=10.965390205s) [2] r=-1 lpr=57 pi=[55,57)/1 crt=40'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 206.564682007s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 09:38:07 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 57 pg[7.11( empty local-lis/les=49/50 n=0 ec=49/15 lis/c=49/49 les/c/f=50/50/0 sis=57 pruub=12.891859055s) [2] r=-1 lpr=57 pi=[49,57)/1 crt=0'0 mlcod 0'0 active pruub 208.491317749s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 09:38:07 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 57 pg[7.11( empty local-lis/les=49/50 n=0 ec=49/15 lis/c=49/49 les/c/f=50/50/0 sis=57 pruub=12.891847610s) [2] r=-1 lpr=57 pi=[49,57)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 208.491317749s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 09:38:07 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 57 pg[10.19( v 40'1059 (0'0,40'1059] local-lis/les=53/55 n=5 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=57 pruub=9.954692841s) [2] r=-1 lpr=57 pi=[53,57)/1 crt=40'1059 lcod 0'0 mlcod 0'0 active pruub 205.554214478s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 09:38:07 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 57 pg[10.19( v 40'1059 (0'0,40'1059] local-lis/les=53/55 n=5 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=57 pruub=9.954680443s) [2] r=-1 lpr=57 pi=[53,57)/1 crt=40'1059 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 205.554214478s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 09:38:07 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 57 pg[7.14( empty local-lis/les=49/50 n=0 ec=49/15 lis/c=49/49 les/c/f=50/50/0 sis=57 pruub=12.891757011s) [2] r=-1 lpr=57 pi=[49,57)/1 crt=0'0 mlcod 0'0 active pruub 208.491333008s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 09:38:07 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 57 pg[7.14( empty local-lis/les=49/50 n=0 ec=49/15 lis/c=49/49 les/c/f=50/50/0 sis=57 pruub=12.891747475s) [2] r=-1 lpr=57 pi=[49,57)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 208.491333008s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 09:38:07 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 57 pg[7.6( empty local-lis/les=49/50 n=0 ec=49/15 lis/c=49/49 les/c/f=50/50/0 sis=57 pruub=12.887513161s) [1] r=-1 lpr=57 pi=[49,57)/1 crt=0'0 mlcod 0'0 active pruub 208.487167358s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 09:38:07 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 57 pg[7.6( empty local-lis/les=49/50 n=0 ec=49/15 lis/c=49/49 les/c/f=50/50/0 sis=57 pruub=12.887503624s) [1] r=-1 lpr=57 pi=[49,57)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 208.487167358s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 09:38:07 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 57 pg[12.e( v 40'2 (0'0,40'2] local-lis/les=55/56 n=0 ec=55/38 lis/c=55/55 les/c/f=56/56/0 sis=57 pruub=10.965021133s) [1] r=-1 lpr=57 pi=[55,57)/1 crt=40'2 lcod 0'0 mlcod 0'0 active pruub 206.564788818s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 09:38:07 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 57 pg[10.b( v 40'1059 (0'0,40'1059] local-lis/les=53/55 n=6 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=57 pruub=9.954455376s) [2] r=-1 lpr=57 pi=[53,57)/1 crt=40'1059 lcod 0'0 mlcod 0'0 active pruub 205.554244995s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 09:38:07 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 57 pg[12.e( v 40'2 (0'0,40'2] local-lis/les=55/56 n=0 ec=55/38 lis/c=55/55 les/c/f=56/56/0 sis=57 pruub=10.965010643s) [1] r=-1 lpr=57 pi=[55,57)/1 crt=40'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 206.564788818s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 09:38:07 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 57 pg[10.b( v 40'1059 (0'0,40'1059] local-lis/les=53/55 n=6 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=57 pruub=9.954442978s) [2] r=-1 lpr=57 pi=[53,57)/1 crt=40'1059 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 205.554244995s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 09:38:07 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 57 pg[7.5( empty local-lis/les=49/50 n=0 ec=49/15 lis/c=49/49 les/c/f=50/50/0 sis=57 pruub=12.887171745s) [2] r=-1 lpr=57 pi=[49,57)/1 crt=0'0 mlcod 0'0 active pruub 208.487075806s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 09:38:07 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 57 pg[7.5( empty local-lis/les=49/50 n=0 ec=49/15 lis/c=49/49 les/c/f=50/50/0 sis=57 pruub=12.887162209s) [2] r=-1 lpr=57 pi=[49,57)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 208.487075806s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 09:38:07 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 57 pg[12.b( v 40'2 (0'0,40'2] local-lis/les=55/56 n=0 ec=55/38 lis/c=55/55 les/c/f=56/56/0 sis=57 pruub=10.964696884s) [1] r=-1 lpr=57 pi=[55,57)/1 crt=40'2 lcod 0'0 mlcod 0'0 active pruub 206.564743042s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 09:38:07 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 57 pg[12.b( v 40'2 (0'0,40'2] local-lis/les=55/56 n=0 ec=55/38 lis/c=55/55 les/c/f=56/56/0 sis=57 pruub=10.964684486s) [1] r=-1 lpr=57 pi=[55,57)/1 crt=40'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 206.564743042s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 09:38:07 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 57 pg[10.d( v 40'1059 (0'0,40'1059] local-lis/les=53/55 n=6 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=57 pruub=9.954156876s) [2] r=-1 lpr=57 pi=[53,57)/1 crt=40'1059 lcod 0'0 mlcod 0'0 active pruub 205.554275513s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 09:38:07 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 57 pg[12.c( v 40'2 (0'0,40'2] local-lis/les=55/56 n=0 ec=55/38 lis/c=55/55 les/c/f=56/56/0 sis=57 pruub=10.964576721s) [1] r=-1 lpr=57 pi=[55,57)/1 crt=40'2 lcod 0'0 mlcod 0'0 active pruub 206.564758301s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 09:38:07 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 57 pg[10.d( v 40'1059 (0'0,40'1059] local-lis/les=53/55 n=6 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=57 pruub=9.954094887s) [2] r=-1 lpr=57 pi=[53,57)/1 crt=40'1059 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 205.554275513s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 09:38:07 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 57 pg[12.c( v 40'2 (0'0,40'2] local-lis/les=55/56 n=0 ec=55/38 lis/c=55/55 les/c/f=56/56/0 sis=57 pruub=10.964565277s) [1] r=-1 lpr=57 pi=[55,57)/1 crt=40'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 206.564758301s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 09:38:07 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 57 pg[12.9( v 40'2 (0'0,40'2] local-lis/les=55/56 n=0 ec=55/38 lis/c=55/55 les/c/f=56/56/0 sis=57 pruub=10.964447975s) [2] r=-1 lpr=57 pi=[55,57)/1 crt=40'2 lcod 0'0 mlcod 0'0 active pruub 206.564788818s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 09:38:07 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 57 pg[12.9( v 40'2 (0'0,40'2] local-lis/les=55/56 n=0 ec=55/38 lis/c=55/55 les/c/f=56/56/0 sis=57 pruub=10.964435577s) [2] r=-1 lpr=57 pi=[55,57)/1 crt=40'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 206.564788818s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 09:38:07 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 57 pg[7.2( empty local-lis/les=49/50 n=0 ec=49/15 lis/c=49/49 les/c/f=50/50/0 sis=57 pruub=12.886581421s) [1] r=-1 lpr=57 pi=[49,57)/1 crt=0'0 mlcod 0'0 active pruub 208.487060547s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 09:38:07 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 57 pg[7.2( empty local-lis/les=49/50 n=0 ec=49/15 lis/c=49/49 les/c/f=50/50/0 sis=57 pruub=12.886570930s) [1] r=-1 lpr=57 pi=[49,57)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 208.487060547s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 09:38:07 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 57 pg[12.6( v 40'2 (0'0,40'2] local-lis/les=55/56 n=0 ec=55/38 lis/c=55/55 les/c/f=56/56/0 sis=57 pruub=10.964207649s) [1] r=-1 lpr=57 pi=[55,57)/1 crt=40'2 lcod 0'0 mlcod 0'0 active pruub 206.564819336s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 09:38:07 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 57 pg[10.f( v 40'1059 (0'0,40'1059] local-lis/les=53/55 n=6 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=57 pruub=9.954155922s) [2] r=-1 lpr=57 pi=[53,57)/1 crt=40'1059 lcod 0'0 mlcod 0'0 active pruub 205.554748535s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 09:38:07 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 57 pg[12.6( v 40'2 (0'0,40'2] local-lis/les=55/56 n=0 ec=55/38 lis/c=55/55 les/c/f=56/56/0 sis=57 pruub=10.964196205s) [1] r=-1 lpr=57 pi=[55,57)/1 crt=40'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 206.564819336s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 09:38:07 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 57 pg[10.f( v 40'1059 (0'0,40'1059] local-lis/les=53/55 n=6 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=57 pruub=9.954111099s) [2] r=-1 lpr=57 pi=[53,57)/1 crt=40'1059 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 205.554748535s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 09:38:07 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 57 pg[7.b( empty local-lis/les=49/50 n=0 ec=49/15 lis/c=49/49 les/c/f=50/50/0 sis=57 pruub=12.886161804s) [1] r=-1 lpr=57 pi=[49,57)/1 crt=0'0 mlcod 0'0 active pruub 208.486968994s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 09:38:07 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 57 pg[7.b( empty local-lis/les=49/50 n=0 ec=49/15 lis/c=49/49 les/c/f=50/50/0 sis=57 pruub=12.886114120s) [1] r=-1 lpr=57 pi=[49,57)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 208.486968994s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 09:38:07 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 57 pg[12.8( v 40'2 (0'0,40'2] local-lis/les=55/56 n=0 ec=55/38 lis/c=55/55 les/c/f=56/56/0 sis=57 pruub=10.963779449s) [1] r=-1 lpr=57 pi=[55,57)/1 crt=40'2 lcod 0'0 mlcod 0'0 active pruub 206.564849854s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 09:38:07 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 57 pg[12.8( v 40'2 (0'0,40'2] local-lis/les=55/56 n=0 ec=55/38 lis/c=55/55 les/c/f=56/56/0 sis=57 pruub=10.963766098s) [1] r=-1 lpr=57 pi=[55,57)/1 crt=40'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 206.564849854s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 09:38:07 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 57 pg[12.a( v 40'2 (0'0,40'2] local-lis/les=55/56 n=0 ec=55/38 lis/c=55/55 les/c/f=56/56/0 sis=57 pruub=10.963717461s) [1] r=-1 lpr=57 pi=[55,57)/1 crt=40'2 lcod 0'0 mlcod 0'0 active pruub 206.564834595s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 09:38:07 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 57 pg[12.a( v 40'2 (0'0,40'2] local-lis/les=55/56 n=0 ec=55/38 lis/c=55/55 les/c/f=56/56/0 sis=57 pruub=10.963700294s) [1] r=-1 lpr=57 pi=[55,57)/1 crt=40'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 206.564834595s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 09:38:07 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 57 pg[7.3( empty local-lis/les=49/50 n=0 ec=49/15 lis/c=49/49 les/c/f=50/50/0 sis=57 pruub=12.885774612s) [1] r=-1 lpr=57 pi=[49,57)/1 crt=0'0 mlcod 0'0 active pruub 208.486968994s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 09:38:07 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 57 pg[7.3( empty local-lis/les=49/50 n=0 ec=49/15 lis/c=49/49 les/c/f=50/50/0 sis=57 pruub=12.885764122s) [1] r=-1 lpr=57 pi=[49,57)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 208.486968994s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 09:38:07 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 57 pg[10.9( v 40'1059 (0'0,40'1059] local-lis/les=53/55 n=6 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=57 pruub=9.953328133s) [2] r=-1 lpr=57 pi=[53,57)/1 crt=40'1059 lcod 0'0 mlcod 0'0 active pruub 205.554748535s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 09:38:07 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 57 pg[10.9( v 40'1059 (0'0,40'1059] local-lis/les=53/55 n=6 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=57 pruub=9.953315735s) [2] r=-1 lpr=57 pi=[53,57)/1 crt=40'1059 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 205.554748535s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 09:38:07 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 57 pg[12.3( v 40'2 (0'0,40'2] local-lis/les=55/56 n=0 ec=55/38 lis/c=55/55 les/c/f=56/56/0 sis=57 pruub=10.964341164s) [2] r=-1 lpr=57 pi=[55,57)/1 crt=40'2 lcod 0'0 mlcod 0'0 active pruub 206.565902710s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 09:38:07 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 57 pg[12.3( v 40'2 (0'0,40'2] local-lis/les=55/56 n=0 ec=55/38 lis/c=55/55 les/c/f=56/56/0 sis=57 pruub=10.964330673s) [2] r=-1 lpr=57 pi=[55,57)/1 crt=40'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 206.565902710s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 09:38:07 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 57 pg[7.4( empty local-lis/les=49/50 n=0 ec=49/15 lis/c=49/49 les/c/f=50/50/0 sis=57 pruub=12.885399818s) [1] r=-1 lpr=57 pi=[49,57)/1 crt=0'0 mlcod 0'0 active pruub 208.486984253s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 09:38:07 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 57 pg[7.4( empty local-lis/les=49/50 n=0 ec=49/15 lis/c=49/49 les/c/f=50/50/0 sis=57 pruub=12.885383606s) [1] r=-1 lpr=57 pi=[49,57)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 208.486984253s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 09:38:07 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 57 pg[7.8( empty local-lis/les=49/50 n=0 ec=49/15 lis/c=49/49 les/c/f=50/50/0 sis=57 pruub=12.885258675s) [1] r=-1 lpr=57 pi=[49,57)/1 crt=0'0 mlcod 0'0 active pruub 208.486892700s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 09:38:07 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 57 pg[7.8( empty local-lis/les=49/50 n=0 ec=49/15 lis/c=49/49 les/c/f=50/50/0 sis=57 pruub=12.885251045s) [1] r=-1 lpr=57 pi=[49,57)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 208.486892700s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 09:38:07 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 57 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=53/55 n=5 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=57 pruub=9.952424049s) [2] r=-1 lpr=57 pi=[53,57)/1 crt=40'1059 lcod 0'0 mlcod 0'0 active pruub 205.554168701s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 09:38:07 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 57 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=53/55 n=5 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=57 pruub=9.952407837s) [2] r=-1 lpr=57 pi=[53,57)/1 crt=40'1059 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 205.554168701s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 09:38:07 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 57 pg[10.5( v 56'1062 (0'0,56'1062] local-lis/les=53/55 n=6 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=57 pruub=9.952911377s) [2] r=-1 lpr=57 pi=[53,57)/1 crt=56'1060 lcod 56'1061 mlcod 56'1061 active pruub 205.554779053s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 09:38:07 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 57 pg[12.2( v 40'2 (0'0,40'2] local-lis/les=55/56 n=1 ec=55/38 lis/c=55/55 les/c/f=56/56/0 sis=57 pruub=10.963657379s) [2] r=-1 lpr=57 pi=[55,57)/1 crt=40'2 lcod 0'0 mlcod 0'0 active pruub 206.565536499s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 09:38:07 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 57 pg[12.2( v 40'2 (0'0,40'2] local-lis/les=55/56 n=1 ec=55/38 lis/c=55/55 les/c/f=56/56/0 sis=57 pruub=10.963648796s) [2] r=-1 lpr=57 pi=[55,57)/1 crt=40'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 206.565536499s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 09:38:07 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 57 pg[7.9( empty local-lis/les=49/50 n=0 ec=49/15 lis/c=49/49 les/c/f=50/50/0 sis=57 pruub=12.884939194s) [1] r=-1 lpr=57 pi=[49,57)/1 crt=0'0 mlcod 0'0 active pruub 208.486892700s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 09:38:07 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 57 pg[7.9( empty local-lis/les=49/50 n=0 ec=49/15 lis/c=49/49 les/c/f=50/50/0 sis=57 pruub=12.884916306s) [1] r=-1 lpr=57 pi=[49,57)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 208.486892700s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 09:38:07 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 57 pg[7.a( empty local-lis/les=49/50 n=0 ec=49/15 lis/c=49/49 les/c/f=50/50/0 sis=57 pruub=12.884834290s) [2] r=-1 lpr=57 pi=[49,57)/1 crt=0'0 mlcod 0'0 active pruub 208.486892700s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 09:38:07 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 57 pg[10.5( v 56'1062 (0'0,56'1062] local-lis/les=53/55 n=6 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=57 pruub=9.952721596s) [2] r=-1 lpr=57 pi=[53,57)/1 crt=56'1060 lcod 56'1061 mlcod 0'0 unknown NOTIFY pruub 205.554779053s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 09:38:07 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 57 pg[7.a( empty local-lis/les=49/50 n=0 ec=49/15 lis/c=49/49 les/c/f=50/50/0 sis=57 pruub=12.884822845s) [2] r=-1 lpr=57 pi=[49,57)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 208.486892700s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 09:38:07 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 57 pg[10.7( v 40'1059 (0'0,40'1059] local-lis/les=53/55 n=6 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=57 pruub=9.952664375s) [2] r=-1 lpr=57 pi=[53,57)/1 crt=40'1059 lcod 0'0 mlcod 0'0 active pruub 205.554794312s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 09:38:07 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 57 pg[10.7( v 40'1059 (0'0,40'1059] local-lis/les=53/55 n=6 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=57 pruub=9.952656746s) [2] r=-1 lpr=57 pi=[53,57)/1 crt=40'1059 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 205.554794312s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 09:38:07 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 57 pg[10.1( v 40'1059 (0'0,40'1059] local-lis/les=53/55 n=6 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=57 pruub=9.952675819s) [2] r=-1 lpr=57 pi=[53,57)/1 crt=40'1059 lcod 0'0 mlcod 0'0 active pruub 205.554885864s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 09:38:07 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 57 pg[12.7( v 40'2 (0'0,40'2] local-lis/les=55/56 n=0 ec=55/38 lis/c=55/55 les/c/f=56/56/0 sis=57 pruub=10.963363647s) [2] r=-1 lpr=57 pi=[55,57)/1 crt=40'2 lcod 0'0 mlcod 0'0 active pruub 206.565582275s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 09:38:07 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 57 pg[10.1( v 40'1059 (0'0,40'1059] local-lis/les=53/55 n=6 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=57 pruub=9.952667236s) [2] r=-1 lpr=57 pi=[53,57)/1 crt=40'1059 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 205.554885864s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 09:38:07 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 57 pg[7.e( empty local-lis/les=49/50 n=0 ec=49/15 lis/c=49/49 les/c/f=50/50/0 sis=57 pruub=12.884579659s) [1] r=-1 lpr=57 pi=[49,57)/1 crt=0'0 mlcod 0'0 active pruub 208.486877441s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 09:38:07 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 57 pg[7.e( empty local-lis/les=49/50 n=0 ec=49/15 lis/c=49/49 les/c/f=50/50/0 sis=57 pruub=12.884572029s) [1] r=-1 lpr=57 pi=[49,57)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 208.486877441s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 09:38:07 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 57 pg[12.7( v 40'2 (0'0,40'2] local-lis/les=55/56 n=0 ec=55/38 lis/c=55/55 les/c/f=56/56/0 sis=57 pruub=10.963351250s) [2] r=-1 lpr=57 pi=[55,57)/1 crt=40'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 206.565582275s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 09:38:07 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 57 pg[10.3( v 40'1059 (0'0,40'1059] local-lis/les=53/55 n=6 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=57 pruub=9.952442169s) [2] r=-1 lpr=57 pi=[53,57)/1 crt=40'1059 lcod 0'0 mlcod 0'0 active pruub 205.554809570s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 09:38:07 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 57 pg[10.3( v 40'1059 (0'0,40'1059] local-lis/les=53/55 n=6 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=57 pruub=9.952435493s) [2] r=-1 lpr=57 pi=[53,57)/1 crt=40'1059 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 205.554809570s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 09:38:07 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 57 pg[12.4( v 40'2 (0'0,40'2] local-lis/les=55/56 n=0 ec=55/38 lis/c=55/55 les/c/f=56/56/0 sis=57 pruub=10.963177681s) [2] r=-1 lpr=57 pi=[55,57)/1 crt=40'2 lcod 0'0 mlcod 0'0 active pruub 206.565612793s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 09:38:07 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 57 pg[12.4( v 40'2 (0'0,40'2] local-lis/les=55/56 n=0 ec=55/38 lis/c=55/55 les/c/f=56/56/0 sis=57 pruub=10.963169098s) [2] r=-1 lpr=57 pi=[55,57)/1 crt=40'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 206.565612793s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 09:38:07 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 57 pg[10.1d( v 40'1059 (0'0,40'1059] local-lis/les=53/55 n=5 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=57 pruub=9.952333450s) [2] r=-1 lpr=57 pi=[53,57)/1 crt=40'1059 lcod 0'0 mlcod 0'0 active pruub 205.554840088s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 09:38:07 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 57 pg[10.1d( v 40'1059 (0'0,40'1059] local-lis/les=53/55 n=5 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=57 pruub=9.952325821s) [2] r=-1 lpr=57 pi=[53,57)/1 crt=40'1059 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 205.554840088s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 09:38:07 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 57 pg[7.10( empty local-lis/les=49/50 n=0 ec=49/15 lis/c=49/49 les/c/f=50/50/0 sis=57 pruub=12.884097099s) [1] r=-1 lpr=57 pi=[49,57)/1 crt=0'0 mlcod 0'0 active pruub 208.486679077s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 09:38:07 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 57 pg[7.10( empty local-lis/les=49/50 n=0 ec=49/15 lis/c=49/49 les/c/f=50/50/0 sis=57 pruub=12.884088516s) [1] r=-1 lpr=57 pi=[49,57)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 208.486679077s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 09:38:07 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 57 pg[7.f( empty local-lis/les=49/50 n=0 ec=49/15 lis/c=49/49 les/c/f=50/50/0 sis=57 pruub=12.884186745s) [1] r=-1 lpr=57 pi=[49,57)/1 crt=0'0 mlcod 0'0 active pruub 208.486816406s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 09:38:07 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 57 pg[7.f( empty local-lis/les=49/50 n=0 ec=49/15 lis/c=49/49 les/c/f=50/50/0 sis=57 pruub=12.883908272s) [1] r=-1 lpr=57 pi=[49,57)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 208.486816406s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 09:38:07 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 57 pg[12.1d( v 40'2 (0'0,40'2] local-lis/les=55/56 n=0 ec=55/38 lis/c=55/55 les/c/f=56/56/0 sis=57 pruub=10.962710381s) [2] r=-1 lpr=57 pi=[55,57)/1 crt=40'2 lcod 0'0 mlcod 0'0 active pruub 206.565780640s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 09:38:07 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 57 pg[12.1d( v 40'2 (0'0,40'2] local-lis/les=55/56 n=0 ec=55/38 lis/c=55/55 les/c/f=56/56/0 sis=57 pruub=10.962698936s) [2] r=-1 lpr=57 pi=[55,57)/1 crt=40'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 206.565780640s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 09:38:07 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 57 pg[10.1b( v 40'1059 (0'0,40'1059] local-lis/les=53/55 n=5 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=57 pruub=9.951692581s) [2] r=-1 lpr=57 pi=[53,57)/1 crt=40'1059 lcod 0'0 mlcod 0'0 active pruub 205.554916382s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 09:38:07 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 57 pg[10.1b( v 40'1059 (0'0,40'1059] local-lis/les=53/55 n=5 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=57 pruub=9.951681137s) [2] r=-1 lpr=57 pi=[53,57)/1 crt=40'1059 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 205.554916382s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 09:38:07 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 57 pg[12.1e( v 40'2 (0'0,40'2] local-lis/les=55/56 n=0 ec=55/38 lis/c=55/55 les/c/f=56/56/0 sis=57 pruub=10.962811470s) [2] r=-1 lpr=57 pi=[55,57)/1 crt=40'2 lcod 0'0 mlcod 0'0 active pruub 206.565750122s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 09:38:07 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 57 pg[7.16( empty local-lis/les=49/50 n=0 ec=49/15 lis/c=49/49 les/c/f=50/50/0 sis=57 pruub=12.883295059s) [2] r=-1 lpr=57 pi=[49,57)/1 crt=0'0 mlcod 0'0 active pruub 208.486816406s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 09:38:07 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 57 pg[12.1c( v 40'2 (0'0,40'2] local-lis/les=55/56 n=0 ec=55/38 lis/c=55/55 les/c/f=56/56/0 sis=57 pruub=10.962224960s) [1] r=-1 lpr=57 pi=[55,57)/1 crt=40'2 lcod 0'0 mlcod 0'0 active pruub 206.565780640s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 09:38:07 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 57 pg[12.1c( v 40'2 (0'0,40'2] local-lis/les=55/56 n=0 ec=55/38 lis/c=55/55 les/c/f=56/56/0 sis=57 pruub=10.962195396s) [1] r=-1 lpr=57 pi=[55,57)/1 crt=40'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 206.565780640s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 09:38:07 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 57 pg[7.16( empty local-lis/les=49/50 n=0 ec=49/15 lis/c=49/49 les/c/f=50/50/0 sis=57 pruub=12.883276939s) [2] r=-1 lpr=57 pi=[49,57)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 208.486816406s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 09:38:07 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 57 pg[12.13( v 40'2 (0'0,40'2] local-lis/les=55/56 n=0 ec=55/38 lis/c=55/55 les/c/f=56/56/0 sis=57 pruub=10.961986542s) [2] r=-1 lpr=57 pi=[55,57)/1 crt=40'2 lcod 0'0 mlcod 0'0 active pruub 206.565795898s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 09:38:07 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 57 pg[12.13( v 40'2 (0'0,40'2] local-lis/les=55/56 n=0 ec=55/38 lis/c=55/55 les/c/f=56/56/0 sis=57 pruub=10.961974144s) [2] r=-1 lpr=57 pi=[55,57)/1 crt=40'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 206.565795898s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 09:38:07 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 57 pg[10.15( v 40'1059 (0'0,40'1059] local-lis/les=53/55 n=5 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=57 pruub=9.951109886s) [2] r=-1 lpr=57 pi=[53,57)/1 crt=40'1059 lcod 0'0 mlcod 0'0 active pruub 205.555023193s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 09:38:07 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 57 pg[10.15( v 40'1059 (0'0,40'1059] local-lis/les=53/55 n=5 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=57 pruub=9.951101303s) [2] r=-1 lpr=57 pi=[53,57)/1 crt=40'1059 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 205.555023193s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 09:38:07 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 57 pg[12.19( v 40'2 (0'0,40'2] local-lis/les=55/56 n=0 ec=55/38 lis/c=55/55 les/c/f=56/56/0 sis=57 pruub=10.960655212s) [1] r=-1 lpr=57 pi=[55,57)/1 crt=40'2 lcod 0'0 mlcod 0'0 active pruub 206.564666748s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 09:38:07 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 57 pg[7.18( empty local-lis/les=49/50 n=0 ec=49/15 lis/c=49/49 les/c/f=50/50/0 sis=57 pruub=12.882598877s) [1] r=-1 lpr=57 pi=[49,57)/1 crt=0'0 mlcod 0'0 active pruub 208.486679077s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 09:38:07 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 57 pg[12.19( v 40'2 (0'0,40'2] local-lis/les=55/56 n=0 ec=55/38 lis/c=55/55 les/c/f=56/56/0 sis=57 pruub=10.960598946s) [1] r=-1 lpr=57 pi=[55,57)/1 crt=40'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 206.564666748s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 09:38:07 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 57 pg[7.18( empty local-lis/les=49/50 n=0 ec=49/15 lis/c=49/49 les/c/f=50/50/0 sis=57 pruub=12.882572174s) [1] r=-1 lpr=57 pi=[49,57)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 208.486679077s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 09:38:07 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 57 pg[12.11( v 40'2 (0'0,40'2] local-lis/les=55/56 n=0 ec=55/38 lis/c=55/55 les/c/f=56/56/0 sis=57 pruub=10.962583542s) [2] r=-1 lpr=57 pi=[55,57)/1 crt=40'2 lcod 0'0 mlcod 0'0 active pruub 206.566802979s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 09:38:07 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 57 pg[12.11( v 40'2 (0'0,40'2] local-lis/les=55/56 n=0 ec=55/38 lis/c=55/55 les/c/f=56/56/0 sis=57 pruub=10.962574005s) [2] r=-1 lpr=57 pi=[55,57)/1 crt=40'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 206.566802979s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 09:38:07 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 57 pg[10.17( v 40'1059 (0'0,40'1059] local-lis/les=53/55 n=5 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=57 pruub=9.950589180s) [2] r=-1 lpr=57 pi=[53,57)/1 crt=40'1059 lcod 0'0 mlcod 0'0 active pruub 205.554931641s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 09:38:07 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 57 pg[10.17( v 40'1059 (0'0,40'1059] local-lis/les=53/55 n=5 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=57 pruub=9.950563431s) [2] r=-1 lpr=57 pi=[53,57)/1 crt=40'1059 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 205.554931641s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 09:38:07 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 57 pg[12.12( v 40'2 (0'0,40'2] local-lis/les=55/56 n=0 ec=55/38 lis/c=55/55 les/c/f=56/56/0 sis=57 pruub=10.962208748s) [1] r=-1 lpr=57 pi=[55,57)/1 crt=40'2 lcod 0'0 mlcod 0'0 active pruub 206.566635132s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 09:38:07 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 57 pg[12.12( v 40'2 (0'0,40'2] local-lis/les=55/56 n=0 ec=55/38 lis/c=55/55 les/c/f=56/56/0 sis=57 pruub=10.962192535s) [1] r=-1 lpr=57 pi=[55,57)/1 crt=40'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 206.566635132s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 09:38:07 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 57 pg[12.10( v 40'2 (0'0,40'2] local-lis/les=55/56 n=0 ec=55/38 lis/c=55/55 les/c/f=56/56/0 sis=57 pruub=10.962291718s) [1] r=-1 lpr=57 pi=[55,57)/1 crt=40'2 lcod 0'0 mlcod 0'0 active pruub 206.566787720s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 09:38:07 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 57 pg[12.10( v 40'2 (0'0,40'2] local-lis/les=55/56 n=0 ec=55/38 lis/c=55/55 les/c/f=56/56/0 sis=57 pruub=10.962280273s) [1] r=-1 lpr=57 pi=[55,57)/1 crt=40'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 206.566787720s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 09:38:07 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 57 pg[12.1e( v 40'2 (0'0,40'2] local-lis/les=55/56 n=0 ec=55/38 lis/c=55/55 les/c/f=56/56/0 sis=57 pruub=10.961205482s) [2] r=-1 lpr=57 pi=[55,57)/1 crt=40'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 206.565750122s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 09:38:07 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 57 pg[7.1b( empty local-lis/les=49/50 n=0 ec=49/15 lis/c=49/49 les/c/f=50/50/0 sis=57 pruub=12.882024765s) [1] r=-1 lpr=57 pi=[49,57)/1 crt=0'0 mlcod 0'0 active pruub 208.486663818s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 09:38:07 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 57 pg[7.1b( empty local-lis/les=49/50 n=0 ec=49/15 lis/c=49/49 les/c/f=50/50/0 sis=57 pruub=12.881998062s) [1] r=-1 lpr=57 pi=[49,57)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 208.486663818s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 09:38:07 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 57 pg[12.17( v 40'2 (0'0,40'2] local-lis/les=55/56 n=0 ec=55/38 lis/c=55/55 les/c/f=56/56/0 sis=57 pruub=10.961801529s) [2] r=-1 lpr=57 pi=[55,57)/1 crt=40'2 lcod 0'0 mlcod 0'0 active pruub 206.566635132s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 09:38:07 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 57 pg[12.17( v 40'2 (0'0,40'2] local-lis/les=55/56 n=0 ec=55/38 lis/c=55/55 les/c/f=56/56/0 sis=57 pruub=10.961771965s) [2] r=-1 lpr=57 pi=[55,57)/1 crt=40'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 206.566635132s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 09:38:07 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 57 pg[7.1d( empty local-lis/les=49/50 n=0 ec=49/15 lis/c=49/49 les/c/f=50/50/0 sis=57 pruub=12.886302948s) [2] r=-1 lpr=57 pi=[49,57)/1 crt=0'0 mlcod 0'0 active pruub 208.491363525s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 09:38:07 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 57 pg[7.1d( empty local-lis/les=49/50 n=0 ec=49/15 lis/c=49/49 les/c/f=50/50/0 sis=57 pruub=12.886291504s) [2] r=-1 lpr=57 pi=[49,57)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 208.491363525s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 09:38:07 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 57 pg[10.13( v 40'1059 (0'0,40'1059] local-lis/les=53/55 n=5 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=57 pruub=9.949922562s) [2] r=-1 lpr=57 pi=[53,57)/1 crt=40'1059 lcod 0'0 mlcod 0'0 active pruub 205.555007935s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 09:38:07 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 57 pg[10.13( v 40'1059 (0'0,40'1059] local-lis/les=53/55 n=5 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=57 pruub=9.949909210s) [2] r=-1 lpr=57 pi=[53,57)/1 crt=40'1059 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 205.555007935s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 09:38:07 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 57 pg[7.1e( empty local-lis/les=49/50 n=0 ec=49/15 lis/c=49/49 les/c/f=50/50/0 sis=57 pruub=12.881649971s) [1] r=-1 lpr=57 pi=[49,57)/1 crt=0'0 mlcod 0'0 active pruub 208.486816406s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 09:38:07 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 57 pg[7.1e( empty local-lis/les=49/50 n=0 ec=49/15 lis/c=49/49 les/c/f=50/50/0 sis=57 pruub=12.881640434s) [1] r=-1 lpr=57 pi=[49,57)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 208.486816406s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 09:38:07 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 57 pg[10.11( v 40'1059 (0'0,40'1059] local-lis/les=53/55 n=6 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=57 pruub=9.949749947s) [2] r=-1 lpr=57 pi=[53,57)/1 crt=40'1059 lcod 0'0 mlcod 0'0 active pruub 205.554946899s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 09:38:07 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 57 pg[10.11( v 40'1059 (0'0,40'1059] local-lis/les=53/55 n=6 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=57 pruub=9.949631691s) [2] r=-1 lpr=57 pi=[53,57)/1 crt=40'1059 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 205.554946899s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 09:38:07 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 57 pg[9.10( empty local-lis/les=0/0 n=0 ec=51/32 lis/c=51/51 les/c/f=54/54/0 sis=57) [0] r=0 lpr=57 pi=[51,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:38:07 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 57 pg[11.12( empty local-lis/les=0/0 n=0 ec=53/36 lis/c=53/53 les/c/f=54/54/0 sis=57) [0] r=0 lpr=57 pi=[53,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:38:07 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 57 pg[8.12( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=51/51 les/c/f=54/54/0 sis=57) [0] r=0 lpr=57 pi=[51,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:38:07 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 57 pg[11.14( empty local-lis/les=0/0 n=0 ec=53/36 lis/c=53/53 les/c/f=54/54/0 sis=57) [0] r=0 lpr=57 pi=[53,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:38:07 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 57 pg[4.1b( empty local-lis/les=0/0 n=0 ec=47/12 lis/c=47/47 les/c/f=48/48/0 sis=57) [0] r=0 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:38:07 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 57 pg[8.17( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=51/51 les/c/f=54/54/0 sis=57) [0] r=0 lpr=57 pi=[51,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:38:07 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 57 pg[8.10( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=51/51 les/c/f=54/54/0 sis=57) [0] r=0 lpr=57 pi=[51,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:38:07 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 57 pg[9.11( empty local-lis/les=0/0 n=0 ec=51/32 lis/c=51/51 les/c/f=54/54/0 sis=57) [0] r=0 lpr=57 pi=[51,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:38:07 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 57 pg[4.1a( empty local-lis/les=0/0 n=0 ec=47/12 lis/c=47/47 les/c/f=48/48/0 sis=57) [0] r=0 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:38:07 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 57 pg[8.1b( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=51/51 les/c/f=54/54/0 sis=57) [0] r=0 lpr=57 pi=[51,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:38:07 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 57 pg[11.1b( empty local-lis/les=0/0 n=0 ec=53/36 lis/c=53/53 les/c/f=54/54/0 sis=57) [0] r=0 lpr=57 pi=[53,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:38:07 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 57 pg[11.1a( empty local-lis/les=0/0 n=0 ec=53/36 lis/c=53/53 les/c/f=54/54/0 sis=57) [0] r=0 lpr=57 pi=[53,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:38:07 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 57 pg[8.18( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=51/51 les/c/f=54/54/0 sis=57) [0] r=0 lpr=57 pi=[51,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:38:07 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 57 pg[4.13( empty local-lis/les=0/0 n=0 ec=47/12 lis/c=47/47 les/c/f=48/48/0 sis=57) [0] r=0 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:38:07 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 57 pg[11.1c( empty local-lis/les=0/0 n=0 ec=53/36 lis/c=53/53 les/c/f=54/54/0 sis=57) [0] r=0 lpr=57 pi=[53,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:38:07 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 57 pg[8.19( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=51/51 les/c/f=54/54/0 sis=57) [0] r=0 lpr=57 pi=[51,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:38:07 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 57 pg[11.1e( empty local-lis/les=0/0 n=0 ec=53/36 lis/c=53/53 les/c/f=54/54/0 sis=57) [0] r=0 lpr=57 pi=[53,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:38:07 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 57 pg[11.1d( empty local-lis/les=0/0 n=0 ec=53/36 lis/c=53/53 les/c/f=54/54/0 sis=57) [0] r=0 lpr=57 pi=[53,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:38:07 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 57 pg[4.c( empty local-lis/les=0/0 n=0 ec=47/12 lis/c=47/47 les/c/f=48/48/0 sis=57) [0] r=0 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:38:07 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 57 pg[4.18( empty local-lis/les=0/0 n=0 ec=47/12 lis/c=47/47 les/c/f=48/48/0 sis=57) [0] r=0 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:38:07 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 57 pg[8.14( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=51/51 les/c/f=54/54/0 sis=57) [0] r=0 lpr=57 pi=[51,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:38:07 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 57 pg[9.15( empty local-lis/les=0/0 n=0 ec=51/32 lis/c=51/51 les/c/f=54/54/0 sis=57) [0] r=0 lpr=57 pi=[51,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:38:07 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 57 pg[5.f( empty local-lis/les=0/0 n=0 ec=47/13 lis/c=47/47 les/c/f=49/49/0 sis=57) [0] r=0 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:38:07 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 57 pg[5.1( empty local-lis/les=0/0 n=0 ec=47/13 lis/c=47/47 les/c/f=49/49/0 sis=57) [0] r=0 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:38:07 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 57 pg[3.5( empty local-lis/les=0/0 n=0 ec=45/11 lis/c=45/45 les/c/f=46/46/0 sis=57) [0] r=0 lpr=57 pi=[45,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:38:07 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 57 pg[3.3( empty local-lis/les=0/0 n=0 ec=45/11 lis/c=45/45 les/c/f=46/46/0 sis=57) [0] r=0 lpr=57 pi=[45,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:38:07 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 57 pg[5.7( empty local-lis/les=0/0 n=0 ec=47/13 lis/c=47/47 les/c/f=49/49/0 sis=57) [0] r=0 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:38:07 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 57 pg[3.1c( empty local-lis/les=0/0 n=0 ec=45/11 lis/c=45/45 les/c/f=46/46/0 sis=57) [0] r=0 lpr=57 pi=[45,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:38:07 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 57 pg[5.2( empty local-lis/les=0/0 n=0 ec=47/13 lis/c=47/47 les/c/f=49/49/0 sis=57) [0] r=0 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:38:07 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 57 pg[5.1f( empty local-lis/les=0/0 n=0 ec=47/13 lis/c=47/47 les/c/f=49/49/0 sis=57) [0] r=0 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:38:07 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 57 pg[9.f( empty local-lis/les=0/0 n=0 ec=51/32 lis/c=51/51 les/c/f=54/54/0 sis=57) [0] r=0 lpr=57 pi=[51,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:38:07 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 57 pg[5.1b( empty local-lis/les=0/0 n=0 ec=47/13 lis/c=47/47 les/c/f=49/49/0 sis=57) [0] r=0 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:38:07 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 57 pg[3.a( empty local-lis/les=0/0 n=0 ec=45/11 lis/c=45/45 les/c/f=46/46/0 sis=57) [0] r=0 lpr=57 pi=[45,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:38:07 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 57 pg[11.f( empty local-lis/les=0/0 n=0 ec=53/36 lis/c=53/53 les/c/f=54/54/0 sis=57) [0] r=0 lpr=57 pi=[53,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:38:07 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 57 pg[3.13( empty local-lis/les=0/0 n=0 ec=45/11 lis/c=45/45 les/c/f=46/46/0 sis=57) [0] r=0 lpr=57 pi=[45,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:38:07 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 57 pg[3.d( empty local-lis/les=0/0 n=0 ec=45/11 lis/c=45/45 les/c/f=46/46/0 sis=57) [0] r=0 lpr=57 pi=[45,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:38:07 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 57 pg[3.14( empty local-lis/les=0/0 n=0 ec=45/11 lis/c=45/45 les/c/f=46/46/0 sis=57) [0] r=0 lpr=57 pi=[45,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:38:07 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 57 pg[8.8( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=51/51 les/c/f=54/54/0 sis=57) [0] r=0 lpr=57 pi=[51,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:38:07 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 57 pg[3.c( empty local-lis/les=0/0 n=0 ec=45/11 lis/c=45/45 les/c/f=46/46/0 sis=57) [0] r=0 lpr=57 pi=[45,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:38:07 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 57 pg[9.d( empty local-lis/les=0/0 n=0 ec=51/32 lis/c=51/51 les/c/f=54/54/0 sis=57) [0] r=0 lpr=57 pi=[51,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:38:07 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 57 pg[3.f( empty local-lis/les=0/0 n=0 ec=45/11 lis/c=45/45 les/c/f=46/46/0 sis=57) [0] r=0 lpr=57 pi=[45,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:38:07 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 57 pg[5.9( empty local-lis/les=0/0 n=0 ec=47/13 lis/c=47/47 les/c/f=49/49/0 sis=57) [0] r=0 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:38:07 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 57 pg[3.10( empty local-lis/les=0/0 n=0 ec=45/11 lis/c=45/45 les/c/f=46/46/0 sis=57) [0] r=0 lpr=57 pi=[45,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:38:07 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 57 pg[11.7( empty local-lis/les=0/0 n=0 ec=53/36 lis/c=53/53 les/c/f=54/54/0 sis=57) [0] r=0 lpr=57 pi=[53,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:38:07 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 57 pg[5.15( empty local-lis/les=0/0 n=0 ec=47/13 lis/c=47/47 les/c/f=49/49/0 sis=57) [0] r=0 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:38:07 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 57 pg[9.a( empty local-lis/les=0/0 n=0 ec=51/32 lis/c=51/51 les/c/f=54/54/0 sis=57) [0] r=0 lpr=57 pi=[51,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:38:07 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 57 pg[5.16( empty local-lis/les=0/0 n=0 ec=47/13 lis/c=47/47 les/c/f=49/49/0 sis=57) [0] r=0 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:38:07 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 57 pg[3.16( empty local-lis/les=0/0 n=0 ec=45/11 lis/c=45/45 les/c/f=46/46/0 sis=57) [0] r=0 lpr=57 pi=[45,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:38:07 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 57 pg[4.d( empty local-lis/les=0/0 n=0 ec=47/12 lis/c=47/47 les/c/f=48/48/0 sis=57) [0] r=0 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:38:07 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 57 pg[5.1c( empty local-lis/les=0/0 n=0 ec=47/13 lis/c=47/47 les/c/f=49/49/0 sis=57) [0] r=0 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:38:07 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 57 pg[9.12( empty local-lis/les=0/0 n=0 ec=51/32 lis/c=51/51 les/c/f=54/54/0 sis=57) [0] r=0 lpr=57 pi=[51,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:38:07 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 57 pg[5.11( empty local-lis/les=0/0 n=0 ec=47/13 lis/c=47/47 les/c/f=49/49/0 sis=57) [0] r=0 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:38:07 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 57 pg[8.4( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=51/51 les/c/f=54/54/0 sis=57) [0] r=0 lpr=57 pi=[51,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:38:07 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 57 pg[5.10( empty local-lis/les=0/0 n=0 ec=47/13 lis/c=47/47 les/c/f=49/49/0 sis=57) [0] r=0 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:38:07 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 57 pg[9.e( empty local-lis/les=0/0 n=0 ec=51/32 lis/c=51/51 les/c/f=54/54/0 sis=57) [0] r=0 lpr=57 pi=[51,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:38:07 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 57 pg[4.5( empty local-lis/les=0/0 n=0 ec=47/12 lis/c=47/47 les/c/f=48/48/0 sis=57) [0] r=0 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:38:07 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 57 pg[4.a( empty local-lis/les=0/0 n=0 ec=47/12 lis/c=47/47 les/c/f=48/48/0 sis=57) [0] r=0 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:38:07 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 57 pg[11.5( empty local-lis/les=0/0 n=0 ec=53/36 lis/c=53/53 les/c/f=54/54/0 sis=57) [0] r=0 lpr=57 pi=[53,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:38:07 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 57 pg[4.e( empty local-lis/les=0/0 n=0 ec=47/12 lis/c=47/47 les/c/f=48/48/0 sis=57) [0] r=0 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:38:07 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 57 pg[11.4( empty local-lis/les=0/0 n=0 ec=53/36 lis/c=53/53 les/c/f=54/54/0 sis=57) [0] r=0 lpr=57 pi=[53,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:38:07 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 57 pg[9.6( empty local-lis/les=0/0 n=0 ec=51/32 lis/c=51/51 les/c/f=54/54/0 sis=57) [0] r=0 lpr=57 pi=[51,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:38:07 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 57 pg[11.1( empty local-lis/les=0/0 n=0 ec=53/36 lis/c=53/53 les/c/f=54/54/0 sis=57) [0] r=0 lpr=57 pi=[53,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:38:07 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 57 pg[5.18( empty local-lis/les=0/0 n=0 ec=47/13 lis/c=47/47 les/c/f=49/49/0 sis=57) [0] r=0 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:38:07 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:38:07 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000007s ======
Oct  9 09:38:07 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:38:07.338 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000007s
Oct  9 09:38:08 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 10.12 deep-scrub starts
Oct  9 09:38:08 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 10.12 deep-scrub ok
Oct  9 09:38:08 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e58 e58: 3 total, 3 up, 3 in
Oct  9 09:38:08 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 58 pg[10.13( v 40'1059 (0'0,40'1059] local-lis/les=53/55 n=5 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=58) [2]/[0] r=0 lpr=58 pi=[53,58)/1 crt=40'1059 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 09:38:08 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 58 pg[10.13( v 40'1059 (0'0,40'1059] local-lis/les=53/55 n=5 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=58) [2]/[0] r=0 lpr=58 pi=[53,58)/1 crt=40'1059 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct  9 09:38:08 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 58 pg[10.11( v 40'1059 (0'0,40'1059] local-lis/les=53/55 n=6 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=58) [2]/[0] r=0 lpr=58 pi=[53,58)/1 crt=40'1059 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 09:38:08 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 58 pg[10.11( v 40'1059 (0'0,40'1059] local-lis/les=53/55 n=6 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=58) [2]/[0] r=0 lpr=58 pi=[53,58)/1 crt=40'1059 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct  9 09:38:08 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 58 pg[10.17( v 40'1059 (0'0,40'1059] local-lis/les=53/55 n=5 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=58) [2]/[0] r=0 lpr=58 pi=[53,58)/1 crt=40'1059 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 09:38:08 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 58 pg[10.17( v 40'1059 (0'0,40'1059] local-lis/les=53/55 n=5 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=58) [2]/[0] r=0 lpr=58 pi=[53,58)/1 crt=40'1059 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct  9 09:38:08 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 58 pg[10.15( v 40'1059 (0'0,40'1059] local-lis/les=53/55 n=5 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=58) [2]/[0] r=0 lpr=58 pi=[53,58)/1 crt=40'1059 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 09:38:08 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 58 pg[10.15( v 40'1059 (0'0,40'1059] local-lis/les=53/55 n=5 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=58) [2]/[0] r=0 lpr=58 pi=[53,58)/1 crt=40'1059 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct  9 09:38:08 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 58 pg[10.1b( v 40'1059 (0'0,40'1059] local-lis/les=53/55 n=5 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=58) [2]/[0] r=0 lpr=58 pi=[53,58)/1 crt=40'1059 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 09:38:08 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 58 pg[10.1b( v 40'1059 (0'0,40'1059] local-lis/les=53/55 n=5 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=58) [2]/[0] r=0 lpr=58 pi=[53,58)/1 crt=40'1059 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct  9 09:38:08 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 58 pg[10.1d( v 40'1059 (0'0,40'1059] local-lis/les=53/55 n=5 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=58) [2]/[0] r=0 lpr=58 pi=[53,58)/1 crt=40'1059 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 09:38:08 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 58 pg[10.1d( v 40'1059 (0'0,40'1059] local-lis/les=53/55 n=5 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=58) [2]/[0] r=0 lpr=58 pi=[53,58)/1 crt=40'1059 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct  9 09:38:08 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 58 pg[10.3( v 40'1059 (0'0,40'1059] local-lis/les=53/55 n=6 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=58) [2]/[0] r=0 lpr=58 pi=[53,58)/1 crt=40'1059 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 09:38:08 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 58 pg[10.3( v 40'1059 (0'0,40'1059] local-lis/les=53/55 n=6 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=58) [2]/[0] r=0 lpr=58 pi=[53,58)/1 crt=40'1059 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct  9 09:38:08 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 58 pg[10.1( v 40'1059 (0'0,40'1059] local-lis/les=53/55 n=6 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=58) [2]/[0] r=0 lpr=58 pi=[53,58)/1 crt=40'1059 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 09:38:08 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 58 pg[10.1( v 40'1059 (0'0,40'1059] local-lis/les=53/55 n=6 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=58) [2]/[0] r=0 lpr=58 pi=[53,58)/1 crt=40'1059 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct  9 09:38:08 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 58 pg[10.7( v 40'1059 (0'0,40'1059] local-lis/les=53/55 n=6 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=58) [2]/[0] r=0 lpr=58 pi=[53,58)/1 crt=40'1059 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 09:38:08 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 58 pg[10.5( v 56'1062 (0'0,56'1062] local-lis/les=53/55 n=6 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=58) [2]/[0] r=0 lpr=58 pi=[53,58)/1 crt=56'1060 lcod 56'1061 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 09:38:08 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 58 pg[10.7( v 40'1059 (0'0,40'1059] local-lis/les=53/55 n=6 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=58) [2]/[0] r=0 lpr=58 pi=[53,58)/1 crt=40'1059 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct  9 09:38:08 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 58 pg[10.9( v 40'1059 (0'0,40'1059] local-lis/les=53/55 n=6 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=58) [2]/[0] r=0 lpr=58 pi=[53,58)/1 crt=40'1059 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 09:38:08 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 58 pg[10.9( v 40'1059 (0'0,40'1059] local-lis/les=53/55 n=6 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=58) [2]/[0] r=0 lpr=58 pi=[53,58)/1 crt=40'1059 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct  9 09:38:08 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 58 pg[10.5( v 56'1062 (0'0,56'1062] local-lis/les=53/55 n=6 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=58) [2]/[0] r=0 lpr=58 pi=[53,58)/1 crt=56'1060 lcod 56'1061 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct  9 09:38:08 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 58 pg[10.f( v 40'1059 (0'0,40'1059] local-lis/les=53/55 n=6 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=58) [2]/[0] r=0 lpr=58 pi=[53,58)/1 crt=40'1059 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 09:38:08 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 58 pg[10.f( v 40'1059 (0'0,40'1059] local-lis/les=53/55 n=6 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=58) [2]/[0] r=0 lpr=58 pi=[53,58)/1 crt=40'1059 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct  9 09:38:08 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 58 pg[10.d( v 40'1059 (0'0,40'1059] local-lis/les=53/55 n=6 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=58) [2]/[0] r=0 lpr=58 pi=[53,58)/1 crt=40'1059 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 09:38:08 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 58 pg[10.d( v 40'1059 (0'0,40'1059] local-lis/les=53/55 n=6 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=58) [2]/[0] r=0 lpr=58 pi=[53,58)/1 crt=40'1059 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct  9 09:38:08 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 58 pg[10.b( v 40'1059 (0'0,40'1059] local-lis/les=53/55 n=6 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=58) [2]/[0] r=0 lpr=58 pi=[53,58)/1 crt=40'1059 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 09:38:08 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 58 pg[10.b( v 40'1059 (0'0,40'1059] local-lis/les=53/55 n=6 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=58) [2]/[0] r=0 lpr=58 pi=[53,58)/1 crt=40'1059 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct  9 09:38:08 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 58 pg[10.19( v 40'1059 (0'0,40'1059] local-lis/les=53/55 n=5 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=58) [2]/[0] r=0 lpr=58 pi=[53,58)/1 crt=40'1059 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 09:38:08 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 58 pg[10.19( v 40'1059 (0'0,40'1059] local-lis/les=53/55 n=5 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=58) [2]/[0] r=0 lpr=58 pi=[53,58)/1 crt=40'1059 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct  9 09:38:08 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 58 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=53/55 n=5 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=58) [2]/[0] r=0 lpr=58 pi=[53,58)/1 crt=40'1059 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 09:38:08 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 58 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=53/55 n=5 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=58) [2]/[0] r=0 lpr=58 pi=[53,58)/1 crt=40'1059 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct  9 09:38:08 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 58 pg[5.1c( empty local-lis/les=57/58 n=0 ec=47/13 lis/c=47/47 les/c/f=49/49/0 sis=57) [0] r=0 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:38:08 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 58 pg[9.10( v 33'9 (0'0,33'9] local-lis/les=57/58 n=0 ec=51/32 lis/c=51/51 les/c/f=54/54/0 sis=57) [0] r=0 lpr=57 pi=[51,57)/1 crt=33'9 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:38:08 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 58 pg[11.12( v 40'96 (0'0,40'96] local-lis/les=57/58 n=0 ec=53/36 lis/c=53/53 les/c/f=54/54/0 sis=57) [0] r=0 lpr=57 pi=[53,57)/1 crt=40'96 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:38:08 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 58 pg[9.12( v 33'9 lc 0'0 (0'0,33'9] local-lis/les=57/58 n=0 ec=51/32 lis/c=51/51 les/c/f=54/54/0 sis=57) [0] r=0 lpr=57 pi=[51,57)/1 crt=33'9 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:38:08 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 58 pg[8.12( v 56'69 lc 0'0 (0'0,56'69] local-lis/les=57/58 n=1 ec=51/29 lis/c=51/51 les/c/f=54/54/0 sis=57) [0] r=0 lpr=57 pi=[51,57)/1 crt=56'69 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:38:08 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 58 pg[5.1f( empty local-lis/les=57/58 n=0 ec=47/13 lis/c=47/47 les/c/f=49/49/0 sis=57) [0] r=0 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:38:08 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 58 pg[9.15( v 33'9 (0'0,33'9] local-lis/les=57/58 n=0 ec=51/32 lis/c=51/51 les/c/f=54/54/0 sis=57) [0] r=0 lpr=57 pi=[51,57)/1 crt=33'9 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:38:08 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 58 pg[4.18( empty local-lis/les=57/58 n=0 ec=47/12 lis/c=47/47 les/c/f=48/48/0 sis=57) [0] r=0 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:38:08 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 58 pg[5.1b( empty local-lis/les=57/58 n=0 ec=47/13 lis/c=47/47 les/c/f=49/49/0 sis=57) [0] r=0 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:38:08 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 58 pg[8.14( v 50'68 (0'0,50'68] local-lis/les=57/58 n=0 ec=51/29 lis/c=51/51 les/c/f=54/54/0 sis=57) [0] r=0 lpr=57 pi=[51,57)/1 crt=50'68 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:38:08 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 58 pg[4.1a( empty local-lis/les=57/58 n=0 ec=47/12 lis/c=47/47 les/c/f=48/48/0 sis=57) [0] r=0 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:38:08 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 58 pg[5.18( empty local-lis/les=57/58 n=0 ec=47/13 lis/c=47/47 les/c/f=49/49/0 sis=57) [0] r=0 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:38:08 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 58 pg[11.14( v 56'99 lc 40'86 (0'0,56'99] local-lis/les=57/58 n=0 ec=53/36 lis/c=53/53 les/c/f=54/54/0 sis=57) [0] r=0 lpr=57 pi=[53,57)/1 crt=56'99 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:38:08 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 58 pg[3.1c( empty local-lis/les=57/58 n=0 ec=45/11 lis/c=45/45 les/c/f=46/46/0 sis=57) [0] r=0 lpr=57 pi=[45,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:38:08 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 58 pg[5.15( empty local-lis/les=57/58 n=0 ec=47/13 lis/c=47/47 les/c/f=49/49/0 sis=57) [0] r=0 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:38:08 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 58 pg[4.1b( empty local-lis/les=57/58 n=0 ec=47/12 lis/c=47/47 les/c/f=48/48/0 sis=57) [0] r=0 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:38:08 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 58 pg[3.13( empty local-lis/les=57/58 n=0 ec=45/11 lis/c=45/45 les/c/f=46/46/0 sis=57) [0] r=0 lpr=57 pi=[45,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:38:08 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 58 pg[11.1b( v 40'96 (0'0,40'96] local-lis/les=57/58 n=0 ec=53/36 lis/c=53/53 les/c/f=54/54/0 sis=57) [0] r=0 lpr=57 pi=[53,57)/1 crt=40'96 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:38:08 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 58 pg[8.18( v 50'68 lc 43'19 (0'0,50'68] local-lis/les=57/58 n=0 ec=51/29 lis/c=51/51 les/c/f=54/54/0 sis=57) [0] r=0 lpr=57 pi=[51,57)/1 crt=50'68 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:38:08 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 58 pg[11.1a( v 40'96 (0'0,40'96] local-lis/les=57/58 n=0 ec=53/36 lis/c=53/53 les/c/f=54/54/0 sis=57) [0] r=0 lpr=57 pi=[53,57)/1 crt=40'96 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:38:08 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 58 pg[8.19( v 50'68 (0'0,50'68] local-lis/les=57/58 n=0 ec=51/29 lis/c=51/51 les/c/f=54/54/0 sis=57) [0] r=0 lpr=57 pi=[51,57)/1 crt=50'68 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:38:08 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 58 pg[4.13( empty local-lis/les=57/58 n=0 ec=47/12 lis/c=47/47 les/c/f=48/48/0 sis=57) [0] r=0 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:38:08 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 58 pg[11.1c( v 40'96 (0'0,40'96] local-lis/les=57/58 n=0 ec=53/36 lis/c=53/53 les/c/f=54/54/0 sis=57) [0] r=0 lpr=57 pi=[53,57)/1 crt=40'96 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:38:08 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 58 pg[4.c( empty local-lis/les=57/58 n=0 ec=47/12 lis/c=47/47 les/c/f=48/48/0 sis=57) [0] r=0 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:38:08 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 58 pg[4.d( empty local-lis/les=57/58 n=0 ec=47/12 lis/c=47/47 les/c/f=48/48/0 sis=57) [0] r=0 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:38:08 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 58 pg[3.a( empty local-lis/les=57/58 n=0 ec=45/11 lis/c=45/45 les/c/f=46/46/0 sis=57) [0] r=0 lpr=57 pi=[45,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:38:08 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 58 pg[3.14( empty local-lis/les=57/58 n=0 ec=45/11 lis/c=45/45 les/c/f=46/46/0 sis=57) [0] r=0 lpr=57 pi=[45,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:38:08 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 58 pg[11.5( v 40'96 (0'0,40'96] local-lis/les=57/58 n=1 ec=53/36 lis/c=53/53 les/c/f=54/54/0 sis=57) [0] r=0 lpr=57 pi=[53,57)/1 crt=40'96 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:38:08 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 58 pg[4.a( empty local-lis/les=57/58 n=0 ec=47/12 lis/c=47/47 les/c/f=48/48/0 sis=57) [0] r=0 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:38:08 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 58 pg[8.17( v 56'69 lc 0'0 (0'0,56'69] local-lis/les=57/58 n=1 ec=51/29 lis/c=51/51 les/c/f=54/54/0 sis=57) [0] r=0 lpr=57 pi=[51,57)/1 crt=56'69 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:38:08 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 58 pg[9.6( v 33'9 lc 0'0 (0'0,33'9] local-lis/les=57/58 n=1 ec=51/32 lis/c=51/51 les/c/f=54/54/0 sis=57) [0] r=0 lpr=57 pi=[51,57)/1 crt=33'9 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:38:08 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 58 pg[3.c( empty local-lis/les=57/58 n=0 ec=45/11 lis/c=45/45 les/c/f=46/46/0 sis=57) [0] r=0 lpr=57 pi=[45,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:38:08 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 58 pg[9.a( v 33'9 (0'0,33'9] local-lis/les=57/58 n=0 ec=51/32 lis/c=51/51 les/c/f=54/54/0 sis=57) [0] r=0 lpr=57 pi=[51,57)/1 crt=33'9 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:38:08 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 58 pg[9.d( v 33'9 (0'0,33'9] local-lis/les=57/58 n=0 ec=51/32 lis/c=51/51 les/c/f=54/54/0 sis=57) [0] r=0 lpr=57 pi=[51,57)/1 crt=33'9 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:38:08 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 58 pg[11.f( v 40'96 (0'0,40'96] local-lis/les=57/58 n=0 ec=53/36 lis/c=53/53 les/c/f=54/54/0 sis=57) [0] r=0 lpr=57 pi=[53,57)/1 crt=40'96 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:38:08 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 58 pg[5.1( empty local-lis/les=57/58 n=0 ec=47/13 lis/c=47/47 les/c/f=49/49/0 sis=57) [0] r=0 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:38:08 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 58 pg[9.f( v 33'9 lc 0'0 (0'0,33'9] local-lis/les=57/58 n=0 ec=51/32 lis/c=51/51 les/c/f=54/54/0 sis=57) [0] r=0 lpr=57 pi=[51,57)/1 crt=33'9 mlcod 0'0 active+degraded m=3 mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:38:08 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 58 pg[3.5( empty local-lis/les=57/58 n=0 ec=45/11 lis/c=45/45 les/c/f=46/46/0 sis=57) [0] r=0 lpr=57 pi=[45,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:38:08 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 58 pg[3.d( empty local-lis/les=57/58 n=0 ec=45/11 lis/c=45/45 les/c/f=46/46/0 sis=57) [0] r=0 lpr=57 pi=[45,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:38:08 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 58 pg[8.4( v 50'68 (0'0,50'68] local-lis/les=57/58 n=1 ec=51/29 lis/c=51/51 les/c/f=54/54/0 sis=57) [0] r=0 lpr=57 pi=[51,57)/1 crt=50'68 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:38:08 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 58 pg[11.7( v 40'96 (0'0,40'96] local-lis/les=57/58 n=1 ec=53/36 lis/c=53/53 les/c/f=54/54/0 sis=57) [0] r=0 lpr=57 pi=[53,57)/1 crt=40'96 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:38:08 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 58 pg[5.9( empty local-lis/les=57/58 n=0 ec=47/13 lis/c=47/47 les/c/f=49/49/0 sis=57) [0] r=0 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:38:08 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 58 pg[11.4( v 40'96 (0'0,40'96] local-lis/les=57/58 n=1 ec=53/36 lis/c=53/53 les/c/f=54/54/0 sis=57) [0] r=0 lpr=57 pi=[53,57)/1 crt=40'96 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:38:08 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 58 pg[3.f( empty local-lis/les=57/58 n=0 ec=45/11 lis/c=45/45 les/c/f=46/46/0 sis=57) [0] r=0 lpr=57 pi=[45,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:38:08 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 58 pg[11.1( v 40'96 (0'0,40'96] local-lis/les=57/58 n=1 ec=53/36 lis/c=53/53 les/c/f=54/54/0 sis=57) [0] r=0 lpr=57 pi=[53,57)/1 crt=40'96 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:38:08 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 58 pg[3.3( empty local-lis/les=57/58 n=0 ec=45/11 lis/c=45/45 les/c/f=46/46/0 sis=57) [0] r=0 lpr=57 pi=[45,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:38:08 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 58 pg[9.e( v 33'9 (0'0,33'9] local-lis/les=57/58 n=0 ec=51/32 lis/c=51/51 les/c/f=54/54/0 sis=57) [0] r=0 lpr=57 pi=[51,57)/1 crt=33'9 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:38:08 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 58 pg[5.2( empty local-lis/les=57/58 n=0 ec=47/13 lis/c=47/47 les/c/f=49/49/0 sis=57) [0] r=0 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:38:08 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 58 pg[5.7( empty local-lis/les=57/58 n=0 ec=47/13 lis/c=47/47 les/c/f=49/49/0 sis=57) [0] r=0 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:38:08 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 58 pg[8.8( v 50'68 (0'0,50'68] local-lis/les=57/58 n=0 ec=51/29 lis/c=51/51 les/c/f=54/54/0 sis=57) [0] r=0 lpr=57 pi=[51,57)/1 crt=50'68 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:38:08 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 58 pg[5.16( empty local-lis/les=57/58 n=0 ec=47/13 lis/c=47/47 les/c/f=49/49/0 sis=57) [0] r=0 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:38:08 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 58 pg[8.1b( v 50'68 lc 41'8 (0'0,50'68] local-lis/les=57/58 n=0 ec=51/29 lis/c=51/51 les/c/f=54/54/0 sis=57) [0] r=0 lpr=57 pi=[51,57)/1 crt=50'68 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:38:08 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 58 pg[11.1d( v 40'96 (0'0,40'96] local-lis/les=57/58 n=0 ec=53/36 lis/c=53/53 les/c/f=54/54/0 sis=57) [0] r=0 lpr=57 pi=[53,57)/1 crt=40'96 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:38:08 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 58 pg[3.10( empty local-lis/les=57/58 n=0 ec=45/11 lis/c=45/45 les/c/f=46/46/0 sis=57) [0] r=0 lpr=57 pi=[45,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:38:08 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 58 pg[5.f( empty local-lis/les=57/58 n=0 ec=47/13 lis/c=47/47 les/c/f=49/49/0 sis=57) [0] r=0 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:38:08 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 58 pg[5.10( empty local-lis/les=57/58 n=0 ec=47/13 lis/c=47/47 les/c/f=49/49/0 sis=57) [0] r=0 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:38:08 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 58 pg[3.16( empty local-lis/les=57/58 n=0 ec=45/11 lis/c=45/45 les/c/f=46/46/0 sis=57) [0] r=0 lpr=57 pi=[45,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:38:08 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 58 pg[4.e( empty local-lis/les=57/58 n=0 ec=47/12 lis/c=47/47 les/c/f=48/48/0 sis=57) [0] r=0 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:38:08 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 58 pg[11.1e( v 40'96 (0'0,40'96] local-lis/les=57/58 n=0 ec=53/36 lis/c=53/53 les/c/f=54/54/0 sis=57) [0] r=0 lpr=57 pi=[53,57)/1 crt=40'96 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:38:08 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 58 pg[8.10( v 54'71 lc 54'70 (0'0,54'71] local-lis/les=57/58 n=0 ec=51/29 lis/c=51/51 les/c/f=54/54/0 sis=57) [0] r=0 lpr=57 pi=[51,57)/1 crt=54'71 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:38:08 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 58 pg[4.5( empty local-lis/les=57/58 n=0 ec=47/12 lis/c=47/47 les/c/f=48/48/0 sis=57) [0] r=0 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:38:08 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 58 pg[9.11( v 33'9 (0'0,33'9] local-lis/les=57/58 n=0 ec=51/32 lis/c=51/51 les/c/f=54/54/0 sis=57) [0] r=0 lpr=57 pi=[51,57)/1 crt=33'9 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:38:08 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 58 pg[5.11( empty local-lis/les=57/58 n=0 ec=47/13 lis/c=47/47 les/c/f=49/49/0 sis=57) [0] r=0 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:38:08 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "osd pool set", "pool": ".nfs", "var": "pgp_num_actual", "val": "32"}]': finished
Oct  9 09:38:08 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"}]': finished
Oct  9 09:38:08 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"}]': finished
Oct  9 09:38:08 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"}]': finished
Oct  9 09:38:08 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "2"}]': finished
Oct  9 09:38:08 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"}]': finished
Oct  9 09:38:08 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"}]': finished
Oct  9 09:38:08 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"}]': finished
Oct  9 09:38:08 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"}]': finished
Oct  9 09:38:08 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"}]': finished
Oct  9 09:38:08 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:38:08 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:38:08 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:38:08.528 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:38:09 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e59 e59: 3 total, 3 up, 3 in
Oct  9 09:38:09 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 59 pg[10.16( v 40'1059 (0'0,40'1059] local-lis/les=53/55 n=5 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=59 pruub=15.945956230s) [1] r=-1 lpr=59 pi=[53,59)/1 crt=40'1059 lcod 0'0 mlcod 0'0 active pruub 213.555023193s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 09:38:09 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 59 pg[10.16( v 40'1059 (0'0,40'1059] local-lis/les=53/55 n=5 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=59 pruub=15.945867538s) [1] r=-1 lpr=59 pi=[53,59)/1 crt=40'1059 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 213.555023193s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 09:38:09 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 59 pg[10.1a( v 40'1059 (0'0,40'1059] local-lis/les=53/55 n=5 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=59 pruub=15.945472717s) [1] r=-1 lpr=59 pi=[53,59)/1 crt=40'1059 lcod 0'0 mlcod 0'0 active pruub 213.554992676s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 09:38:09 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 59 pg[10.1a( v 40'1059 (0'0,40'1059] local-lis/les=53/55 n=5 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=59 pruub=15.945446014s) [1] r=-1 lpr=59 pi=[53,59)/1 crt=40'1059 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 213.554992676s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 09:38:09 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 59 pg[10.2( v 40'1059 (0'0,40'1059] local-lis/les=53/55 n=6 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=59 pruub=15.944999695s) [1] r=-1 lpr=59 pi=[53,59)/1 crt=40'1059 lcod 0'0 mlcod 0'0 active pruub 213.554885864s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 09:38:09 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 59 pg[10.2( v 40'1059 (0'0,40'1059] local-lis/les=53/55 n=6 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=59 pruub=15.944981575s) [1] r=-1 lpr=59 pi=[53,59)/1 crt=40'1059 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 213.554885864s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 09:38:09 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 59 pg[10.6( v 40'1059 (0'0,40'1059] local-lis/les=53/55 n=6 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=59 pruub=15.943867683s) [1] r=-1 lpr=59 pi=[53,59)/1 crt=40'1059 lcod 0'0 mlcod 0'0 active pruub 213.554534912s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 09:38:09 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 59 pg[10.6( v 40'1059 (0'0,40'1059] local-lis/les=53/55 n=6 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=59 pruub=15.943853378s) [1] r=-1 lpr=59 pi=[53,59)/1 crt=40'1059 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 213.554534912s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 09:38:09 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 59 pg[10.a( v 40'1059 (0'0,40'1059] local-lis/les=53/55 n=6 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=59 pruub=15.944041252s) [1] r=-1 lpr=59 pi=[53,59)/1 crt=40'1059 lcod 0'0 mlcod 0'0 active pruub 213.554870605s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 09:38:09 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 59 pg[10.a( v 40'1059 (0'0,40'1059] local-lis/les=53/55 n=6 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=59 pruub=15.944030762s) [1] r=-1 lpr=59 pi=[53,59)/1 crt=40'1059 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 213.554870605s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 09:38:09 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 59 pg[10.e( v 40'1059 (0'0,40'1059] local-lis/les=53/55 n=6 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=59 pruub=15.943557739s) [1] r=-1 lpr=59 pi=[53,59)/1 crt=40'1059 lcod 0'0 mlcod 0'0 active pruub 213.554870605s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 09:38:09 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 59 pg[10.e( v 40'1059 (0'0,40'1059] local-lis/les=53/55 n=6 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=59 pruub=15.943533897s) [1] r=-1 lpr=59 pi=[53,59)/1 crt=40'1059 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 213.554870605s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 09:38:09 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 59 pg[10.1e( v 40'1059 (0'0,40'1059] local-lis/les=53/55 n=5 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=59 pruub=15.942595482s) [1] r=-1 lpr=59 pi=[53,59)/1 crt=40'1059 lcod 0'0 mlcod 0'0 active pruub 213.554321289s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 09:38:09 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 59 pg[10.1e( v 40'1059 (0'0,40'1059] local-lis/les=53/55 n=5 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=59 pruub=15.942577362s) [1] r=-1 lpr=59 pi=[53,59)/1 crt=40'1059 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 213.554321289s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 09:38:09 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 59 pg[10.12( v 40'1059 (0'0,40'1059] local-lis/les=53/55 n=4 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=59 pruub=15.940258980s) [1] r=-1 lpr=59 pi=[53,59)/1 crt=40'1059 lcod 0'0 mlcod 0'0 active pruub 213.552383423s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 09:38:09 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 59 pg[10.12( v 40'1059 (0'0,40'1059] local-lis/les=53/55 n=4 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=59 pruub=15.940237045s) [1] r=-1 lpr=59 pi=[53,59)/1 crt=40'1059 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 213.552383423s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 09:38:09 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 59 pg[6.e( empty local-lis/les=0/0 n=0 ec=49/14 lis/c=49/49 les/c/f=50/50/0 sis=59) [0] r=0 lpr=59 pi=[49,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:38:09 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 59 pg[6.6( empty local-lis/les=0/0 n=0 ec=49/14 lis/c=49/49 les/c/f=50/50/0 sis=59) [0] r=0 lpr=59 pi=[49,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:38:09 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 59 pg[6.a( empty local-lis/les=0/0 n=0 ec=49/14 lis/c=49/49 les/c/f=50/50/0 sis=59) [0] r=0 lpr=59 pi=[49,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:38:09 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 59 pg[6.2( empty local-lis/les=0/0 n=0 ec=49/14 lis/c=49/49 les/c/f=50/50/0 sis=59) [0] r=0 lpr=59 pi=[49,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:38:09 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "3"}]: dispatch
Oct  9 09:38:09 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"}]: dispatch
Oct  9 09:38:09 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "3"}]': finished
Oct  9 09:38:09 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"}]': finished
Oct  9 09:38:09 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 59 pg[10.b( v 40'1059 (0'0,40'1059] local-lis/les=58/59 n=6 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=58) [2]/[0] async=[2] r=0 lpr=58 pi=[53,58)/1 crt=40'1059 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:38:09 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 59 pg[10.f( v 40'1059 (0'0,40'1059] local-lis/les=58/59 n=6 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=58) [2]/[0] async=[2] r=0 lpr=58 pi=[53,58)/1 crt=40'1059 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:38:09 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 59 pg[10.15( v 40'1059 (0'0,40'1059] local-lis/les=58/59 n=5 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=58) [2]/[0] async=[2] r=0 lpr=58 pi=[53,58)/1 crt=40'1059 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:38:09 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 59 pg[10.1( v 40'1059 (0'0,40'1059] local-lis/les=58/59 n=6 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=58) [2]/[0] async=[2] r=0 lpr=58 pi=[53,58)/1 crt=40'1059 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:38:09 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 59 pg[10.7( v 40'1059 (0'0,40'1059] local-lis/les=58/59 n=6 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=58) [2]/[0] async=[2] r=0 lpr=58 pi=[53,58)/1 crt=40'1059 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:38:09 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 59 pg[10.9( v 40'1059 (0'0,40'1059] local-lis/les=58/59 n=6 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=58) [2]/[0] async=[2] r=0 lpr=58 pi=[53,58)/1 crt=40'1059 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:38:09 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 59 pg[10.3( v 40'1059 (0'0,40'1059] local-lis/les=58/59 n=6 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=58) [2]/[0] async=[2] r=0 lpr=58 pi=[53,58)/1 crt=40'1059 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=8}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:38:09 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 59 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=58/59 n=5 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=58) [2]/[0] async=[2] r=0 lpr=58 pi=[53,58)/1 crt=40'1059 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:38:09 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 59 pg[10.5( v 56'1062 (0'0,56'1062] local-lis/les=58/59 n=6 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=58) [2]/[0] async=[2] r=0 lpr=58 pi=[53,58)/1 crt=56'1062 lcod 56'1061 mlcod 0'0 active+remapped mbc={255={(0+1)=8}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:38:09 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 59 pg[10.17( v 40'1059 (0'0,40'1059] local-lis/les=58/59 n=5 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=58) [2]/[0] async=[2] r=0 lpr=58 pi=[53,58)/1 crt=40'1059 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=3}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:38:09 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 59 pg[10.1d( v 40'1059 (0'0,40'1059] local-lis/les=58/59 n=5 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=58) [2]/[0] async=[2] r=0 lpr=58 pi=[53,58)/1 crt=40'1059 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:38:09 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 59 pg[10.19( v 40'1059 (0'0,40'1059] local-lis/les=58/59 n=5 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=58) [2]/[0] async=[2] r=0 lpr=58 pi=[53,58)/1 crt=40'1059 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:38:09 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 59 pg[10.d( v 40'1059 (0'0,40'1059] local-lis/les=58/59 n=6 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=58) [2]/[0] async=[2] r=0 lpr=58 pi=[53,58)/1 crt=40'1059 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=8}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:38:09 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 59 pg[10.11( v 40'1059 (0'0,40'1059] local-lis/les=58/59 n=6 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=58) [2]/[0] async=[2] r=0 lpr=58 pi=[53,58)/1 crt=40'1059 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:38:09 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 59 pg[10.1b( v 40'1059 (0'0,40'1059] local-lis/les=58/59 n=5 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=58) [2]/[0] async=[2] r=0 lpr=58 pi=[53,58)/1 crt=40'1059 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:38:09 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 59 pg[10.13( v 40'1059 (0'0,40'1059] local-lis/les=58/59 n=5 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=58) [2]/[0] async=[2] r=0 lpr=58 pi=[53,58)/1 crt=40'1059 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:38:09 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[19552]: 09/10/2025 09:38:09 : epoch 68e78273 : compute-1 : ganesha.nfsd-2[main] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Oct  9 09:38:09 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[19552]: 09/10/2025 09:38:09 : epoch 68e78273 : compute-1 : ganesha.nfsd-2[main] main :NFS STARTUP :WARN :No export entries found in configuration file !!!
Oct  9 09:38:09 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[19552]: 09/10/2025 09:38:09 : epoch 68e78273 : compute-1 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:24): Unknown block (RADOS_URLS)
Oct  9 09:38:09 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[19552]: 09/10/2025 09:38:09 : epoch 68e78273 : compute-1 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:29): Unknown block (RGW)
Oct  9 09:38:09 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[19552]: 09/10/2025 09:38:09 : epoch 68e78273 : compute-1 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :CAP_SYS_RESOURCE was successfully removed for proper quota management in FSAL
Oct  9 09:38:09 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[19552]: 09/10/2025 09:38:09 : epoch 68e78273 : compute-1 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :currently set capabilities are: cap_chown,cap_dac_override,cap_fowner,cap_fsetid,cap_kill,cap_setgid,cap_setuid,cap_setpcap,cap_net_bind_service,cap_sys_chroot,cap_setfcap=ep
Oct  9 09:38:09 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[19552]: 09/10/2025 09:38:09 : epoch 68e78273 : compute-1 : ganesha.nfsd-2[main] gsh_dbus_pkginit :DBUS :CRIT :dbus_bus_get failed (Failed to connect to socket /run/dbus/system_bus_socket: No such file or directory)
Oct  9 09:38:09 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[19552]: 09/10/2025 09:38:09 : epoch 68e78273 : compute-1 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Oct  9 09:38:09 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[19552]: 09/10/2025 09:38:09 : epoch 68e78273 : compute-1 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Oct  9 09:38:09 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[19552]: 09/10/2025 09:38:09 : epoch 68e78273 : compute-1 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Oct  9 09:38:09 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[19552]: 09/10/2025 09:38:09 : epoch 68e78273 : compute-1 : ganesha.nfsd-2[main] nfs_Init_svc :DISP :CRIT :Cannot acquire credentials for principal nfs
Oct  9 09:38:09 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[19552]: 09/10/2025 09:38:09 : epoch 68e78273 : compute-1 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Oct  9 09:38:09 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[19552]: 09/10/2025 09:38:09 : epoch 68e78273 : compute-1 : ganesha.nfsd-2[main] nfs_Init_admin_thread :NFS CB :EVENT :Admin thread initialized
Oct  9 09:38:09 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[19552]: 09/10/2025 09:38:09 : epoch 68e78273 : compute-1 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :EVENT :Callback creds directory (/var/run/ganesha) already exists
Oct  9 09:38:09 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[19552]: 09/10/2025 09:38:09 : epoch 68e78273 : compute-1 : ganesha.nfsd-2[main] find_keytab_entry :NFS CB :WARN :Configuration file does not specify default realm while getting default realm name
Oct  9 09:38:09 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[19552]: 09/10/2025 09:38:09 : epoch 68e78273 : compute-1 : ganesha.nfsd-2[main] gssd_refresh_krb5_machine_credential :NFS CB :CRIT :ERROR: gssd_refresh_krb5_machine_credential: no usable keytab entry found in keytab /etc/krb5.keytab for connection with host localhost
Oct  9 09:38:09 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[19552]: 09/10/2025 09:38:09 : epoch 68e78273 : compute-1 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :WARN :gssd_refresh_krb5_machine_credential failed (-1765328160:0)
Oct  9 09:38:09 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[19552]: 09/10/2025 09:38:09 : epoch 68e78273 : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :Starting delayed executor.
Oct  9 09:38:09 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[19552]: 09/10/2025 09:38:09 : epoch 68e78273 : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :gsh_dbusthread was started successfully
Oct  9 09:38:09 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[19552]: 09/10/2025 09:38:09 : epoch 68e78273 : compute-1 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :CRIT :DBUS not initialized, service thread exiting
Oct  9 09:38:09 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[19552]: 09/10/2025 09:38:09 : epoch 68e78273 : compute-1 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :EVENT :shutdown
Oct  9 09:38:09 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[19552]: 09/10/2025 09:38:09 : epoch 68e78273 : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :admin thread was started successfully
Oct  9 09:38:09 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[19552]: 09/10/2025 09:38:09 : epoch 68e78273 : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :reaper thread was started successfully
Oct  9 09:38:09 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[19552]: 09/10/2025 09:38:09 : epoch 68e78273 : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :General fridge was started successfully
Oct  9 09:38:09 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[19552]: 09/10/2025 09:38:09 : epoch 68e78273 : compute-1 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Oct  9 09:38:09 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[19552]: 09/10/2025 09:38:09 : epoch 68e78273 : compute-1 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :             NFS SERVER INITIALIZED
Oct  9 09:38:09 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[19552]: 09/10/2025 09:38:09 : epoch 68e78273 : compute-1 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Oct  9 09:38:09 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[19552]: 09/10/2025 09:38:09 : epoch 68e78273 : compute-1 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 09:38:09 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:38:09 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:38:09 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:38:09.351 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:38:09 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[19552]: 09/10/2025 09:38:09 : epoch 68e78273 : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc740000df0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:38:10 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[19552]: 09/10/2025 09:38:10 : epoch 68e78273 : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc734001e10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:38:10 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e60 e60: 3 total, 3 up, 3 in
Oct  9 09:38:10 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 60 pg[10.12( v 40'1059 (0'0,40'1059] local-lis/les=53/55 n=4 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=60) [1]/[0] r=0 lpr=60 pi=[53,60)/1 crt=40'1059 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 09:38:10 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 60 pg[10.12( v 40'1059 (0'0,40'1059] local-lis/les=53/55 n=4 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=60) [1]/[0] r=0 lpr=60 pi=[53,60)/1 crt=40'1059 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct  9 09:38:10 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 60 pg[10.a( v 40'1059 (0'0,40'1059] local-lis/les=53/55 n=6 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=60) [1]/[0] r=0 lpr=60 pi=[53,60)/1 crt=40'1059 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 09:38:10 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 60 pg[10.a( v 40'1059 (0'0,40'1059] local-lis/les=53/55 n=6 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=60) [1]/[0] r=0 lpr=60 pi=[53,60)/1 crt=40'1059 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct  9 09:38:10 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 60 pg[10.e( v 40'1059 (0'0,40'1059] local-lis/les=53/55 n=6 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=60) [1]/[0] r=0 lpr=60 pi=[53,60)/1 crt=40'1059 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 09:38:10 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 60 pg[10.e( v 40'1059 (0'0,40'1059] local-lis/les=53/55 n=6 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=60) [1]/[0] r=0 lpr=60 pi=[53,60)/1 crt=40'1059 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct  9 09:38:10 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 60 pg[10.6( v 40'1059 (0'0,40'1059] local-lis/les=53/55 n=6 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=60) [1]/[0] r=0 lpr=60 pi=[53,60)/1 crt=40'1059 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 09:38:10 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 60 pg[10.6( v 40'1059 (0'0,40'1059] local-lis/les=53/55 n=6 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=60) [1]/[0] r=0 lpr=60 pi=[53,60)/1 crt=40'1059 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct  9 09:38:10 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 60 pg[10.1a( v 40'1059 (0'0,40'1059] local-lis/les=53/55 n=5 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=60) [1]/[0] r=0 lpr=60 pi=[53,60)/1 crt=40'1059 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 09:38:10 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 60 pg[10.1a( v 40'1059 (0'0,40'1059] local-lis/les=53/55 n=5 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=60) [1]/[0] r=0 lpr=60 pi=[53,60)/1 crt=40'1059 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct  9 09:38:10 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 60 pg[10.16( v 40'1059 (0'0,40'1059] local-lis/les=53/55 n=5 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=60) [1]/[0] r=0 lpr=60 pi=[53,60)/1 crt=40'1059 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 09:38:10 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 60 pg[10.2( v 40'1059 (0'0,40'1059] local-lis/les=53/55 n=6 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=60) [1]/[0] r=0 lpr=60 pi=[53,60)/1 crt=40'1059 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 09:38:10 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 60 pg[10.2( v 40'1059 (0'0,40'1059] local-lis/les=53/55 n=6 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=60) [1]/[0] r=0 lpr=60 pi=[53,60)/1 crt=40'1059 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct  9 09:38:10 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 60 pg[10.16( v 40'1059 (0'0,40'1059] local-lis/les=53/55 n=5 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=60) [1]/[0] r=0 lpr=60 pi=[53,60)/1 crt=40'1059 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct  9 09:38:10 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 60 pg[10.17( v 40'1059 (0'0,40'1059] local-lis/les=58/59 n=5 ec=53/34 lis/c=58/53 les/c/f=59/55/0 sis=60 pruub=15.008573532s) [2] async=[2] r=-1 lpr=60 pi=[53,60)/1 crt=40'1059 lcod 0'0 mlcod 0'0 active pruub 213.622787476s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 09:38:10 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 60 pg[10.17( v 40'1059 (0'0,40'1059] local-lis/les=58/59 n=5 ec=53/34 lis/c=58/53 les/c/f=59/55/0 sis=60 pruub=15.008420944s) [2] r=-1 lpr=60 pi=[53,60)/1 crt=40'1059 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 213.622787476s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 09:38:10 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 60 pg[10.15( v 40'1059 (0'0,40'1059] local-lis/les=58/59 n=5 ec=53/34 lis/c=58/53 les/c/f=59/55/0 sis=60 pruub=15.007658005s) [2] async=[2] r=-1 lpr=60 pi=[53,60)/1 crt=40'1059 lcod 0'0 mlcod 0'0 active pruub 213.622253418s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 09:38:10 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 60 pg[10.15( v 40'1059 (0'0,40'1059] local-lis/les=58/59 n=5 ec=53/34 lis/c=58/53 les/c/f=59/55/0 sis=60 pruub=15.007628441s) [2] r=-1 lpr=60 pi=[53,60)/1 crt=40'1059 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 213.622253418s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 09:38:10 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 60 pg[10.1e( v 40'1059 (0'0,40'1059] local-lis/les=53/55 n=5 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=60) [1]/[0] r=0 lpr=60 pi=[53,60)/1 crt=40'1059 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 09:38:10 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 60 pg[10.1e( v 40'1059 (0'0,40'1059] local-lis/les=53/55 n=5 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=60) [1]/[0] r=0 lpr=60 pi=[53,60)/1 crt=40'1059 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct  9 09:38:10 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 60 pg[10.1( v 40'1059 (0'0,40'1059] local-lis/les=58/59 n=6 ec=53/34 lis/c=58/53 les/c/f=59/55/0 sis=60 pruub=15.007069588s) [2] async=[2] r=-1 lpr=60 pi=[53,60)/1 crt=40'1059 lcod 0'0 mlcod 0'0 active pruub 213.622329712s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 09:38:10 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 60 pg[10.7( v 40'1059 (0'0,40'1059] local-lis/les=58/59 n=6 ec=53/34 lis/c=58/53 les/c/f=59/55/0 sis=60 pruub=15.007202148s) [2] async=[2] r=-1 lpr=60 pi=[53,60)/1 crt=40'1059 lcod 0'0 mlcod 0'0 active pruub 213.622451782s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 09:38:10 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 60 pg[10.1( v 40'1059 (0'0,40'1059] local-lis/les=58/59 n=6 ec=53/34 lis/c=58/53 les/c/f=59/55/0 sis=60 pruub=15.007037163s) [2] r=-1 lpr=60 pi=[53,60)/1 crt=40'1059 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 213.622329712s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 09:38:10 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 60 pg[10.5( v 59'1068 (0'0,59'1068] local-lis/les=58/59 n=6 ec=53/34 lis/c=58/53 les/c/f=59/55/0 sis=60 pruub=15.007274628s) [2] async=[2] r=-1 lpr=60 pi=[53,60)/1 crt=56'1062 lcod 59'1067 mlcod 59'1067 active pruub 213.622756958s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 09:38:10 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 60 pg[10.5( v 59'1068 (0'0,59'1068] local-lis/les=58/59 n=6 ec=53/34 lis/c=58/53 les/c/f=59/55/0 sis=60 pruub=15.007144928s) [2] r=-1 lpr=60 pi=[53,60)/1 crt=56'1062 lcod 59'1067 mlcod 0'0 unknown NOTIFY pruub 213.622756958s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 09:38:10 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 60 pg[10.f( v 40'1059 (0'0,40'1059] local-lis/les=58/59 n=6 ec=53/34 lis/c=58/53 les/c/f=59/55/0 sis=60 pruub=15.006485939s) [2] async=[2] r=-1 lpr=60 pi=[53,60)/1 crt=40'1059 lcod 0'0 mlcod 0'0 active pruub 213.622177124s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 09:38:10 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 60 pg[10.f( v 40'1059 (0'0,40'1059] local-lis/les=58/59 n=6 ec=53/34 lis/c=58/53 les/c/f=59/55/0 sis=60 pruub=15.006444931s) [2] r=-1 lpr=60 pi=[53,60)/1 crt=40'1059 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 213.622177124s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 09:38:10 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 60 pg[10.d( v 40'1059 (0'0,40'1059] local-lis/les=58/59 n=6 ec=53/34 lis/c=58/53 les/c/f=59/55/0 sis=60 pruub=15.006690979s) [2] async=[2] r=-1 lpr=60 pi=[53,60)/1 crt=40'1059 lcod 0'0 mlcod 0'0 active pruub 213.622467041s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 09:38:10 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 60 pg[10.d( v 40'1059 (0'0,40'1059] local-lis/les=58/59 n=6 ec=53/34 lis/c=58/53 les/c/f=59/55/0 sis=60 pruub=15.006664276s) [2] r=-1 lpr=60 pi=[53,60)/1 crt=40'1059 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 213.622467041s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 09:38:10 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 60 pg[10.b( v 40'1059 (0'0,40'1059] local-lis/les=58/59 n=6 ec=53/34 lis/c=58/53 les/c/f=59/55/0 sis=60 pruub=15.006199837s) [2] async=[2] r=-1 lpr=60 pi=[53,60)/1 crt=40'1059 lcod 0'0 mlcod 0'0 active pruub 213.622085571s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 09:38:10 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 60 pg[10.b( v 40'1059 (0'0,40'1059] local-lis/les=58/59 n=6 ec=53/34 lis/c=58/53 les/c/f=59/55/0 sis=60 pruub=15.006181717s) [2] r=-1 lpr=60 pi=[53,60)/1 crt=40'1059 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 213.622085571s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 09:38:10 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 60 pg[10.3( v 40'1059 (0'0,40'1059] local-lis/les=58/59 n=6 ec=53/34 lis/c=58/53 les/c/f=59/55/0 sis=60 pruub=15.006824493s) [2] async=[2] r=-1 lpr=60 pi=[53,60)/1 crt=40'1059 lcod 0'0 mlcod 0'0 active pruub 213.622589111s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 09:38:10 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 60 pg[6.a( v 41'42 (0'0,41'42] local-lis/les=59/60 n=1 ec=49/14 lis/c=49/49 les/c/f=50/50/0 sis=59) [0] r=0 lpr=59 pi=[49,59)/1 crt=41'42 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:38:10 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 60 pg[6.2( v 41'42 (0'0,41'42] local-lis/les=59/60 n=2 ec=49/14 lis/c=49/49 les/c/f=50/50/0 sis=59) [0] r=0 lpr=59 pi=[49,59)/1 crt=41'42 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:38:10 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 60 pg[6.6( v 41'42 lc 0'0 (0'0,41'42] local-lis/les=59/60 n=2 ec=49/14 lis/c=49/49 les/c/f=50/50/0 sis=59) [0] r=0 lpr=59 pi=[49,59)/1 crt=41'42 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:38:10 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 60 pg[6.e( v 41'42 lc 35'10 (0'0,41'42] local-lis/les=59/60 n=1 ec=49/14 lis/c=49/49 les/c/f=50/50/0 sis=59) [0] r=0 lpr=59 pi=[49,59)/1 crt=41'42 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:38:10 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 60 pg[10.7( v 40'1059 (0'0,40'1059] local-lis/les=58/59 n=6 ec=53/34 lis/c=58/53 les/c/f=59/55/0 sis=60 pruub=15.007172585s) [2] r=-1 lpr=60 pi=[53,60)/1 crt=40'1059 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 213.622451782s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 09:38:10 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 60 pg[10.3( v 40'1059 (0'0,40'1059] local-lis/les=58/59 n=6 ec=53/34 lis/c=58/53 les/c/f=59/55/0 sis=60 pruub=15.005962372s) [2] r=-1 lpr=60 pi=[53,60)/1 crt=40'1059 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 213.622589111s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 09:38:10 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:38:10 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:38:10 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:38:10.530 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:38:10 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e60 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  9 09:38:10 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[19552]: 09/10/2025 09:38:10 : epoch 68e78273 : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc73c001d50 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:38:11 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e61 e61: 3 total, 3 up, 3 in
Oct  9 09:38:11 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 61 pg[10.13( v 40'1059 (0'0,40'1059] local-lis/les=58/59 n=5 ec=53/34 lis/c=58/53 les/c/f=59/55/0 sis=61 pruub=14.168472290s) [2] async=[2] r=-1 lpr=61 pi=[53,61)/1 crt=40'1059 lcod 0'0 mlcod 0'0 active pruub 213.623825073s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 09:38:11 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 61 pg[10.13( v 40'1059 (0'0,40'1059] local-lis/les=58/59 n=5 ec=53/34 lis/c=58/53 les/c/f=59/55/0 sis=61 pruub=14.168426514s) [2] r=-1 lpr=61 pi=[53,61)/1 crt=40'1059 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 213.623825073s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 09:38:11 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 61 pg[10.11( v 40'1059 (0'0,40'1059] local-lis/les=58/59 n=6 ec=53/34 lis/c=58/53 les/c/f=59/55/0 sis=61 pruub=14.167302132s) [2] async=[2] r=-1 lpr=61 pi=[53,61)/1 crt=40'1059 lcod 0'0 mlcod 0'0 active pruub 213.623062134s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 09:38:11 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 61 pg[10.1b( v 40'1059 (0'0,40'1059] local-lis/les=58/59 n=5 ec=53/34 lis/c=58/53 les/c/f=59/55/0 sis=61 pruub=14.167387009s) [2] async=[2] r=-1 lpr=61 pi=[53,61)/1 crt=40'1059 lcod 0'0 mlcod 0'0 active pruub 213.623184204s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 09:38:11 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 61 pg[10.1b( v 40'1059 (0'0,40'1059] local-lis/les=58/59 n=5 ec=53/34 lis/c=58/53 les/c/f=59/55/0 sis=61 pruub=14.167368889s) [2] r=-1 lpr=61 pi=[53,61)/1 crt=40'1059 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 213.623184204s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 09:38:11 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 61 pg[10.11( v 40'1059 (0'0,40'1059] local-lis/les=58/59 n=6 ec=53/34 lis/c=58/53 les/c/f=59/55/0 sis=61 pruub=14.166993141s) [2] r=-1 lpr=61 pi=[53,61)/1 crt=40'1059 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 213.623062134s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 09:38:11 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 61 pg[10.1d( v 40'1059 (0'0,40'1059] local-lis/les=58/59 n=5 ec=53/34 lis/c=58/53 les/c/f=59/55/0 sis=61 pruub=14.166310310s) [2] async=[2] r=-1 lpr=61 pi=[53,61)/1 crt=40'1059 lcod 0'0 mlcod 0'0 active pruub 213.622604370s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 09:38:11 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 61 pg[10.9( v 40'1059 (0'0,40'1059] local-lis/les=58/59 n=6 ec=53/34 lis/c=58/53 les/c/f=59/55/0 sis=61 pruub=14.166106224s) [2] async=[2] r=-1 lpr=61 pi=[53,61)/1 crt=40'1059 lcod 0'0 mlcod 0'0 active pruub 213.622558594s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 09:38:11 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 61 pg[10.1d( v 40'1059 (0'0,40'1059] local-lis/les=58/59 n=5 ec=53/34 lis/c=58/53 les/c/f=59/55/0 sis=61 pruub=14.166025162s) [2] r=-1 lpr=61 pi=[53,61)/1 crt=40'1059 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 213.622604370s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 09:38:11 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 61 pg[10.9( v 40'1059 (0'0,40'1059] local-lis/les=58/59 n=6 ec=53/34 lis/c=58/53 les/c/f=59/55/0 sis=61 pruub=14.165900230s) [2] r=-1 lpr=61 pi=[53,61)/1 crt=40'1059 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 213.622558594s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 09:38:11 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 61 pg[10.19( v 40'1059 (0'0,40'1059] local-lis/les=58/59 n=5 ec=53/34 lis/c=58/53 les/c/f=59/55/0 sis=61 pruub=14.165554047s) [2] async=[2] r=-1 lpr=61 pi=[53,61)/1 crt=40'1059 lcod 0'0 mlcod 0'0 active pruub 213.622817993s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 09:38:11 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 61 pg[10.19( v 40'1059 (0'0,40'1059] local-lis/les=58/59 n=5 ec=53/34 lis/c=58/53 les/c/f=59/55/0 sis=61 pruub=14.165366173s) [2] r=-1 lpr=61 pi=[53,61)/1 crt=40'1059 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 213.622817993s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 09:38:11 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 61 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=58/59 n=5 ec=53/34 lis/c=58/53 les/c/f=59/55/0 sis=61 pruub=14.165130615s) [2] async=[2] r=-1 lpr=61 pi=[53,61)/1 crt=40'1059 lcod 0'0 mlcod 0'0 active pruub 213.622711182s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 09:38:11 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 61 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=58/59 n=5 ec=53/34 lis/c=58/53 les/c/f=59/55/0 sis=61 pruub=14.164980888s) [2] r=-1 lpr=61 pi=[53,61)/1 crt=40'1059 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 213.622711182s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 09:38:11 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 61 pg[6.f( empty local-lis/les=0/0 n=0 ec=49/14 lis/c=57/57 les/c/f=58/59/0 sis=61) [0] r=0 lpr=61 pi=[57,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:38:11 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 61 pg[6.b( empty local-lis/les=0/0 n=0 ec=49/14 lis/c=57/57 les/c/f=58/58/0 sis=61) [0] r=0 lpr=61 pi=[57,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:38:11 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 61 pg[6.3( empty local-lis/les=0/0 n=0 ec=49/14 lis/c=57/57 les/c/f=58/59/0 sis=61) [0] r=0 lpr=61 pi=[57,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:38:11 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 61 pg[6.7( empty local-lis/les=0/0 n=0 ec=49/14 lis/c=57/57 les/c/f=58/59/0 sis=61) [0] r=0 lpr=61 pi=[57,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:38:11 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 61 pg[10.1e( v 40'1059 (0'0,40'1059] local-lis/les=60/61 n=5 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=60) [1]/[0] async=[1] r=0 lpr=60 pi=[53,60)/1 crt=40'1059 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:38:11 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 61 pg[10.16( v 40'1059 (0'0,40'1059] local-lis/les=60/61 n=5 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=60) [1]/[0] async=[1] r=0 lpr=60 pi=[53,60)/1 crt=40'1059 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:38:11 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 61 pg[10.2( v 40'1059 (0'0,40'1059] local-lis/les=60/61 n=6 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=60) [1]/[0] async=[1] r=0 lpr=60 pi=[53,60)/1 crt=40'1059 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:38:11 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 61 pg[10.e( v 40'1059 (0'0,40'1059] local-lis/les=60/61 n=6 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=60) [1]/[0] async=[1] r=0 lpr=60 pi=[53,60)/1 crt=40'1059 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:38:11 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 61 pg[10.a( v 40'1059 (0'0,40'1059] local-lis/les=60/61 n=6 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=60) [1]/[0] async=[1] r=0 lpr=60 pi=[53,60)/1 crt=40'1059 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=9}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:38:11 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 61 pg[10.6( v 40'1059 (0'0,40'1059] local-lis/les=60/61 n=6 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=60) [1]/[0] async=[1] r=0 lpr=60 pi=[53,60)/1 crt=40'1059 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=6}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:38:11 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 61 pg[10.12( v 40'1059 (0'0,40'1059] local-lis/les=60/61 n=4 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=60) [1]/[0] async=[1] r=0 lpr=60 pi=[53,60)/1 crt=40'1059 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:38:11 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 61 pg[10.1a( v 40'1059 (0'0,40'1059] local-lis/les=60/61 n=5 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=60) [1]/[0] async=[1] r=0 lpr=60 pi=[53,60)/1 crt=40'1059 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:38:11 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "4"}]: dispatch
Oct  9 09:38:11 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"}]: dispatch
Oct  9 09:38:11 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "4"}]': finished
Oct  9 09:38:11 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"}]': finished
Oct  9 09:38:11 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:38:11 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:38:11 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:38:11.354 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:38:11 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 10.10 scrub starts
Oct  9 09:38:11 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 10.10 scrub ok
Oct  9 09:38:11 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-haproxy-nfs-cephfs-compute-1-oqhtjo[16451]: [WARNING] 281/093811 (4) : Server backend/nfs.cephfs.0 is UP, reason: Layer4 check passed, check duration: 0ms. 2 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Oct  9 09:38:11 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[19552]: 09/10/2025 09:38:11 : epoch 68e78273 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc73c001d50 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:38:12 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e62 e62: 3 total, 3 up, 3 in
Oct  9 09:38:12 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 62 pg[10.16( v 40'1059 (0'0,40'1059] local-lis/les=60/61 n=5 ec=53/34 lis/c=60/53 les/c/f=61/55/0 sis=62 pruub=15.150447845s) [1] async=[1] r=-1 lpr=62 pi=[53,62)/1 crt=40'1059 lcod 0'0 mlcod 0'0 active pruub 215.609497070s@ mbc={255={}}] PeeringState::start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 09:38:12 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 62 pg[10.16( v 40'1059 (0'0,40'1059] local-lis/les=60/61 n=5 ec=53/34 lis/c=60/53 les/c/f=61/55/0 sis=62 pruub=15.150403976s) [1] r=-1 lpr=62 pi=[53,62)/1 crt=40'1059 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 215.609497070s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 09:38:12 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 62 pg[10.1a( v 40'1059 (0'0,40'1059] local-lis/les=60/61 n=5 ec=53/34 lis/c=60/53 les/c/f=61/55/0 sis=62 pruub=15.150182724s) [1] async=[1] r=-1 lpr=62 pi=[53,62)/1 crt=40'1059 lcod 0'0 mlcod 0'0 active pruub 215.609558105s@ mbc={255={}}] PeeringState::start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 09:38:12 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 62 pg[10.1a( v 40'1059 (0'0,40'1059] local-lis/les=60/61 n=5 ec=53/34 lis/c=60/53 les/c/f=61/55/0 sis=62 pruub=15.150151253s) [1] r=-1 lpr=62 pi=[53,62)/1 crt=40'1059 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 215.609558105s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 09:38:12 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 62 pg[10.2( v 40'1059 (0'0,40'1059] local-lis/les=60/61 n=6 ec=53/34 lis/c=60/53 les/c/f=61/55/0 sis=62 pruub=15.149627686s) [1] async=[1] r=-1 lpr=62 pi=[53,62)/1 crt=40'1059 lcod 0'0 mlcod 0'0 active pruub 215.609512329s@ mbc={255={}}] PeeringState::start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 09:38:12 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 62 pg[10.2( v 40'1059 (0'0,40'1059] local-lis/les=60/61 n=6 ec=53/34 lis/c=60/53 les/c/f=61/55/0 sis=62 pruub=15.149598122s) [1] r=-1 lpr=62 pi=[53,62)/1 crt=40'1059 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 215.609512329s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 09:38:12 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 62 pg[10.e( v 40'1059 (0'0,40'1059] local-lis/les=60/61 n=6 ec=53/34 lis/c=60/53 les/c/f=61/55/0 sis=62 pruub=15.149128914s) [1] async=[1] r=-1 lpr=62 pi=[53,62)/1 crt=40'1059 lcod 0'0 mlcod 0'0 active pruub 215.609497070s@ mbc={255={}}] PeeringState::start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 09:38:12 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 62 pg[10.e( v 40'1059 (0'0,40'1059] local-lis/les=60/61 n=6 ec=53/34 lis/c=60/53 les/c/f=61/55/0 sis=62 pruub=15.149096489s) [1] r=-1 lpr=62 pi=[53,62)/1 crt=40'1059 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 215.609497070s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 09:38:12 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 62 pg[10.6( v 40'1059 (0'0,40'1059] local-lis/les=60/61 n=6 ec=53/34 lis/c=60/53 les/c/f=61/55/0 sis=62 pruub=15.149672508s) [1] async=[1] r=-1 lpr=62 pi=[53,62)/1 crt=40'1059 lcod 0'0 mlcod 0'0 active pruub 215.610260010s@ mbc={255={}}] PeeringState::start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 09:38:12 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 62 pg[10.6( v 40'1059 (0'0,40'1059] local-lis/les=60/61 n=6 ec=53/34 lis/c=60/53 les/c/f=61/55/0 sis=62 pruub=15.149634361s) [1] r=-1 lpr=62 pi=[53,62)/1 crt=40'1059 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 215.610260010s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 09:38:12 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 62 pg[10.a( v 40'1059 (0'0,40'1059] local-lis/les=60/61 n=6 ec=53/34 lis/c=60/53 les/c/f=61/55/0 sis=62 pruub=15.148990631s) [1] async=[1] r=-1 lpr=62 pi=[53,62)/1 crt=40'1059 lcod 0'0 mlcod 0'0 active pruub 215.609817505s@ mbc={255={}}] PeeringState::start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 09:38:12 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 62 pg[10.a( v 40'1059 (0'0,40'1059] local-lis/les=60/61 n=6 ec=53/34 lis/c=60/53 les/c/f=61/55/0 sis=62 pruub=15.148949623s) [1] r=-1 lpr=62 pi=[53,62)/1 crt=40'1059 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 215.609817505s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 09:38:12 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 62 pg[10.1e( v 40'1059 (0'0,40'1059] local-lis/les=60/61 n=5 ec=53/34 lis/c=60/53 les/c/f=61/55/0 sis=62 pruub=15.148059845s) [1] async=[1] r=-1 lpr=62 pi=[53,62)/1 crt=40'1059 lcod 0'0 mlcod 0'0 active pruub 215.609481812s@ mbc={255={}}] PeeringState::start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 09:38:12 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 62 pg[10.1e( v 40'1059 (0'0,40'1059] local-lis/les=60/61 n=5 ec=53/34 lis/c=60/53 les/c/f=61/55/0 sis=62 pruub=15.148029327s) [1] r=-1 lpr=62 pi=[53,62)/1 crt=40'1059 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 215.609481812s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 09:38:12 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 62 pg[10.12( v 40'1059 (0'0,40'1059] local-lis/les=60/61 n=4 ec=53/34 lis/c=60/53 les/c/f=61/55/0 sis=62 pruub=15.147952080s) [1] async=[1] r=-1 lpr=62 pi=[53,62)/1 crt=40'1059 lcod 0'0 mlcod 0'0 active pruub 215.609542847s@ mbc={255={}}] PeeringState::start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 09:38:12 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 62 pg[10.12( v 40'1059 (0'0,40'1059] local-lis/les=60/61 n=4 ec=53/34 lis/c=60/53 les/c/f=61/55/0 sis=62 pruub=15.147924423s) [1] r=-1 lpr=62 pi=[53,62)/1 crt=40'1059 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 215.609542847s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 09:38:12 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 62 pg[6.f( v 41'42 lc 35'1 (0'0,41'42] local-lis/les=61/62 n=1 ec=49/14 lis/c=57/57 les/c/f=58/59/0 sis=61) [0] r=0 lpr=61 pi=[57,61)/1 crt=41'42 lcod 0'0 mlcod 0'0 active+degraded m=3 mbc={255={(0+1)=3}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:38:12 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 62 pg[6.3( v 41'42 lc 0'0 (0'0,41'42] local-lis/les=61/62 n=2 ec=49/14 lis/c=57/57 les/c/f=58/59/0 sis=61) [0] r=0 lpr=61 pi=[57,61)/1 crt=41'42 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:38:12 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 62 pg[6.7( v 41'42 lc 35'11 (0'0,41'42] local-lis/les=61/62 n=1 ec=49/14 lis/c=57/57 les/c/f=58/59/0 sis=61) [0] r=0 lpr=61 pi=[57,61)/1 crt=41'42 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:38:12 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 62 pg[6.b( v 41'42 lc 0'0 (0'0,41'42] local-lis/les=61/62 n=1 ec=49/14 lis/c=57/57 les/c/f=58/58/0 sis=61) [0] r=0 lpr=61 pi=[57,61)/1 crt=41'42 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:38:12 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[19552]: 09/10/2025 09:38:12 : epoch 68e78273 : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc734002910 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:38:12 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[19552]: 09/10/2025 09:38:12 : epoch 68e78273 : compute-1 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 09:38:12 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[19552]: 09/10/2025 09:38:12 : epoch 68e78273 : compute-1 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 09:38:12 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:38:12 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:38:12 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:38:12.532 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:38:12 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[19552]: 09/10/2025 09:38:12 : epoch 68e78273 : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc734002910 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:38:13 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e63 e63: 3 total, 3 up, 3 in
Oct  9 09:38:13 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:38:13 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:38:13 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:38:13.356 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:38:13 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 6.f scrub starts
Oct  9 09:38:13 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 6.f scrub ok
Oct  9 09:38:13 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[19552]: 09/10/2025 09:38:13 : epoch 68e78273 : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc73c002fc0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:38:14 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 6.7 scrub starts
Oct  9 09:38:14 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 6.7 scrub ok
Oct  9 09:38:14 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[19552]: 09/10/2025 09:38:14 : epoch 68e78273 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc73c002fc0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:38:14 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:38:14 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:38:14 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:38:14.535 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:38:14 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[19552]: 09/10/2025 09:38:14 : epoch 68e78273 : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc734002910 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:38:15 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 7.12 scrub starts
Oct  9 09:38:15 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 7.12 scrub ok
Oct  9 09:38:15 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:38:15 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[19552]: 09/10/2025 09:38:15 : epoch 68e78273 : compute-1 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Oct  9 09:38:15 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:38:15 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000007s ======
Oct  9 09:38:15 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:38:15.358 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000007s
Oct  9 09:38:15 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[19552]: 09/10/2025 09:38:15 : epoch 68e78273 : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc734002910 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:38:15 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e63 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  9 09:38:16 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 12.14 scrub starts
Oct  9 09:38:16 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 12.14 scrub ok
Oct  9 09:38:16 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[19552]: 09/10/2025 09:38:16 : epoch 68e78273 : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc73c003ec0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:38:16 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:38:16 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:38:16 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:38:16.539 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:38:16 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[19552]: 09/10/2025 09:38:16 : epoch 68e78273 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc734003e00 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:38:17 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 12.1f deep-scrub starts
Oct  9 09:38:17 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 12.1f deep-scrub ok
Oct  9 09:38:17 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "5"}]: dispatch
Oct  9 09:38:17 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"}]: dispatch
Oct  9 09:38:17 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e64 e64: 3 total, 3 up, 3 in
Oct  9 09:38:17 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:38:17 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:38:17 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:38:17.361 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:38:17 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[19552]: 09/10/2025 09:38:17 : epoch 68e78273 : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc734003e00 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:38:18 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 12.d scrub starts
Oct  9 09:38:18 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 12.d scrub ok
Oct  9 09:38:18 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[19552]: 09/10/2025 09:38:18 : epoch 68e78273 : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc734003e00 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:38:18 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 64 pg[10.14( v 40'1059 (0'0,40'1059] local-lis/les=53/55 n=5 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=64 pruub=14.993220329s) [2] r=-1 lpr=64 pi=[53,64)/1 crt=40'1059 lcod 0'0 mlcod 0'0 active pruub 221.555191040s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 09:38:18 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 64 pg[10.14( v 40'1059 (0'0,40'1059] local-lis/les=53/55 n=5 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=64 pruub=14.993190765s) [2] r=-1 lpr=64 pi=[53,64)/1 crt=40'1059 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 221.555191040s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 09:38:18 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 64 pg[10.4( v 56'1062 (0'0,56'1062] local-lis/les=53/55 n=6 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=64 pruub=14.993136406s) [2] r=-1 lpr=64 pi=[53,64)/1 crt=56'1060 lcod 56'1061 mlcod 56'1061 active pruub 221.555191040s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 09:38:18 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 64 pg[10.4( v 56'1062 (0'0,56'1062] local-lis/les=53/55 n=6 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=64 pruub=14.993096352s) [2] r=-1 lpr=64 pi=[53,64)/1 crt=56'1060 lcod 56'1061 mlcod 0'0 unknown NOTIFY pruub 221.555191040s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 09:38:18 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 64 pg[10.c( v 40'1059 (0'0,40'1059] local-lis/les=53/55 n=6 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=64 pruub=14.992700577s) [2] r=-1 lpr=64 pi=[53,64)/1 crt=40'1059 lcod 0'0 mlcod 0'0 active pruub 221.555023193s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 09:38:18 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 64 pg[10.c( v 40'1059 (0'0,40'1059] local-lis/les=53/55 n=6 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=64 pruub=14.992686272s) [2] r=-1 lpr=64 pi=[53,64)/1 crt=40'1059 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 221.555023193s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 09:38:18 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 64 pg[10.1c( v 40'1059 (0'0,40'1059] local-lis/les=53/55 n=5 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=64 pruub=14.991698265s) [2] r=-1 lpr=64 pi=[53,64)/1 crt=40'1059 lcod 0'0 mlcod 0'0 active pruub 221.554428101s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 09:38:18 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 64 pg[10.1c( v 40'1059 (0'0,40'1059] local-lis/les=53/55 n=5 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=64 pruub=14.991680145s) [2] r=-1 lpr=64 pi=[53,64)/1 crt=40'1059 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 221.554428101s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 09:38:18 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e65 e65: 3 total, 3 up, 3 in
Oct  9 09:38:18 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 65 pg[10.1c( v 40'1059 (0'0,40'1059] local-lis/les=53/55 n=5 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=65) [2]/[0] r=0 lpr=65 pi=[53,65)/1 crt=40'1059 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 09:38:18 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 65 pg[10.c( v 40'1059 (0'0,40'1059] local-lis/les=53/55 n=6 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=65) [2]/[0] r=0 lpr=65 pi=[53,65)/1 crt=40'1059 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 09:38:18 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 65 pg[10.1c( v 40'1059 (0'0,40'1059] local-lis/les=53/55 n=5 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=65) [2]/[0] r=0 lpr=65 pi=[53,65)/1 crt=40'1059 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct  9 09:38:18 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 65 pg[10.c( v 40'1059 (0'0,40'1059] local-lis/les=53/55 n=6 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=65) [2]/[0] r=0 lpr=65 pi=[53,65)/1 crt=40'1059 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct  9 09:38:18 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 65 pg[10.14( v 40'1059 (0'0,40'1059] local-lis/les=53/55 n=5 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=65) [2]/[0] r=0 lpr=65 pi=[53,65)/1 crt=40'1059 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 09:38:18 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 65 pg[10.14( v 40'1059 (0'0,40'1059] local-lis/les=53/55 n=5 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=65) [2]/[0] r=0 lpr=65 pi=[53,65)/1 crt=40'1059 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct  9 09:38:18 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 65 pg[10.4( v 56'1062 (0'0,56'1062] local-lis/les=53/55 n=6 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=65) [2]/[0] r=0 lpr=65 pi=[53,65)/1 crt=56'1060 lcod 56'1061 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 09:38:18 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 65 pg[10.4( v 56'1062 (0'0,56'1062] local-lis/les=53/55 n=6 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=65) [2]/[0] r=0 lpr=65 pi=[53,65)/1 crt=56'1060 lcod 56'1061 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct  9 09:38:18 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "5"}]': finished
Oct  9 09:38:18 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"}]': finished
Oct  9 09:38:18 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-haproxy-nfs-cephfs-compute-1-oqhtjo[16451]: [WARNING] 281/093818 (4) : Server backend/nfs.cephfs.1 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Oct  9 09:38:18 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:38:18 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:38:18 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:38:18.542 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:38:18 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[19552]: 09/10/2025 09:38:18 : epoch 68e78273 : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc734003e00 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:38:19 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 7.0 deep-scrub starts
Oct  9 09:38:19 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 7.0 deep-scrub ok
Oct  9 09:38:19 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e66 e66: 3 total, 3 up, 3 in
Oct  9 09:38:19 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "6"}]: dispatch
Oct  9 09:38:19 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"}]: dispatch
Oct  9 09:38:19 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "6"}]': finished
Oct  9 09:38:19 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"}]': finished
Oct  9 09:38:19 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:38:19 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000007s ======
Oct  9 09:38:19 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:38:19.363 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000007s
Oct  9 09:38:19 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[19552]: 09/10/2025 09:38:19 : epoch 68e78273 : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc734003e00 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:38:20 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 7.7 scrub starts
Oct  9 09:38:20 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 7.7 scrub ok
Oct  9 09:38:20 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 66 pg[6.5( empty local-lis/les=0/0 n=0 ec=49/14 lis/c=57/57 les/c/f=58/59/0 sis=66) [0] r=0 lpr=66 pi=[57,66)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:38:20 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 66 pg[10.c( v 40'1059 (0'0,40'1059] local-lis/les=65/66 n=6 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=65) [2]/[0] async=[2] r=0 lpr=65 pi=[53,65)/1 crt=40'1059 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:38:20 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 66 pg[6.d( empty local-lis/les=0/0 n=0 ec=49/14 lis/c=57/57 les/c/f=58/59/0 sis=66) [0] r=0 lpr=66 pi=[57,66)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:38:20 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 66 pg[10.4( v 56'1062 (0'0,56'1062] local-lis/les=65/66 n=6 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=65) [2]/[0] async=[2] r=0 lpr=65 pi=[53,65)/1 crt=56'1062 lcod 56'1061 mlcod 0'0 active+remapped mbc={255={(0+1)=10}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:38:20 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 66 pg[10.14( v 40'1059 (0'0,40'1059] local-lis/les=65/66 n=5 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=65) [2]/[0] async=[2] r=0 lpr=65 pi=[53,65)/1 crt=40'1059 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:38:20 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 66 pg[10.1c( v 40'1059 (0'0,40'1059] local-lis/les=65/66 n=5 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=65) [2]/[0] async=[2] r=0 lpr=65 pi=[53,65)/1 crt=40'1059 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:38:20 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[19552]: 09/10/2025 09:38:20 : epoch 68e78273 : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc734003e00 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:38:20 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e67 e67: 3 total, 3 up, 3 in
Oct  9 09:38:20 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "7"}]: dispatch
Oct  9 09:38:20 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"}]: dispatch
Oct  9 09:38:20 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 67 pg[10.e( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=62/62 les/c/f=63/63/0 sis=67) [0] r=0 lpr=67 pi=[62,67)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:38:20 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 67 pg[10.16( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=62/62 les/c/f=63/63/0 sis=67) [0] r=0 lpr=67 pi=[62,67)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:38:20 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 67 pg[10.14( v 40'1059 (0'0,40'1059] local-lis/les=65/66 n=5 ec=53/34 lis/c=65/53 les/c/f=66/55/0 sis=65) [2]/[0] async=[2] r=0 lpr=65 pi=[53,65)/1 crt=40'1059 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=5}}] scrubber<NotActive>: update_scrub_job !!! primary but not scheduled! 
Oct  9 09:38:20 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 67 pg[10.4( v 66'1068 (0'0,66'1068] local-lis/les=65/66 n=6 ec=53/34 lis/c=65/53 les/c/f=66/55/0 sis=67 pruub=15.846351624s) [2] async=[2] r=-1 lpr=67 pi=[53,67)/1 crt=56'1062 lcod 66'1067 mlcod 66'1067 active pruub 224.549911499s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 09:38:20 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 67 pg[10.c( v 40'1059 (0'0,40'1059] local-lis/les=65/66 n=6 ec=53/34 lis/c=65/53 les/c/f=66/55/0 sis=67 pruub=15.845858574s) [2] async=[2] r=-1 lpr=67 pi=[53,67)/1 crt=40'1059 lcod 0'0 mlcod 0'0 active pruub 224.549484253s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 09:38:20 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 67 pg[6.e( v 41'42 (0'0,41'42] local-lis/les=59/60 n=1 ec=49/14 lis/c=59/59 les/c/f=60/60/0 sis=67 pruub=13.912847519s) [1] r=-1 lpr=67 pi=[59,67)/1 crt=41'42 mlcod 41'42 active pruub 222.616333008s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 09:38:20 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 67 pg[10.c( v 40'1059 (0'0,40'1059] local-lis/les=65/66 n=6 ec=53/34 lis/c=65/53 les/c/f=66/55/0 sis=67 pruub=15.845821381s) [2] r=-1 lpr=67 pi=[53,67)/1 crt=40'1059 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 224.549484253s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 09:38:20 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 67 pg[6.e( v 41'42 (0'0,41'42] local-lis/les=59/60 n=1 ec=49/14 lis/c=59/59 les/c/f=60/60/0 sis=67 pruub=13.912611961s) [1] r=-1 lpr=67 pi=[59,67)/1 crt=41'42 mlcod 0'0 unknown NOTIFY pruub 222.616333008s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 09:38:20 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 67 pg[10.4( v 66'1068 (0'0,66'1068] local-lis/les=65/66 n=6 ec=53/34 lis/c=65/53 les/c/f=66/55/0 sis=67 pruub=15.846072197s) [2] r=-1 lpr=67 pi=[53,67)/1 crt=56'1062 lcod 66'1067 mlcod 0'0 unknown NOTIFY pruub 224.549911499s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 09:38:20 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 67 pg[6.6( v 41'42 (0'0,41'42] local-lis/les=59/60 n=2 ec=49/14 lis/c=59/59 les/c/f=60/60/0 sis=67 pruub=13.912308693s) [1] r=-1 lpr=67 pi=[59,67)/1 crt=41'42 mlcod 41'42 active pruub 222.616333008s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 09:38:20 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 67 pg[10.1c( v 40'1059 (0'0,40'1059] local-lis/les=65/66 n=5 ec=53/34 lis/c=65/53 les/c/f=66/55/0 sis=65) [2]/[0] async=[2] r=0 lpr=65 pi=[53,65)/1 crt=40'1059 lcod 0'0 mlcod 0'0 active+recovering+remapped mbc={255={(0+1)=2}}] scrubber<NotActive>: update_scrub_job !!! primary but not scheduled! 
Oct  9 09:38:20 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 67 pg[6.6( v 41'42 (0'0,41'42] local-lis/les=59/60 n=2 ec=49/14 lis/c=59/59 les/c/f=60/60/0 sis=67 pruub=13.912289619s) [1] r=-1 lpr=67 pi=[59,67)/1 crt=41'42 mlcod 0'0 unknown NOTIFY pruub 222.616333008s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 09:38:20 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 67 pg[10.1e( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=62/62 les/c/f=63/63/0 sis=67) [0] r=0 lpr=67 pi=[62,67)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:38:20 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 67 pg[10.6( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=62/62 les/c/f=63/63/0 sis=67) [0] r=0 lpr=67 pi=[62,67)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:38:20 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 67 pg[6.5( v 41'42 lc 35'6 (0'0,41'42] local-lis/les=66/67 n=2 ec=49/14 lis/c=57/57 les/c/f=58/59/0 sis=66) [0] r=0 lpr=66 pi=[57,66)/1 crt=41'42 lcod 0'0 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:38:20 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 67 pg[6.d( v 41'42 lc 35'7 (0'0,41'42] local-lis/les=66/67 n=1 ec=49/14 lis/c=57/57 les/c/f=58/59/0 sis=66) [0] r=0 lpr=66 pi=[57,66)/1 crt=41'42 lcod 0'0 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:38:20 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:38:20 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:38:20 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:38:20.544 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:38:20 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e67 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  9 09:38:20 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[19552]: 09/10/2025 09:38:20 : epoch 68e78273 : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc73c0047e0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:38:20 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-haproxy-nfs-cephfs-compute-1-oqhtjo[16451]: [WARNING] 281/093820 (4) : Server backend/nfs.cephfs.2 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Oct  9 09:38:21 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e68 e68: 3 total, 3 up, 3 in
Oct  9 09:38:21 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 68 pg[10.16( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=62/62 les/c/f=63/63/0 sis=68) [0]/[1] r=-1 lpr=68 pi=[62,68)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 09:38:21 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 68 pg[10.16( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=62/62 les/c/f=63/63/0 sis=68) [0]/[1] r=-1 lpr=68 pi=[62,68)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct  9 09:38:21 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 68 pg[10.e( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=62/62 les/c/f=63/63/0 sis=68) [0]/[1] r=-1 lpr=68 pi=[62,68)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 09:38:21 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 68 pg[10.e( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=62/62 les/c/f=63/63/0 sis=68) [0]/[1] r=-1 lpr=68 pi=[62,68)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct  9 09:38:21 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 68 pg[10.14( v 40'1059 (0'0,40'1059] local-lis/les=65/66 n=5 ec=53/34 lis/c=65/53 les/c/f=66/55/0 sis=68 pruub=15.092641830s) [2] async=[2] r=-1 lpr=68 pi=[53,68)/1 crt=40'1059 lcod 0'0 mlcod 0'0 active pruub 224.550186157s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 09:38:21 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 68 pg[10.14( v 40'1059 (0'0,40'1059] local-lis/les=65/66 n=5 ec=53/34 lis/c=65/53 les/c/f=66/55/0 sis=68 pruub=15.092575073s) [2] r=-1 lpr=68 pi=[53,68)/1 crt=40'1059 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 224.550186157s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 09:38:21 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 68 pg[10.6( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=62/62 les/c/f=63/63/0 sis=68) [0]/[1] r=-1 lpr=68 pi=[62,68)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 09:38:21 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 68 pg[10.6( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=62/62 les/c/f=63/63/0 sis=68) [0]/[1] r=-1 lpr=68 pi=[62,68)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct  9 09:38:21 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 68 pg[10.1c( v 40'1059 (0'0,40'1059] local-lis/les=65/66 n=5 ec=53/34 lis/c=65/53 les/c/f=66/55/0 sis=68 pruub=15.091803551s) [2] async=[2] r=-1 lpr=68 pi=[53,68)/1 crt=40'1059 lcod 0'0 mlcod 0'0 active pruub 224.550109863s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 09:38:21 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 68 pg[10.1e( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=62/62 les/c/f=63/63/0 sis=68) [0]/[1] r=-1 lpr=68 pi=[62,68)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 09:38:21 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 68 pg[10.1c( v 40'1059 (0'0,40'1059] local-lis/les=65/66 n=5 ec=53/34 lis/c=65/53 les/c/f=66/55/0 sis=68 pruub=15.091763496s) [2] r=-1 lpr=68 pi=[53,68)/1 crt=40'1059 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 224.550109863s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 09:38:21 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 68 pg[10.1e( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=62/62 les/c/f=63/63/0 sis=68) [0]/[1] r=-1 lpr=68 pi=[62,68)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct  9 09:38:21 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 7.d scrub starts
Oct  9 09:38:21 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "7"}]': finished
Oct  9 09:38:21 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"}]': finished
Oct  9 09:38:21 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 7.d scrub ok
Oct  9 09:38:21 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:38:21 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000007s ======
Oct  9 09:38:21 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:38:21.366 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000007s
Oct  9 09:38:21 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[19552]: 09/10/2025 09:38:21 : epoch 68e78273 : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc73c0047e0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:38:22 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e69 e69: 3 total, 3 up, 3 in
Oct  9 09:38:22 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 12.0 scrub starts
Oct  9 09:38:22 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 12.0 scrub ok
Oct  9 09:38:22 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[19552]: 09/10/2025 09:38:22 : epoch 68e78273 : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc734003e00 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:38:22 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:38:22 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:38:22 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:38:22.546 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:38:22 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[19552]: 09/10/2025 09:38:22 : epoch 68e78273 : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc734003e00 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:38:23 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e70 e70: 3 total, 3 up, 3 in
Oct  9 09:38:23 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 70 pg[10.e( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=68/62 les/c/f=69/63/0 sis=70) [0] r=0 lpr=70 pi=[62,70)/1 luod=0'0 crt=40'1059 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 09:38:23 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 70 pg[10.16( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=4 ec=53/34 lis/c=68/62 les/c/f=69/63/0 sis=70) [0] r=0 lpr=70 pi=[62,70)/1 luod=0'0 crt=40'1059 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 09:38:23 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 70 pg[10.16( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=4 ec=53/34 lis/c=68/62 les/c/f=69/63/0 sis=70) [0] r=0 lpr=70 pi=[62,70)/1 crt=40'1059 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:38:23 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 70 pg[10.e( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=68/62 les/c/f=69/63/0 sis=70) [0] r=0 lpr=70 pi=[62,70)/1 crt=40'1059 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:38:23 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 70 pg[10.1e( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=68/62 les/c/f=69/63/0 sis=70) [0] r=0 lpr=70 pi=[62,70)/1 luod=0'0 crt=40'1059 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 09:38:23 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 70 pg[10.1e( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=68/62 les/c/f=69/63/0 sis=70) [0] r=0 lpr=70 pi=[62,70)/1 crt=40'1059 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:38:23 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 70 pg[10.6( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=6 ec=53/34 lis/c=68/62 les/c/f=69/63/0 sis=70) [0] r=0 lpr=70 pi=[62,70)/1 luod=0'0 crt=40'1059 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 09:38:23 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 70 pg[10.6( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=6 ec=53/34 lis/c=68/62 les/c/f=69/63/0 sis=70) [0] r=0 lpr=70 pi=[62,70)/1 crt=40'1059 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:38:23 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 7.1 scrub starts
Oct  9 09:38:23 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 7.1 scrub ok
Oct  9 09:38:23 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:38:23 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:38:23 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:38:23.370 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:38:23 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[19552]: 09/10/2025 09:38:23 : epoch 68e78273 : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc73c0047e0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:38:24 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e71 e71: 3 total, 3 up, 3 in
Oct  9 09:38:24 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 71 pg[10.16( v 40'1059 (0'0,40'1059] local-lis/les=70/71 n=4 ec=53/34 lis/c=68/62 les/c/f=69/63/0 sis=70) [0] r=0 lpr=70 pi=[62,70)/1 crt=40'1059 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:38:24 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 71 pg[10.e( v 40'1059 (0'0,40'1059] local-lis/les=70/71 n=5 ec=53/34 lis/c=68/62 les/c/f=69/63/0 sis=70) [0] r=0 lpr=70 pi=[62,70)/1 crt=40'1059 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:38:24 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 71 pg[10.1e( v 40'1059 (0'0,40'1059] local-lis/les=70/71 n=5 ec=53/34 lis/c=68/62 les/c/f=69/63/0 sis=70) [0] r=0 lpr=70 pi=[62,70)/1 crt=40'1059 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:38:24 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 71 pg[10.6( v 40'1059 (0'0,40'1059] local-lis/les=70/71 n=6 ec=53/34 lis/c=68/62 les/c/f=69/63/0 sis=70) [0] r=0 lpr=70 pi=[62,70)/1 crt=40'1059 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:38:24 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 12.f scrub starts
Oct  9 09:38:24 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 12.f scrub ok
Oct  9 09:38:24 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[19552]: 09/10/2025 09:38:24 : epoch 68e78273 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc740001c70 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:38:24 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:38:24 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:38:24 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:38:24.549 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:38:24 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[19552]: 09/10/2025 09:38:24 : epoch 68e78273 : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc734003e00 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:38:25 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 12.1 scrub starts
Oct  9 09:38:25 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 12.1 scrub ok
Oct  9 09:38:25 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:38:25 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:38:25 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:38:25.373 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:38:25 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[19552]: 09/10/2025 09:38:25 : epoch 68e78273 : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc734003e00 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:38:25 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e71 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  9 09:38:26 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 12.5 scrub starts
Oct  9 09:38:26 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 12.5 scrub ok
Oct  9 09:38:26 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[19552]: 09/10/2025 09:38:26 : epoch 68e78273 : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc73c0058e0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:38:26 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e72 e72: 3 total, 3 up, 3 in
Oct  9 09:38:26 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 72 pg[10.1f( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=61/61 les/c/f=62/62/0 sis=72) [0] r=0 lpr=72 pi=[61,72)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:38:26 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 72 pg[10.f( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=60/60 les/c/f=61/61/0 sis=72) [0] r=0 lpr=72 pi=[60,72)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:38:26 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 72 pg[10.7( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=60/60 les/c/f=61/61/0 sis=72) [0] r=0 lpr=72 pi=[60,72)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:38:26 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 72 pg[10.17( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=60/60 les/c/f=61/61/0 sis=72) [0] r=0 lpr=72 pi=[60,72)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:38:26 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "8"}]: dispatch
Oct  9 09:38:26 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"}]: dispatch
Oct  9 09:38:26 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:38:26 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:38:26 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:38:26.550 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:38:26 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[19552]: 09/10/2025 09:38:26 : epoch 68e78273 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc740001c70 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:38:27 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 12.1b scrub starts
Oct  9 09:38:27 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 12.1b scrub ok
Oct  9 09:38:27 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e73 e73: 3 total, 3 up, 3 in
Oct  9 09:38:27 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 73 pg[10.17( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=60/60 les/c/f=61/61/0 sis=73) [0]/[2] r=-1 lpr=73 pi=[60,73)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 09:38:27 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 73 pg[10.7( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=60/60 les/c/f=61/61/0 sis=73) [0]/[2] r=-1 lpr=73 pi=[60,73)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 09:38:27 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 73 pg[10.7( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=60/60 les/c/f=61/61/0 sis=73) [0]/[2] r=-1 lpr=73 pi=[60,73)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct  9 09:38:27 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 73 pg[10.f( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=60/60 les/c/f=61/61/0 sis=73) [0]/[2] r=-1 lpr=73 pi=[60,73)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 09:38:27 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 73 pg[10.f( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=60/60 les/c/f=61/61/0 sis=73) [0]/[2] r=-1 lpr=73 pi=[60,73)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct  9 09:38:27 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 73 pg[10.1f( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=61/61 les/c/f=62/62/0 sis=73) [0]/[2] r=-1 lpr=73 pi=[61,73)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 09:38:27 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 73 pg[10.1f( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=61/61 les/c/f=62/62/0 sis=73) [0]/[2] r=-1 lpr=73 pi=[61,73)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct  9 09:38:27 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 73 pg[10.17( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=60/60 les/c/f=61/61/0 sis=73) [0]/[2] r=-1 lpr=73 pi=[60,73)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct  9 09:38:27 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "8"}]': finished
Oct  9 09:38:27 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"}]': finished
Oct  9 09:38:27 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:38:27 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000008s ======
Oct  9 09:38:27 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:38:27.376 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000008s
Oct  9 09:38:27 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[19552]: 09/10/2025 09:38:27 : epoch 68e78273 : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc7340050a0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:38:28 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 7.15 scrub starts
Oct  9 09:38:28 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 7.15 scrub ok
Oct  9 09:38:28 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[19552]: 09/10/2025 09:38:28 : epoch 68e78273 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc7340050a0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:38:28 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e74 e74: 3 total, 3 up, 3 in
Oct  9 09:38:28 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 74 pg[6.8( empty local-lis/les=0/0 n=0 ec=49/14 lis/c=49/49 les/c/f=50/50/0 sis=74) [0] r=0 lpr=74 pi=[49,74)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:38:28 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 74 pg[10.18( v 40'1059 (0'0,40'1059] local-lis/les=53/55 n=5 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=74 pruub=12.809546471s) [1] r=-1 lpr=74 pi=[53,74)/1 crt=40'1059 lcod 0'0 mlcod 0'0 active pruub 229.555313110s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 09:38:28 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 74 pg[10.18( v 40'1059 (0'0,40'1059] local-lis/les=53/55 n=5 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=74 pruub=12.809524536s) [1] r=-1 lpr=74 pi=[53,74)/1 crt=40'1059 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 229.555313110s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 09:38:28 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 74 pg[10.8( v 40'1059 (0'0,40'1059] local-lis/les=53/55 n=6 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=74 pruub=12.808170319s) [1] r=-1 lpr=74 pi=[53,74)/1 crt=40'1059 lcod 0'0 mlcod 0'0 active pruub 229.554595947s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 09:38:28 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 74 pg[10.8( v 40'1059 (0'0,40'1059] local-lis/les=53/55 n=6 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=74 pruub=12.808053970s) [1] r=-1 lpr=74 pi=[53,74)/1 crt=40'1059 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 229.554595947s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 09:38:28 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "9"}]: dispatch
Oct  9 09:38:28 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"}]: dispatch
Oct  9 09:38:28 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "9"}]': finished
Oct  9 09:38:28 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"}]': finished
Oct  9 09:38:28 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:38:28 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:38:28 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:38:28.552 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:38:28 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[19552]: 09/10/2025 09:38:28 : epoch 68e78273 : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc73c0058e0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:38:28 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[19552]: 09/10/2025 09:38:28 : epoch 68e78273 : compute-1 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 09:38:29 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 12.16 scrub starts
Oct  9 09:38:29 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 12.16 scrub ok
Oct  9 09:38:29 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e75 e75: 3 total, 3 up, 3 in
Oct  9 09:38:29 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 75 pg[10.17( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=73/60 les/c/f=74/61/0 sis=75) [0] r=0 lpr=75 pi=[60,75)/1 luod=0'0 crt=40'1059 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 09:38:29 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 75 pg[10.18( v 40'1059 (0'0,40'1059] local-lis/les=53/55 n=5 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=75) [1]/[0] r=0 lpr=75 pi=[53,75)/1 crt=40'1059 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 09:38:29 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 75 pg[10.18( v 40'1059 (0'0,40'1059] local-lis/les=53/55 n=5 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=75) [1]/[0] r=0 lpr=75 pi=[53,75)/1 crt=40'1059 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct  9 09:38:29 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 75 pg[10.7( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=6 ec=53/34 lis/c=73/60 les/c/f=74/61/0 sis=75) [0] r=0 lpr=75 pi=[60,75)/1 luod=0'0 crt=40'1059 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 09:38:29 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 75 pg[10.7( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=6 ec=53/34 lis/c=73/60 les/c/f=74/61/0 sis=75) [0] r=0 lpr=75 pi=[60,75)/1 crt=40'1059 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:38:29 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 75 pg[10.17( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=73/60 les/c/f=74/61/0 sis=75) [0] r=0 lpr=75 pi=[60,75)/1 crt=40'1059 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:38:29 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 75 pg[10.f( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=6 ec=53/34 lis/c=73/60 les/c/f=74/61/0 sis=75) [0] r=0 lpr=75 pi=[60,75)/1 luod=0'0 crt=40'1059 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 09:38:29 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 75 pg[10.f( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=6 ec=53/34 lis/c=73/60 les/c/f=74/61/0 sis=75) [0] r=0 lpr=75 pi=[60,75)/1 crt=40'1059 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:38:29 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 75 pg[10.8( v 40'1059 (0'0,40'1059] local-lis/les=53/55 n=6 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=75) [1]/[0] r=0 lpr=75 pi=[53,75)/1 crt=40'1059 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 09:38:29 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 75 pg[10.8( v 40'1059 (0'0,40'1059] local-lis/les=53/55 n=6 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=75) [1]/[0] r=0 lpr=75 pi=[53,75)/1 crt=40'1059 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct  9 09:38:29 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 75 pg[6.8( v 41'42 (0'0,41'42] local-lis/les=74/75 n=1 ec=49/14 lis/c=49/49 les/c/f=50/50/0 sis=74) [0] r=0 lpr=74 pi=[49,74)/1 crt=41'42 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:38:29 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 75 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=73/61 les/c/f=74/62/0 sis=75) [0] r=0 lpr=75 pi=[61,75)/1 luod=0'0 crt=40'1059 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 09:38:29 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 75 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=73/61 les/c/f=74/62/0 sis=75) [0] r=0 lpr=75 pi=[61,75)/1 crt=40'1059 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:38:29 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:38:29 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000007s ======
Oct  9 09:38:29 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:38:29.378 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000007s
Oct  9 09:38:29 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[19552]: 09/10/2025 09:38:29 : epoch 68e78273 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc740008dc0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:38:30 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 12.15 scrub starts
Oct  9 09:38:30 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 12.15 scrub ok
Oct  9 09:38:30 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[19552]: 09/10/2025 09:38:30 : epoch 68e78273 : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc7340050a0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:38:30 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e76 e76: 3 total, 3 up, 3 in
Oct  9 09:38:30 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 76 pg[10.17( v 40'1059 (0'0,40'1059] local-lis/les=75/76 n=5 ec=53/34 lis/c=73/60 les/c/f=74/61/0 sis=75) [0] r=0 lpr=75 pi=[60,75)/1 crt=40'1059 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:38:30 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 76 pg[10.7( v 40'1059 (0'0,40'1059] local-lis/les=75/76 n=6 ec=53/34 lis/c=73/60 les/c/f=74/61/0 sis=75) [0] r=0 lpr=75 pi=[60,75)/1 crt=40'1059 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:38:30 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 76 pg[10.18( v 40'1059 (0'0,40'1059] local-lis/les=75/76 n=5 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=75) [1]/[0] async=[1] r=0 lpr=75 pi=[53,75)/1 crt=40'1059 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:38:30 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 76 pg[10.8( v 40'1059 (0'0,40'1059] local-lis/les=75/76 n=6 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=75) [1]/[0] async=[1] r=0 lpr=75 pi=[53,75)/1 crt=40'1059 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:38:30 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 76 pg[10.f( v 40'1059 (0'0,40'1059] local-lis/les=75/76 n=6 ec=53/34 lis/c=73/60 les/c/f=74/61/0 sis=75) [0] r=0 lpr=75 pi=[60,75)/1 crt=40'1059 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:38:30 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 76 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=75/76 n=5 ec=53/34 lis/c=73/61 les/c/f=74/62/0 sis=75) [0] r=0 lpr=75 pi=[61,75)/1 crt=40'1059 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:38:30 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "10"}]: dispatch
Oct  9 09:38:30 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"}]: dispatch
Oct  9 09:38:30 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "10"}]': finished
Oct  9 09:38:30 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"}]': finished
Oct  9 09:38:30 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:38:30 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:38:30 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:38:30.554 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:38:30 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e76 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  9 09:38:30 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[19552]: 09/10/2025 09:38:30 : epoch 68e78273 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc7340050a0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:38:30 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 76 pg[10.19( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=61/61 les/c/f=62/62/0 sis=76) [0] r=0 lpr=76 pi=[61,76)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:38:30 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 76 pg[10.9( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=61/61 les/c/f=62/62/0 sis=76) [0] r=0 lpr=76 pi=[61,76)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:38:31 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e77 e77: 3 total, 3 up, 3 in
Oct  9 09:38:31 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 5.1c deep-scrub starts
Oct  9 09:38:31 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 77 pg[10.8( v 40'1059 (0'0,40'1059] local-lis/les=75/76 n=6 ec=53/34 lis/c=75/53 les/c/f=76/55/0 sis=77 pruub=15.288201332s) [1] async=[1] r=-1 lpr=77 pi=[53,77)/1 crt=40'1059 lcod 0'0 mlcod 0'0 active pruub 234.753356934s@ mbc={255={}}] PeeringState::start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 09:38:31 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 77 pg[10.8( v 40'1059 (0'0,40'1059] local-lis/les=75/76 n=6 ec=53/34 lis/c=75/53 les/c/f=76/55/0 sis=77 pruub=15.288148880s) [1] r=-1 lpr=77 pi=[53,77)/1 crt=40'1059 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 234.753356934s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 09:38:31 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 77 pg[10.18( v 40'1059 (0'0,40'1059] local-lis/les=75/76 n=5 ec=53/34 lis/c=75/53 les/c/f=76/55/0 sis=77 pruub=15.287746429s) [1] async=[1] r=-1 lpr=77 pi=[53,77)/1 crt=40'1059 lcod 0'0 mlcod 0'0 active pruub 234.753341675s@ mbc={255={}}] PeeringState::start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 09:38:31 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 77 pg[10.18( v 40'1059 (0'0,40'1059] local-lis/les=75/76 n=5 ec=53/34 lis/c=75/53 les/c/f=76/55/0 sis=77 pruub=15.287559509s) [1] r=-1 lpr=77 pi=[53,77)/1 crt=40'1059 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 234.753341675s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 09:38:31 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 77 pg[10.19( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=61/61 les/c/f=62/62/0 sis=77) [0]/[2] r=-1 lpr=77 pi=[61,77)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 09:38:31 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 77 pg[10.19( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=61/61 les/c/f=62/62/0 sis=77) [0]/[2] r=-1 lpr=77 pi=[61,77)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct  9 09:38:31 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 77 pg[10.9( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=61/61 les/c/f=62/62/0 sis=77) [0]/[2] r=-1 lpr=77 pi=[61,77)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 09:38:31 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 77 pg[10.9( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=61/61 les/c/f=62/62/0 sis=77) [0]/[2] r=-1 lpr=77 pi=[61,77)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct  9 09:38:31 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 5.1c deep-scrub ok
Oct  9 09:38:31 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:38:31 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:38:31 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:38:31.380 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:38:31 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[19552]: 09/10/2025 09:38:31 : epoch 68e78273 : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc73c0058e0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:38:31 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[19552]: 09/10/2025 09:38:31 : epoch 68e78273 : compute-1 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 09:38:31 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[19552]: 09/10/2025 09:38:31 : epoch 68e78273 : compute-1 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 09:38:32 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e78 e78: 3 total, 3 up, 3 in
Oct  9 09:38:32 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[19552]: 09/10/2025 09:38:32 : epoch 68e78273 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc740008dc0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:38:32 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "11"}]: dispatch
Oct  9 09:38:32 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"}]: dispatch
Oct  9 09:38:32 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:38:32 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:38:32 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:38:32.557 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:38:32 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[19552]: 09/10/2025 09:38:32 : epoch 68e78273 : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc7340050a0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:38:33 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e79 e79: 3 total, 3 up, 3 in
Oct  9 09:38:33 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 79 pg[10.19( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=77/61 les/c/f=78/62/0 sis=79) [0] r=0 lpr=79 pi=[61,79)/1 luod=0'0 crt=40'1059 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 09:38:33 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 79 pg[10.19( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=77/61 les/c/f=78/62/0 sis=79) [0] r=0 lpr=79 pi=[61,79)/1 crt=40'1059 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:38:33 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 79 pg[10.9( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=6 ec=53/34 lis/c=77/61 les/c/f=78/62/0 sis=79) [0] r=0 lpr=79 pi=[61,79)/1 luod=0'0 crt=40'1059 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 09:38:33 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 79 pg[10.9( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=6 ec=53/34 lis/c=77/61 les/c/f=78/62/0 sis=79) [0] r=0 lpr=79 pi=[61,79)/1 crt=40'1059 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:38:33 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "11"}]': finished
Oct  9 09:38:33 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"}]': finished
Oct  9 09:38:33 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:38:33 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:38:33 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:38:33.383 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:38:33 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[19552]: 09/10/2025 09:38:33 : epoch 68e78273 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc7340050a0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:38:33 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 10.17 deep-scrub starts
Oct  9 09:38:34 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 10.17 deep-scrub ok
Oct  9 09:38:34 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e80 e80: 3 total, 3 up, 3 in
Oct  9 09:38:34 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 80 pg[10.9( v 40'1059 (0'0,40'1059] local-lis/les=79/80 n=6 ec=53/34 lis/c=77/61 les/c/f=78/62/0 sis=79) [0] r=0 lpr=79 pi=[61,79)/1 crt=40'1059 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:38:34 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 80 pg[10.19( v 40'1059 (0'0,40'1059] local-lis/les=79/80 n=5 ec=53/34 lis/c=77/61 les/c/f=78/62/0 sis=79) [0] r=0 lpr=79 pi=[61,79)/1 crt=40'1059 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:38:34 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[19552]: 09/10/2025 09:38:34 : epoch 68e78273 : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc73c006200 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:38:34 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "12"}]: dispatch
Oct  9 09:38:34 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"}]: dispatch
Oct  9 09:38:34 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:38:34 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:38:34 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:38:34.559 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:38:34 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[19552]: 09/10/2025 09:38:34 : epoch 68e78273 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc740008dc0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:38:34 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[19552]: 09/10/2025 09:38:34 : epoch 68e78273 : compute-1 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Oct  9 09:38:34 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 6.d scrub starts
Oct  9 09:38:34 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 6.d scrub ok
Oct  9 09:38:35 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e81 e81: 3 total, 3 up, 3 in
Oct  9 09:38:35 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 81 pg[6.b( v 41'42 (0'0,41'42] local-lis/les=61/62 n=1 ec=49/14 lis/c=61/61 les/c/f=62/62/0 sis=81 pruub=8.985170364s) [1] r=-1 lpr=81 pi=[61,81)/1 crt=41'42 mlcod 41'42 active pruub 232.463363647s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 09:38:35 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 81 pg[6.b( v 41'42 (0'0,41'42] local-lis/les=61/62 n=1 ec=49/14 lis/c=61/61 les/c/f=62/62/0 sis=81 pruub=8.985138893s) [1] r=-1 lpr=81 pi=[61,81)/1 crt=41'42 mlcod 0'0 unknown NOTIFY pruub 232.463363647s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 09:38:35 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 81 pg[10.1a( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=62/62 les/c/f=63/63/0 sis=79) [0] r=0 lpr=81 pi=[62,79)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:38:35 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 81 pg[10.a( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=62/62 les/c/f=63/63/0 sis=79) [0] r=0 lpr=81 pi=[62,79)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:38:35 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 81 pg[10.b( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=60/60 les/c/f=61/61/0 sis=81) [0] r=0 lpr=81 pi=[60,81)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:38:35 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 81 pg[10.1b( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=61/61 les/c/f=62/62/0 sis=81) [0] r=0 lpr=81 pi=[61,81)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:38:35 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "12"}]': finished
Oct  9 09:38:35 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"}]': finished
Oct  9 09:38:35 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:38:35 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:38:35 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:38:35.385 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:38:35 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e81 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  9 09:38:35 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[19552]: 09/10/2025 09:38:35 : epoch 68e78273 : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc7340050a0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:38:35 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 10.7 scrub starts
Oct  9 09:38:35 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 10.7 scrub ok
Oct  9 09:38:36 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e82 e82: 3 total, 3 up, 3 in
Oct  9 09:38:36 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 82 pg[10.b( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=60/60 les/c/f=61/61/0 sis=82) [0]/[2] r=-1 lpr=82 pi=[60,82)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 09:38:36 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 82 pg[10.b( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=60/60 les/c/f=61/61/0 sis=82) [0]/[2] r=-1 lpr=82 pi=[60,82)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct  9 09:38:36 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 82 pg[10.1b( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=61/61 les/c/f=62/62/0 sis=82) [0]/[2] r=-1 lpr=82 pi=[61,82)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 09:38:36 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 82 pg[10.1b( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=61/61 les/c/f=62/62/0 sis=82) [0]/[2] r=-1 lpr=82 pi=[61,82)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct  9 09:38:36 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 82 pg[10.1a( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=62/62 les/c/f=63/63/0 sis=82) [0]/[1] r=-1 lpr=82 pi=[62,82)/2 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 09:38:36 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 82 pg[10.1a( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=62/62 les/c/f=63/63/0 sis=82) [0]/[1] r=-1 lpr=82 pi=[62,82)/2 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct  9 09:38:36 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 82 pg[10.a( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=62/62 les/c/f=63/63/0 sis=82) [0]/[1] r=-1 lpr=82 pi=[62,82)/2 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 09:38:36 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 82 pg[10.a( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=62/62 les/c/f=63/63/0 sis=82) [0]/[1] r=-1 lpr=82 pi=[62,82)/2 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct  9 09:38:36 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[19552]: 09/10/2025 09:38:36 : epoch 68e78273 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc7340050a0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:38:36 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:38:36 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:38:36 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:38:36.562 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:38:36 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[19552]: 09/10/2025 09:38:36 : epoch 68e78273 : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc73c006200 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:38:36 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 6.8 scrub starts
Oct  9 09:38:36 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 6.8 scrub ok
Oct  9 09:38:37 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e83 e83: 3 total, 3 up, 3 in
Oct  9 09:38:37 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:38:37 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000007s ======
Oct  9 09:38:37 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:38:37.387 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000007s
Oct  9 09:38:37 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[19552]: 09/10/2025 09:38:37 : epoch 68e78273 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc74000a250 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:38:38 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 9.10 scrub starts
Oct  9 09:38:38 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 9.10 scrub ok
Oct  9 09:38:38 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e84 e84: 3 total, 3 up, 3 in
Oct  9 09:38:38 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 84 pg[10.b( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=6 ec=53/34 lis/c=82/60 les/c/f=83/61/0 sis=84) [0] r=0 lpr=84 pi=[60,84)/1 luod=0'0 crt=40'1059 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 09:38:38 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 84 pg[10.b( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=6 ec=53/34 lis/c=82/60 les/c/f=83/61/0 sis=84) [0] r=0 lpr=84 pi=[60,84)/1 crt=40'1059 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:38:38 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 84 pg[10.1b( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=82/61 les/c/f=83/62/0 sis=84) [0] r=0 lpr=84 pi=[61,84)/1 luod=0'0 crt=40'1059 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 09:38:38 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 84 pg[10.1b( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=82/61 les/c/f=83/62/0 sis=84) [0] r=0 lpr=84 pi=[61,84)/1 crt=40'1059 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:38:38 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 84 pg[10.1a( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=82/62 les/c/f=83/63/0 sis=84) [0] r=0 lpr=84 pi=[62,84)/2 luod=0'0 crt=40'1059 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 09:38:38 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 84 pg[10.1a( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=82/62 les/c/f=83/63/0 sis=84) [0] r=0 lpr=84 pi=[62,84)/2 crt=40'1059 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:38:38 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 84 pg[10.a( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=6 ec=53/34 lis/c=82/62 les/c/f=83/63/0 sis=84) [0] r=0 lpr=84 pi=[62,84)/2 luod=0'0 crt=40'1059 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 09:38:38 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 84 pg[10.a( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=6 ec=53/34 lis/c=82/62 les/c/f=83/63/0 sis=84) [0] r=0 lpr=84 pi=[62,84)/2 crt=40'1059 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:38:38 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[19552]: 09/10/2025 09:38:38 : epoch 68e78273 : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc734006590 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:38:38 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:38:38 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:38:38 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:38:38.564 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:38:38 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[19552]: 09/10/2025 09:38:38 : epoch 68e78273 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc734006590 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:38:39 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 11.12 scrub starts
Oct  9 09:38:39 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 11.12 scrub ok
Oct  9 09:38:39 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e85 e85: 3 total, 3 up, 3 in
Oct  9 09:38:39 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 85 pg[10.1a( v 40'1059 (0'0,40'1059] local-lis/les=84/85 n=5 ec=53/34 lis/c=82/62 les/c/f=83/63/0 sis=84) [0] r=0 lpr=84 pi=[62,84)/2 crt=40'1059 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:38:39 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 85 pg[10.1b( v 40'1059 (0'0,40'1059] local-lis/les=84/85 n=5 ec=53/34 lis/c=82/61 les/c/f=83/62/0 sis=84) [0] r=0 lpr=84 pi=[61,84)/1 crt=40'1059 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:38:39 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 85 pg[10.a( v 40'1059 (0'0,40'1059] local-lis/les=84/85 n=6 ec=53/34 lis/c=82/62 les/c/f=83/63/0 sis=84) [0] r=0 lpr=84 pi=[62,84)/2 crt=40'1059 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:38:39 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 85 pg[10.b( v 40'1059 (0'0,40'1059] local-lis/les=84/85 n=6 ec=53/34 lis/c=82/60 les/c/f=83/61/0 sis=84) [0] r=0 lpr=84 pi=[60,84)/1 crt=40'1059 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:38:39 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:38:39 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:38:39 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:38:39.390 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:38:39 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[19552]: 09/10/2025 09:38:39 : epoch 68e78273 : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc73c006200 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:38:40 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 5.1f scrub starts
Oct  9 09:38:40 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 5.1f scrub ok
Oct  9 09:38:40 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[19552]: 09/10/2025 09:38:40 : epoch 68e78273 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc74000a250 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:38:40 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:38:40 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:38:40 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:38:40.567 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:38:40 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e85 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  9 09:38:40 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[19552]: 09/10/2025 09:38:40 : epoch 68e78273 : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc734006590 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:38:40 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-haproxy-nfs-cephfs-compute-1-oqhtjo[16451]: [WARNING] 281/093840 (4) : Server backend/nfs.cephfs.2 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Oct  9 09:38:41 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 9.15 scrub starts
Oct  9 09:38:41 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 9.15 scrub ok
Oct  9 09:38:41 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:38:41 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 09:38:41 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:38:41.393 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 09:38:41 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[19552]: 09/10/2025 09:38:41 : epoch 68e78273 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc734006590 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:38:42 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 5.1b scrub starts
Oct  9 09:38:42 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 5.1b scrub ok
Oct  9 09:38:42 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[19552]: 09/10/2025 09:38:42 : epoch 68e78273 : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc73c006200 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:38:42 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e86 e86: 3 total, 3 up, 3 in
Oct  9 09:38:42 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "13"}]: dispatch
Oct  9 09:38:42 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"}]: dispatch
Oct  9 09:38:42 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:38:42 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 09:38:42 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:38:42.570 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 09:38:42 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[19552]: 09/10/2025 09:38:42 : epoch 68e78273 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc74c002600 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:38:43 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 8.14 scrub starts
Oct  9 09:38:43 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 8.14 scrub ok
Oct  9 09:38:43 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:38:43 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:38:43 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:38:43.398 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:38:43 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "13"}]': finished
Oct  9 09:38:43 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"}]': finished
Oct  9 09:38:43 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[19552]: 09/10/2025 09:38:43 : epoch 68e78273 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc734006590 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:38:43 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 5.18 scrub starts
Oct  9 09:38:44 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 5.18 scrub ok
Oct  9 09:38:44 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[19552]: 09/10/2025 09:38:44 : epoch 68e78273 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc734006590 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:38:44 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e87 e87: 3 total, 3 up, 3 in
Oct  9 09:38:44 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 87 pg[10.1c( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=68/68 les/c/f=69/69/0 sis=86) [0] r=0 lpr=87 pi=[68,86)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:38:44 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 87 pg[10.c( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=67/67 les/c/f=68/68/0 sis=86) [0] r=0 lpr=87 pi=[67,86)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:38:44 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "14"}]: dispatch
Oct  9 09:38:44 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"}]: dispatch
Oct  9 09:38:44 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:38:44 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:38:44 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:38:44.573 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:38:44 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[19552]: 09/10/2025 09:38:44 : epoch 68e78273 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc734006590 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:38:44 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 3.1c scrub starts
Oct  9 09:38:44 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 3.1c scrub ok
Oct  9 09:38:45 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 87 pg[10.1d( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=69/69 les/c/f=70/70/0 sis=87) [0] r=0 lpr=87 pi=[69,87)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:38:45 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 87 pg[10.d( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=69/69 les/c/f=70/70/0 sis=87) [0] r=0 lpr=87 pi=[69,87)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:38:45 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:38:45 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:38:45 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:38:45.401 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:38:45 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e88 e88: 3 total, 3 up, 3 in
Oct  9 09:38:45 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 88 pg[10.1d( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=69/69 les/c/f=70/70/0 sis=88) [0]/[1] r=-1 lpr=88 pi=[69,88)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 09:38:45 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 88 pg[10.1d( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=69/69 les/c/f=70/70/0 sis=88) [0]/[1] r=-1 lpr=88 pi=[69,88)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct  9 09:38:45 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 88 pg[10.c( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=67/67 les/c/f=68/68/0 sis=88) [0]/[2] r=-1 lpr=88 pi=[67,88)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 09:38:45 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 88 pg[10.c( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=67/67 les/c/f=68/68/0 sis=88) [0]/[2] r=-1 lpr=88 pi=[67,88)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct  9 09:38:45 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 88 pg[10.d( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=69/69 les/c/f=70/70/0 sis=88) [0]/[1] r=-1 lpr=88 pi=[69,88)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 09:38:45 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 88 pg[10.d( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=69/69 les/c/f=70/70/0 sis=88) [0]/[1] r=-1 lpr=88 pi=[69,88)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct  9 09:38:45 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 88 pg[10.1c( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=68/68 les/c/f=69/69/0 sis=88) [0]/[2] r=-1 lpr=88 pi=[68,88)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 09:38:45 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 88 pg[10.1c( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=68/68 les/c/f=69/69/0 sis=88) [0]/[2] r=-1 lpr=88 pi=[68,88)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct  9 09:38:45 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "14"}]': finished
Oct  9 09:38:45 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"}]': finished
Oct  9 09:38:45 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e88 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  9 09:38:45 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[19552]: 09/10/2025 09:38:45 : epoch 68e78273 : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc73c006200 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:38:46 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 5.15 scrub starts
Oct  9 09:38:46 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 5.15 scrub ok
Oct  9 09:38:46 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[19552]: 09/10/2025 09:38:46 : epoch 68e78273 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc74c003140 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:38:46 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e89 e89: 3 total, 3 up, 3 in
Oct  9 09:38:46 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:38:46 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:38:46 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:38:46 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000009s ======
Oct  9 09:38:46 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:38:46.576 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Oct  9 09:38:46 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[19552]: 09/10/2025 09:38:46 : epoch 68e78273 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc734006590 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:38:47 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 11.1b deep-scrub starts
Oct  9 09:38:47 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 11.1b deep-scrub ok
Oct  9 09:38:47 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:38:47 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:38:47 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:38:47.403 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:38:47 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e90 e90: 3 total, 3 up, 3 in
Oct  9 09:38:47 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 90 pg[10.1d( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=88/69 les/c/f=89/70/0 sis=90) [0] r=0 lpr=90 pi=[69,90)/1 luod=0'0 crt=40'1059 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 09:38:47 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 90 pg[10.1d( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=88/69 les/c/f=89/70/0 sis=90) [0] r=0 lpr=90 pi=[69,90)/1 crt=40'1059 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:38:47 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 90 pg[10.c( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=6 ec=53/34 lis/c=88/67 les/c/f=89/68/0 sis=90) [0] r=0 lpr=90 pi=[67,90)/1 luod=0'0 crt=40'1059 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 09:38:47 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 90 pg[10.c( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=6 ec=53/34 lis/c=88/67 les/c/f=89/68/0 sis=90) [0] r=0 lpr=90 pi=[67,90)/1 crt=40'1059 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:38:47 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 90 pg[10.d( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=6 ec=53/34 lis/c=88/69 les/c/f=89/70/0 sis=90) [0] r=0 lpr=90 pi=[69,90)/1 luod=0'0 crt=40'1059 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 09:38:47 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 90 pg[10.d( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=6 ec=53/34 lis/c=88/69 les/c/f=89/70/0 sis=90) [0] r=0 lpr=90 pi=[69,90)/1 crt=40'1059 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:38:47 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 90 pg[10.1c( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=7 ec=53/34 lis/c=88/68 les/c/f=89/69/0 sis=90) [0] r=0 lpr=90 pi=[68,90)/1 luod=0'0 crt=40'1059 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 09:38:47 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 90 pg[10.1c( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=7 ec=53/34 lis/c=88/68 les/c/f=89/69/0 sis=90) [0] r=0 lpr=90 pi=[68,90)/1 crt=40'1059 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:38:47 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  9 09:38:47 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:38:47 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:38:47 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  9 09:38:47 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[19552]: 09/10/2025 09:38:47 : epoch 68e78273 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc734006590 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:38:48 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 8.19 scrub starts
Oct  9 09:38:48 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 8.19 scrub ok
Oct  9 09:38:48 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[19552]: 09/10/2025 09:38:48 : epoch 68e78273 : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc73c006200 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:38:48 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e91 e91: 3 total, 3 up, 3 in
Oct  9 09:38:48 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 91 pg[10.1c( v 40'1059 (0'0,40'1059] local-lis/les=90/91 n=7 ec=53/34 lis/c=88/68 les/c/f=89/69/0 sis=90) [0] r=0 lpr=90 pi=[68,90)/1 crt=40'1059 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:38:48 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 91 pg[10.c( v 40'1059 (0'0,40'1059] local-lis/les=90/91 n=6 ec=53/34 lis/c=88/67 les/c/f=89/68/0 sis=90) [0] r=0 lpr=90 pi=[67,90)/1 crt=40'1059 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:38:48 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 91 pg[10.1d( v 40'1059 (0'0,40'1059] local-lis/les=90/91 n=5 ec=53/34 lis/c=88/69 les/c/f=89/70/0 sis=90) [0] r=0 lpr=90 pi=[69,90)/1 crt=40'1059 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:38:48 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 91 pg[10.d( v 40'1059 (0'0,40'1059] local-lis/les=90/91 n=6 ec=53/34 lis/c=88/69 les/c/f=89/70/0 sis=90) [0] r=0 lpr=90 pi=[69,90)/1 crt=40'1059 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:38:48 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:38:48 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:38:48 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:38:48.578 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:38:48 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[19552]: 09/10/2025 09:38:48 : epoch 68e78273 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc74c003140 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:38:49 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 3.a scrub starts
Oct  9 09:38:49 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 3.a scrub ok
Oct  9 09:38:49 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:38:49 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 09:38:49 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:38:49.405 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 09:38:49 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[19552]: 09/10/2025 09:38:49 : epoch 68e78273 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc734006590 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:38:50 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 4.d scrub starts
Oct  9 09:38:50 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 4.d scrub ok
Oct  9 09:38:50 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[19552]: 09/10/2025 09:38:50 : epoch 68e78273 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc734006590 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:38:50 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:38:50 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:38:50 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:38:50 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:38:50 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:38:50.580 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:38:50 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e91 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  9 09:38:50 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[19552]: 09/10/2025 09:38:50 : epoch 68e78273 : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc73c002850 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:38:51 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 11.1c deep-scrub starts
Oct  9 09:38:51 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 11.1c deep-scrub ok
Oct  9 09:38:51 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:38:51 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:38:51 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:38:51.409 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:38:51 compute-1 kernel: ganesha.nfsd[20601]: segfault at 50 ip 00007fc7ec5a132e sp 00007fc7a4ff8210 error 4 in libntirpc.so.5.8[7fc7ec586000+2c000] likely on CPU 3 (core 0, socket 3)
Oct  9 09:38:51 compute-1 kernel: Code: 47 20 66 41 89 86 f2 00 00 00 41 bf 01 00 00 00 b9 40 00 00 00 e9 af fd ff ff 66 90 48 8b 85 f8 00 00 00 48 8b 40 08 4c 8b 28 <45> 8b 65 50 49 8b 75 68 41 8b be 28 02 00 00 b9 40 00 00 00 e8 29
Oct  9 09:38:51 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[19552]: 09/10/2025 09:38:51 : epoch 68e78273 : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc73c002850 fd 49 proxy ignored for local
Oct  9 09:38:51 compute-1 systemd[1]: Started Process Core Dump (PID 20869/UID 0).
Oct  9 09:38:52 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 11.5 scrub starts
Oct  9 09:38:52 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 11.5 scrub ok
Oct  9 09:38:52 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e92 e92: 3 total, 3 up, 3 in
Oct  9 09:38:52 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "15"}]: dispatch
Oct  9 09:38:52 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"}]: dispatch
Oct  9 09:38:52 compute-1 systemd-coredump[20870]: Process 19556 (ganesha.nfsd) of user 0 dumped core.#012#012Stack trace of thread 54:#012#0  0x00007fc7ec5a132e n/a (/usr/lib64/libntirpc.so.5.8 + 0x2232e)#012ELF object binary architecture: AMD x86-64
Oct  9 09:38:52 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:38:52 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:38:52 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:38:52.582 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:38:52 compute-1 systemd[1]: systemd-coredump@2-20869-0.service: Deactivated successfully.
Oct  9 09:38:52 compute-1 podman[20878]: 2025-10-09 09:38:52.674243682 +0000 UTC m=+0.018281754 container died 92d8510d7f5eeffd250cb678b79fb60f427cdb1189e6d98348855aa647b7ea4d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  9 09:38:52 compute-1 systemd[1]: var-lib-containers-storage-overlay-18a7a6480f3b33833b923989e2bfc3794283acd4108411bc5cfd7528ec19f604-merged.mount: Deactivated successfully.
Oct  9 09:38:52 compute-1 podman[20878]: 2025-10-09 09:38:52.690943194 +0000 UTC m=+0.034981266 container remove 92d8510d7f5eeffd250cb678b79fb60f427cdb1189e6d98348855aa647b7ea4d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  9 09:38:52 compute-1 systemd[1]: ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609@nfs.cephfs.0.0.compute-1.douegr.service: Main process exited, code=exited, status=139/n/a
Oct  9 09:38:52 compute-1 systemd[1]: ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609@nfs.cephfs.0.0.compute-1.douegr.service: Failed with result 'exit-code'.
Oct  9 09:38:52 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 92 pg[6.e( empty local-lis/les=0/0 n=0 ec=49/14 lis/c=67/67 les/c/f=68/68/0 sis=92) [0] r=0 lpr=92 pi=[67,92)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:38:53 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 4.a deep-scrub starts
Oct  9 09:38:53 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 4.a deep-scrub ok
Oct  9 09:38:53 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:38:53 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000009s ======
Oct  9 09:38:53 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:38:53.411 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Oct  9 09:38:53 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "15"}]': finished
Oct  9 09:38:53 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"}]': finished
Oct  9 09:38:53 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e93 e93: 3 total, 3 up, 3 in
Oct  9 09:38:53 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 93 pg[6.e( v 41'42 lc 35'10 (0'0,41'42] local-lis/les=92/93 n=1 ec=49/14 lis/c=67/67 les/c/f=68/68/0 sis=92) [0] r=0 lpr=92 pi=[67,92)/1 crt=41'42 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:38:54 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 11.1a scrub starts
Oct  9 09:38:54 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 11.1a scrub ok
Oct  9 09:38:54 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "16"}]: dispatch
Oct  9 09:38:54 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"}]: dispatch
Oct  9 09:38:54 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e94 e94: 3 total, 3 up, 3 in
Oct  9 09:38:54 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 94 pg[6.f( v 41'42 (0'0,41'42] local-lis/les=61/62 n=3 ec=49/14 lis/c=61/61 les/c/f=62/62/0 sis=94 pruub=13.532474518s) [1] r=-1 lpr=94 pi=[61,94)/1 crt=41'42 mlcod 41'42 active pruub 256.462554932s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 09:38:54 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 94 pg[6.f( v 41'42 (0'0,41'42] local-lis/les=61/62 n=3 ec=49/14 lis/c=61/61 les/c/f=62/62/0 sis=94 pruub=13.532430649s) [1] r=-1 lpr=94 pi=[61,94)/1 crt=41'42 mlcod 0'0 unknown NOTIFY pruub 256.462554932s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 09:38:54 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 94 pg[10.f( v 40'1059 (0'0,40'1059] local-lis/les=75/76 n=6 ec=53/34 lis/c=75/75 les/c/f=76/76/0 sis=94 pruub=15.823891640s) [2] r=-1 lpr=94 pi=[75,94)/1 crt=40'1059 mlcod 0'0 active pruub 258.754333496s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 09:38:54 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 94 pg[10.f( v 40'1059 (0'0,40'1059] local-lis/les=75/76 n=6 ec=53/34 lis/c=75/75 les/c/f=76/76/0 sis=94 pruub=15.823860168s) [2] r=-1 lpr=94 pi=[75,94)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 258.754333496s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 09:38:54 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 94 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=75/76 n=5 ec=53/34 lis/c=75/75 les/c/f=76/76/0 sis=94 pruub=15.823626518s) [2] r=-1 lpr=94 pi=[75,94)/1 crt=40'1059 mlcod 0'0 active pruub 258.754333496s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 09:38:54 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 94 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=75/76 n=5 ec=53/34 lis/c=75/75 les/c/f=76/76/0 sis=94 pruub=15.823608398s) [2] r=-1 lpr=94 pi=[75,94)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 258.754333496s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 09:38:54 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:38:54 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:38:54 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:38:54.584 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:38:55 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 9.a scrub starts
Oct  9 09:38:55 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 9.a scrub ok
Oct  9 09:38:55 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:38:55 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:38:55 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:38:55.413 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:38:55 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "16"}]': finished
Oct  9 09:38:55 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"}]': finished
Oct  9 09:38:55 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e95 e95: 3 total, 3 up, 3 in
Oct  9 09:38:55 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 95 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=75/76 n=5 ec=53/34 lis/c=75/75 les/c/f=76/76/0 sis=95) [2]/[0] r=0 lpr=95 pi=[75,95)/1 crt=40'1059 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 09:38:55 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 95 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=75/76 n=5 ec=53/34 lis/c=75/75 les/c/f=76/76/0 sis=95) [2]/[0] r=0 lpr=95 pi=[75,95)/1 crt=40'1059 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct  9 09:38:55 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 95 pg[10.f( v 40'1059 (0'0,40'1059] local-lis/les=75/76 n=6 ec=53/34 lis/c=75/75 les/c/f=76/76/0 sis=95) [2]/[0] r=0 lpr=95 pi=[75,95)/1 crt=40'1059 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 09:38:55 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 95 pg[10.f( v 40'1059 (0'0,40'1059] local-lis/les=75/76 n=6 ec=53/34 lis/c=75/75 les/c/f=76/76/0 sis=95) [2]/[0] r=0 lpr=95 pi=[75,95)/1 crt=40'1059 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct  9 09:38:55 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e95 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  9 09:38:56 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 9.d deep-scrub starts
Oct  9 09:38:56 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 9.d deep-scrub ok
Oct  9 09:38:56 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e96 e96: 3 total, 3 up, 3 in
Oct  9 09:38:56 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 96 pg[10.f( v 40'1059 (0'0,40'1059] local-lis/les=95/96 n=6 ec=53/34 lis/c=75/75 les/c/f=76/76/0 sis=95) [2]/[0] async=[2] r=0 lpr=95 pi=[75,95)/1 crt=40'1059 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:38:56 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 96 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=95/96 n=5 ec=53/34 lis/c=75/75 les/c/f=76/76/0 sis=95) [2]/[0] async=[2] r=0 lpr=95 pi=[75,95)/1 crt=40'1059 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:38:56 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:38:56 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:38:56 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:38:56.586 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:38:57 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 5.1 scrub starts
Oct  9 09:38:57 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 5.1 scrub ok
Oct  9 09:38:57 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:38:57 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000009s ======
Oct  9 09:38:57 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:38:57.415 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Oct  9 09:38:57 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e97 e97: 3 total, 3 up, 3 in
Oct  9 09:38:57 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 97 pg[10.f( v 40'1059 (0'0,40'1059] local-lis/les=95/96 n=6 ec=53/34 lis/c=95/75 les/c/f=96/76/0 sis=97 pruub=14.991911888s) [2] async=[2] r=-1 lpr=97 pi=[75,97)/1 crt=40'1059 mlcod 40'1059 active pruub 260.950622559s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 09:38:57 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 97 pg[10.f( v 40'1059 (0'0,40'1059] local-lis/les=95/96 n=6 ec=53/34 lis/c=95/75 les/c/f=96/76/0 sis=97 pruub=14.991852760s) [2] r=-1 lpr=97 pi=[75,97)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 260.950622559s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 09:38:57 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 97 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=95/96 n=5 ec=53/34 lis/c=95/75 les/c/f=96/76/0 sis=97 pruub=14.991250038s) [2] async=[2] r=-1 lpr=97 pi=[75,97)/1 crt=40'1059 mlcod 40'1059 active pruub 260.950653076s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 09:38:57 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 97 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=95/96 n=5 ec=53/34 lis/c=95/75 les/c/f=96/76/0 sis=97 pruub=14.991156578s) [2] r=-1 lpr=97 pi=[75,97)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 260.950653076s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 09:38:57 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-haproxy-nfs-cephfs-compute-1-oqhtjo[16451]: [WARNING] 281/093857 (4) : Server backend/nfs.cephfs.0 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Oct  9 09:38:58 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 3.5 scrub starts
Oct  9 09:38:58 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 3.5 scrub ok
Oct  9 09:38:58 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e98 e98: 3 total, 3 up, 3 in
Oct  9 09:38:58 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:38:58 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 09:38:58 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:38:58.588 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 09:38:59 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 3.d scrub starts
Oct  9 09:38:59 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 3.d scrub ok
Oct  9 09:38:59 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:38:59 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000009s ======
Oct  9 09:38:59 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:38:59.418 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Oct  9 09:39:00 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 11.7 scrub starts
Oct  9 09:39:00 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 11.7 scrub ok
Oct  9 09:39:00 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:39:00 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:39:00 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:39:00.589 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:39:00 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e98 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  9 09:39:00 compute-1 python3.9[21067]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  9 09:39:01 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 11.4 scrub starts
Oct  9 09:39:01 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 11.4 scrub ok
Oct  9 09:39:01 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:39:01 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 09:39:01 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:39:01.421 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 09:39:02 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 5.9 scrub starts
Oct  9 09:39:02 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 5.9 scrub ok
Oct  9 09:39:02 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e99 e99: 3 total, 3 up, 3 in
Oct  9 09:39:02 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:39:02 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:39:02 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:39:02.591 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:39:02 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 99 pg[10.10( v 40'1059 (0'0,40'1059] local-lis/les=53/55 n=2 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=99 pruub=10.564367294s) [2] r=-1 lpr=99 pi=[53,99)/1 crt=40'1059 lcod 0'0 mlcod 0'0 active pruub 261.555847168s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 09:39:02 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 99 pg[10.10( v 40'1059 (0'0,40'1059] local-lis/les=53/55 n=2 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=99 pruub=10.564089775s) [2] r=-1 lpr=99 pi=[53,99)/1 crt=40'1059 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 261.555847168s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 09:39:02 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"}]: dispatch
Oct  9 09:39:02 compute-1 python3.9[21355]: ansible-ansible.posix.selinux Invoked with policy=targeted state=enforcing configfile=/etc/selinux/config update_kernel_param=False
Oct  9 09:39:02 compute-1 systemd[1]: ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609@nfs.cephfs.0.0.compute-1.douegr.service: Scheduled restart job, restart counter is at 3.
Oct  9 09:39:02 compute-1 systemd[1]: Stopped Ceph nfs.cephfs.0.0.compute-1.douegr for 286f8bf0-da72-5823-9a4e-ac4457d9e609.
Oct  9 09:39:02 compute-1 systemd[1]: Starting Ceph nfs.cephfs.0.0.compute-1.douegr for 286f8bf0-da72-5823-9a4e-ac4457d9e609...
Oct  9 09:39:02 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 11.1 scrub starts
Oct  9 09:39:03 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 11.1 scrub ok
Oct  9 09:39:03 compute-1 podman[21418]: 2025-10-09 09:39:03.118937512 +0000 UTC m=+0.028360970 container create 05e3a39547eba9dcbb0c2432e2280a15f5ad4912a5a2f918b55fec8a16af71ee (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct  9 09:39:03 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9545c57d508c95a16e86ca7976b27d2867fdcfd0a0e2b4874fd26f06fca97d57/merged/etc/ganesha supports timestamps until 2038 (0x7fffffff)
Oct  9 09:39:03 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9545c57d508c95a16e86ca7976b27d2867fdcfd0a0e2b4874fd26f06fca97d57/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  9 09:39:03 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9545c57d508c95a16e86ca7976b27d2867fdcfd0a0e2b4874fd26f06fca97d57/merged/etc/ceph/keyring supports timestamps until 2038 (0x7fffffff)
Oct  9 09:39:03 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9545c57d508c95a16e86ca7976b27d2867fdcfd0a0e2b4874fd26f06fca97d57/merged/var/lib/ceph/radosgw/ceph-nfs.cephfs.0.0.compute-1.douegr-rgw/keyring supports timestamps until 2038 (0x7fffffff)
Oct  9 09:39:03 compute-1 podman[21418]: 2025-10-09 09:39:03.164860077 +0000 UTC m=+0.074283555 container init 05e3a39547eba9dcbb0c2432e2280a15f5ad4912a5a2f918b55fec8a16af71ee (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  9 09:39:03 compute-1 podman[21418]: 2025-10-09 09:39:03.16892631 +0000 UTC m=+0.078349768 container start 05e3a39547eba9dcbb0c2432e2280a15f5ad4912a5a2f918b55fec8a16af71ee (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  9 09:39:03 compute-1 bash[21418]: 05e3a39547eba9dcbb0c2432e2280a15f5ad4912a5a2f918b55fec8a16af71ee
Oct  9 09:39:03 compute-1 podman[21418]: 2025-10-09 09:39:03.107402543 +0000 UTC m=+0.016826021 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  9 09:39:03 compute-1 systemd[1]: Started Ceph nfs.cephfs.0.0.compute-1.douegr for 286f8bf0-da72-5823-9a4e-ac4457d9e609.
Oct  9 09:39:03 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[21462]: 09/10/2025 09:39:03 : epoch 68e782b7 : compute-1 : ganesha.nfsd-2[main] init_logging :LOG :NULL :LOG: Setting log level for all components to NIV_EVENT
Oct  9 09:39:03 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[21462]: 09/10/2025 09:39:03 : epoch 68e782b7 : compute-1 : ganesha.nfsd-2[main] main :MAIN :EVENT :ganesha.nfsd Starting: Ganesha Version 5.9
Oct  9 09:39:03 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[21462]: 09/10/2025 09:39:03 : epoch 68e782b7 : compute-1 : ganesha.nfsd-2[main] nfs_set_param_from_conf :NFS STARTUP :EVENT :Configuration file successfully parsed
Oct  9 09:39:03 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[21462]: 09/10/2025 09:39:03 : epoch 68e782b7 : compute-1 : ganesha.nfsd-2[main] monitoring_init :NFS STARTUP :EVENT :Init monitoring at 0.0.0.0:9587
Oct  9 09:39:03 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[21462]: 09/10/2025 09:39:03 : epoch 68e782b7 : compute-1 : ganesha.nfsd-2[main] fsal_init_fds_limit :MDCACHE LRU :EVENT :Setting the system-imposed limit on FDs to 1048576.
Oct  9 09:39:03 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[21462]: 09/10/2025 09:39:03 : epoch 68e782b7 : compute-1 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :Initializing ID Mapper.
Oct  9 09:39:03 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[21462]: 09/10/2025 09:39:03 : epoch 68e782b7 : compute-1 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :ID Mapper successfully initialized.
Oct  9 09:39:03 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[21462]: 09/10/2025 09:39:03 : epoch 68e782b7 : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 09:39:03 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:39:03 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 09:39:03 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:39:03.424 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 09:39:03 compute-1 python3.9[21599]: ansible-ansible.legacy.command Invoked with cmd=dd if=/dev/zero of=/swap count=1024 bs=1M creates=/swap _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None removes=None stdin=None
Oct  9 09:39:03 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"}]': finished
Oct  9 09:39:03 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e100 e100: 3 total, 3 up, 3 in
Oct  9 09:39:03 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 100 pg[10.10( v 40'1059 (0'0,40'1059] local-lis/les=53/55 n=2 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=100) [2]/[0] r=0 lpr=100 pi=[53,100)/1 crt=40'1059 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 09:39:03 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 100 pg[10.10( v 40'1059 (0'0,40'1059] local-lis/les=53/55 n=2 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=100) [2]/[0] r=0 lpr=100 pi=[53,100)/1 crt=40'1059 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct  9 09:39:03 compute-1 python3.9[21752]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/swap recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False state=None _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:39:03 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 3.3 scrub starts
Oct  9 09:39:03 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 3.3 scrub ok
Oct  9 09:39:04 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:39:04 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 09:39:04 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:39:04.592 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 09:39:04 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"}]: dispatch
Oct  9 09:39:04 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e101 e101: 3 total, 3 up, 3 in
Oct  9 09:39:04 compute-1 python3.9[21904]: ansible-ansible.posix.mount Invoked with dump=0 fstype=swap name=none opts=sw passno=0 src=/swap state=present path=none boot=True opts_no_log=False backup=False fstab=None
Oct  9 09:39:05 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 9.e scrub starts
Oct  9 09:39:05 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 9.e scrub ok
Oct  9 09:39:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 101 pg[10.10( v 40'1059 (0'0,40'1059] local-lis/les=100/101 n=2 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=100) [2]/[0] async=[2] r=0 lpr=100 pi=[53,100)/1 crt=40'1059 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:39:05 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:39:05 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:39:05 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:39:05.427 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:39:05 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e101 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  9 09:39:05 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e102 e102: 3 total, 3 up, 3 in
Oct  9 09:39:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 102 pg[10.10( v 40'1059 (0'0,40'1059] local-lis/les=100/101 n=2 ec=53/34 lis/c=100/53 les/c/f=101/55/0 sis=102 pruub=15.605948448s) [2] async=[2] r=-1 lpr=102 pi=[53,102)/1 crt=40'1059 lcod 0'0 mlcod 0'0 active pruub 269.640716553s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 09:39:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 102 pg[10.10( v 40'1059 (0'0,40'1059] local-lis/les=100/101 n=2 ec=53/34 lis/c=100/53 les/c/f=101/55/0 sis=102 pruub=15.605783463s) [2] r=-1 lpr=102 pi=[53,102)/1 crt=40'1059 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 269.640716553s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 09:39:05 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"}]': finished
Oct  9 09:39:05 compute-1 python3.9[22057]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/ca-trust/source/anchors setype=cert_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  9 09:39:06 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 5.2 scrub starts
Oct  9 09:39:06 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 5.2 scrub ok
Oct  9 09:39:06 compute-1 python3.9[22209]: ansible-ansible.legacy.stat Invoked with path=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  9 09:39:06 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:39:06 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 09:39:06 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:39:06.596 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 09:39:06 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e103 e103: 3 total, 3 up, 3 in
Oct  9 09:39:06 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"}]: dispatch
Oct  9 09:39:06 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"}]': finished
Oct  9 09:39:06 compute-1 python3.9[22287]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem _original_basename=tls-ca-bundle.pem recurse=False state=file path=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:39:06 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 11.f scrub starts
Oct  9 09:39:06 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 11.f scrub ok
Oct  9 09:39:07 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:39:07 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:39:07 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:39:07.430 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:39:07 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e104 e104: 3 total, 3 up, 3 in
Oct  9 09:39:08 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 5.7 deep-scrub starts
Oct  9 09:39:08 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 5.7 deep-scrub ok
Oct  9 09:39:08 compute-1 python3.9[22440]: ansible-ansible.builtin.getent Invoked with database=passwd key=qemu fail_key=True service=None split=None
Oct  9 09:39:08 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:39:08 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:39:08 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:39:08.598 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:39:08 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e105 e105: 3 total, 3 up, 3 in
Oct  9 09:39:08 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"}]: dispatch
Oct  9 09:39:08 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"}]': finished
Oct  9 09:39:08 compute-1 python3.9[22618]: ansible-ansible.builtin.getent Invoked with database=passwd key=hugetlbfs fail_key=True service=None split=None
Oct  9 09:39:09 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 5.16 scrub starts
Oct  9 09:39:09 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 5.16 scrub ok
Oct  9 09:39:09 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[21462]: 09/10/2025 09:39:09 : epoch 68e782b7 : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 09:39:09 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[21462]: 09/10/2025 09:39:09 : epoch 68e782b7 : compute-1 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 09:39:09 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:39:09 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000009s ======
Oct  9 09:39:09 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:39:09.432 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Oct  9 09:39:09 compute-1 python3.9[22772]: ansible-ansible.builtin.group Invoked with gid=42477 name=hugetlbfs state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Oct  9 09:39:09 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e106 e106: 3 total, 3 up, 3 in
Oct  9 09:39:10 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 8.8 scrub starts
Oct  9 09:39:10 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 8.8 scrub ok
Oct  9 09:39:10 compute-1 python3.9[22924]: ansible-ansible.builtin.file Invoked with group=qemu mode=0755 owner=qemu path=/var/lib/vhost_sockets setype=virt_cache_t seuser=system_u state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None serole=None selevel=None attributes=None
Oct  9 09:39:10 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:39:10 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:39:10 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:39:10.600 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:39:10 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e106 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  9 09:39:10 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e107 e107: 3 total, 3 up, 3 in
Oct  9 09:39:10 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"}]: dispatch
Oct  9 09:39:10 compute-1 python3.9[23076]: ansible-ansible.legacy.dnf Invoked with name=['dracut-config-generic'] state=absent allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct  9 09:39:11 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 8.4 deep-scrub starts
Oct  9 09:39:11 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 8.4 deep-scrub ok
Oct  9 09:39:11 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e108 e108: 3 total, 3 up, 3 in
Oct  9 09:39:11 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:39:11 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000009s ======
Oct  9 09:39:11 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:39:11.435 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Oct  9 09:39:11 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"}]': finished
Oct  9 09:39:12 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 11.1d scrub starts
Oct  9 09:39:12 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 11.1d scrub ok
Oct  9 09:39:12 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e109 e109: 3 total, 3 up, 3 in
Oct  9 09:39:12 compute-1 python3.9[23230]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/modules-load.d setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  9 09:39:12 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:39:12 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 09:39:12 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:39:12.602 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 09:39:13 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 3.10 scrub starts
Oct  9 09:39:13 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 3.10 scrub ok
Oct  9 09:39:13 compute-1 python3.9[23382]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  9 09:39:13 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e110 e110: 3 total, 3 up, 3 in
Oct  9 09:39:13 compute-1 python3.9[23460]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root setype=etc_t dest=/etc/modules-load.d/99-edpm.conf _original_basename=edpm-modprobe.conf.j2 recurse=False state=file path=/etc/modules-load.d/99-edpm.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  9 09:39:13 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:39:13 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:39:13 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:39:13.437 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:39:13 compute-1 python3.9[23613]: ansible-ansible.legacy.stat Invoked with path=/etc/sysctl.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  9 09:39:14 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 5.f deep-scrub starts
Oct  9 09:39:14 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 5.f deep-scrub ok
Oct  9 09:39:14 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e111 e111: 3 total, 3 up, 3 in
Oct  9 09:39:14 compute-1 python3.9[23691]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root setype=etc_t dest=/etc/sysctl.d/99-edpm.conf _original_basename=edpm-sysctl.conf.j2 recurse=False state=file path=/etc/sysctl.d/99-edpm.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  9 09:39:14 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:39:14 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:39:14 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:39:14.606 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:39:15 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 5.10 scrub starts
Oct  9 09:39:15 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 5.10 scrub ok
Oct  9 09:39:15 compute-1 python3.9[23843]: ansible-ansible.legacy.dnf Invoked with name=['tuned', 'tuned-profiles-cpu-partitioning'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct  9 09:39:15 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[21462]: 09/10/2025 09:39:15 : epoch 68e782b7 : compute-1 : ganesha.nfsd-2[main] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Oct  9 09:39:15 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[21462]: 09/10/2025 09:39:15 : epoch 68e782b7 : compute-1 : ganesha.nfsd-2[main] main :NFS STARTUP :WARN :No export entries found in configuration file !!!
Oct  9 09:39:15 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[21462]: 09/10/2025 09:39:15 : epoch 68e782b7 : compute-1 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:24): Unknown block (RADOS_URLS)
Oct  9 09:39:15 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[21462]: 09/10/2025 09:39:15 : epoch 68e782b7 : compute-1 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:29): Unknown block (RGW)
Oct  9 09:39:15 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[21462]: 09/10/2025 09:39:15 : epoch 68e782b7 : compute-1 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :CAP_SYS_RESOURCE was successfully removed for proper quota management in FSAL
Oct  9 09:39:15 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[21462]: 09/10/2025 09:39:15 : epoch 68e782b7 : compute-1 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :currently set capabilities are: cap_chown,cap_dac_override,cap_fowner,cap_fsetid,cap_kill,cap_setgid,cap_setuid,cap_setpcap,cap_net_bind_service,cap_sys_chroot,cap_setfcap=ep
Oct  9 09:39:15 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[21462]: 09/10/2025 09:39:15 : epoch 68e782b7 : compute-1 : ganesha.nfsd-2[main] gsh_dbus_pkginit :DBUS :CRIT :dbus_bus_get failed (Failed to connect to socket /run/dbus/system_bus_socket: No such file or directory)
Oct  9 09:39:15 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[21462]: 09/10/2025 09:39:15 : epoch 68e782b7 : compute-1 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Oct  9 09:39:15 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[21462]: 09/10/2025 09:39:15 : epoch 68e782b7 : compute-1 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Oct  9 09:39:15 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[21462]: 09/10/2025 09:39:15 : epoch 68e782b7 : compute-1 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Oct  9 09:39:15 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[21462]: 09/10/2025 09:39:15 : epoch 68e782b7 : compute-1 : ganesha.nfsd-2[main] nfs_Init_svc :DISP :CRIT :Cannot acquire credentials for principal nfs
Oct  9 09:39:15 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[21462]: 09/10/2025 09:39:15 : epoch 68e782b7 : compute-1 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Oct  9 09:39:15 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[21462]: 09/10/2025 09:39:15 : epoch 68e782b7 : compute-1 : ganesha.nfsd-2[main] nfs_Init_admin_thread :NFS CB :EVENT :Admin thread initialized
Oct  9 09:39:15 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[21462]: 09/10/2025 09:39:15 : epoch 68e782b7 : compute-1 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :EVENT :Callback creds directory (/var/run/ganesha) already exists
Oct  9 09:39:15 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[21462]: 09/10/2025 09:39:15 : epoch 68e782b7 : compute-1 : ganesha.nfsd-2[main] find_keytab_entry :NFS CB :WARN :Configuration file does not specify default realm while getting default realm name
Oct  9 09:39:15 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[21462]: 09/10/2025 09:39:15 : epoch 68e782b7 : compute-1 : ganesha.nfsd-2[main] gssd_refresh_krb5_machine_credential :NFS CB :CRIT :ERROR: gssd_refresh_krb5_machine_credential: no usable keytab entry found in keytab /etc/krb5.keytab for connection with host localhost
Oct  9 09:39:15 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[21462]: 09/10/2025 09:39:15 : epoch 68e782b7 : compute-1 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :WARN :gssd_refresh_krb5_machine_credential failed (-1765328160:0)
Oct  9 09:39:15 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[21462]: 09/10/2025 09:39:15 : epoch 68e782b7 : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :Starting delayed executor.
Oct  9 09:39:15 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[21462]: 09/10/2025 09:39:15 : epoch 68e782b7 : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :gsh_dbusthread was started successfully
Oct  9 09:39:15 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[21462]: 09/10/2025 09:39:15 : epoch 68e782b7 : compute-1 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :CRIT :DBUS not initialized, service thread exiting
Oct  9 09:39:15 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[21462]: 09/10/2025 09:39:15 : epoch 68e782b7 : compute-1 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :EVENT :shutdown
Oct  9 09:39:15 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[21462]: 09/10/2025 09:39:15 : epoch 68e782b7 : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :admin thread was started successfully
Oct  9 09:39:15 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[21462]: 09/10/2025 09:39:15 : epoch 68e782b7 : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :reaper thread was started successfully
Oct  9 09:39:15 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[21462]: 09/10/2025 09:39:15 : epoch 68e782b7 : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :General fridge was started successfully
Oct  9 09:39:15 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[21462]: 09/10/2025 09:39:15 : epoch 68e782b7 : compute-1 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Oct  9 09:39:15 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[21462]: 09/10/2025 09:39:15 : epoch 68e782b7 : compute-1 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :             NFS SERVER INITIALIZED
Oct  9 09:39:15 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[21462]: 09/10/2025 09:39:15 : epoch 68e782b7 : compute-1 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Oct  9 09:39:15 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:39:15 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:39:15 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:39:15.440 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:39:15 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e111 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  9 09:39:15 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[21462]: 09/10/2025 09:39:15 : epoch 68e782b7 : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4c18000df0 fd 38 proxy ignored for local
Oct  9 09:39:15 compute-1 kernel: ganesha.nfsd[23846]: segfault at 50 ip 00007f4cc4fc932e sp 00007f4c94ff8210 error 4 in libntirpc.so.5.8[7f4cc4fae000+2c000] likely on CPU 1 (core 0, socket 1)
Oct  9 09:39:15 compute-1 kernel: Code: 47 20 66 41 89 86 f2 00 00 00 41 bf 01 00 00 00 b9 40 00 00 00 e9 af fd ff ff 66 90 48 8b 85 f8 00 00 00 48 8b 40 08 4c 8b 28 <45> 8b 65 50 49 8b 75 68 41 8b be 28 02 00 00 b9 40 00 00 00 e8 29
Oct  9 09:39:15 compute-1 systemd[1]: Started Process Core Dump (PID 23862/UID 0).
Oct  9 09:39:16 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 4.e scrub starts
Oct  9 09:39:16 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 4.e scrub ok
Oct  9 09:39:16 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:39:16 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:39:16 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:39:16.608 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:39:16 compute-1 systemd-coredump[23863]: Process 21486 (ganesha.nfsd) of user 0 dumped core.#012#012Stack trace of thread 41:#012#0  0x00007f4cc4fc932e n/a (/usr/lib64/libntirpc.so.5.8 + 0x2232e)#012ELF object binary architecture: AMD x86-64
Oct  9 09:39:16 compute-1 systemd[1]: systemd-coredump@3-23862-0.service: Deactivated successfully.
Oct  9 09:39:16 compute-1 podman[23983]: 2025-10-09 09:39:16.712764862 +0000 UTC m=+0.017558139 container died 05e3a39547eba9dcbb0c2432e2280a15f5ad4912a5a2f918b55fec8a16af71ee (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  9 09:39:16 compute-1 systemd[1]: var-lib-containers-storage-overlay-9545c57d508c95a16e86ca7976b27d2867fdcfd0a0e2b4874fd26f06fca97d57-merged.mount: Deactivated successfully.
Oct  9 09:39:16 compute-1 podman[23983]: 2025-10-09 09:39:16.737792279 +0000 UTC m=+0.042585546 container remove 05e3a39547eba9dcbb0c2432e2280a15f5ad4912a5a2f918b55fec8a16af71ee (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  9 09:39:16 compute-1 systemd[1]: ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609@nfs.cephfs.0.0.compute-1.douegr.service: Main process exited, code=exited, status=139/n/a
Oct  9 09:39:16 compute-1 systemd[1]: ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609@nfs.cephfs.0.0.compute-1.douegr.service: Failed with result 'exit-code'.
Oct  9 09:39:16 compute-1 python3.9[24032]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/active_profile follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  9 09:39:17 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 4.5 scrub starts
Oct  9 09:39:17 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 4.5 scrub ok
Oct  9 09:39:17 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e112 e112: 3 total, 3 up, 3 in
Oct  9 09:39:17 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"}]: dispatch
Oct  9 09:39:17 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:39:17 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 09:39:17 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:39:17.442 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 09:39:17 compute-1 python3.9[24205]: ansible-ansible.builtin.slurp Invoked with src=/etc/tuned/active_profile
Oct  9 09:39:17 compute-1 python3.9[24355]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/throughput-performance-variables.conf follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  9 09:39:18 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 9.11 scrub starts
Oct  9 09:39:18 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 9.11 scrub ok
Oct  9 09:39:18 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"}]': finished
Oct  9 09:39:18 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:39:18 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:39:18 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:39:18.610 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:39:19 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 5.11 scrub starts
Oct  9 09:39:19 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 5.11 scrub ok
Oct  9 09:39:19 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"}]: dispatch
Oct  9 09:39:19 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e113 e113: 3 total, 3 up, 3 in
Oct  9 09:39:19 compute-1 python3.9[24507]: ansible-ansible.builtin.systemd Invoked with enabled=True name=tuned state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  9 09:39:19 compute-1 systemd[1]: Stopping Dynamic System Tuning Daemon...
Oct  9 09:39:19 compute-1 systemd[1]: tuned.service: Deactivated successfully.
Oct  9 09:39:19 compute-1 systemd[1]: Stopped Dynamic System Tuning Daemon.
Oct  9 09:39:19 compute-1 systemd[1]: tuned.service: Consumed 240ms CPU time, 19.1M memory peak, read 4.0M from disk, written 16.0K to disk.
Oct  9 09:39:19 compute-1 systemd[1]: Starting Dynamic System Tuning Daemon...
Oct  9 09:39:19 compute-1 systemd[1]: Started Dynamic System Tuning Daemon.
Oct  9 09:39:19 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:39:19 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000009s ======
Oct  9 09:39:19 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:39:19.444 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Oct  9 09:39:19 compute-1 python3.9[24669]: ansible-ansible.builtin.slurp Invoked with src=/proc/cmdline
Oct  9 09:39:20 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 11.1e deep-scrub starts
Oct  9 09:39:20 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 11.1e deep-scrub ok
Oct  9 09:39:20 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"}]': finished
Oct  9 09:39:20 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:39:20 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000009s ======
Oct  9 09:39:20 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:39:20.612 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Oct  9 09:39:20 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e113 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  9 09:39:21 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 9.12 scrub starts
Oct  9 09:39:21 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e114 e114: 3 total, 3 up, 3 in
Oct  9 09:39:21 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 9.12 scrub ok
Oct  9 09:39:21 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"}]: dispatch
Oct  9 09:39:21 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:39:21 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:39:21 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:39:21.446 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:39:22 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"}]': finished
Oct  9 09:39:22 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 8.12 deep-scrub starts
Oct  9 09:39:22 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 8.12 deep-scrub ok
Oct  9 09:39:22 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:39:22 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 09:39:22 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:39:22.615 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 09:39:22 compute-1 python3.9[24822]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksm.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  9 09:39:23 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"}]: dispatch
Oct  9 09:39:23 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e115 e115: 3 total, 3 up, 3 in
Oct  9 09:39:23 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 11.14 scrub starts
Oct  9 09:39:23 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 11.14 scrub ok
Oct  9 09:39:23 compute-1 python3.9[24976]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksmtuned.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  9 09:39:23 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:39:23 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000009s ======
Oct  9 09:39:23 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:39:23.449 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Oct  9 09:39:23 compute-1 systemd[1]: session-22.scope: Deactivated successfully.
Oct  9 09:39:23 compute-1 systemd[1]: session-22.scope: Consumed 47.163s CPU time.
Oct  9 09:39:23 compute-1 systemd-logind[798]: Session 22 logged out. Waiting for processes to exit.
Oct  9 09:39:23 compute-1 systemd-logind[798]: Removed session 22.
Oct  9 09:39:24 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 8.18 scrub starts
Oct  9 09:39:24 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 8.18 scrub ok
Oct  9 09:39:24 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"}]': finished
Oct  9 09:39:24 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:39:24 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:39:24 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:39:24.618 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:39:25 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 9.6 scrub starts
Oct  9 09:39:25 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 9.6 scrub ok
Oct  9 09:39:25 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"}]: dispatch
Oct  9 09:39:25 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e116 e116: 3 total, 3 up, 3 in
Oct  9 09:39:25 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 116 pg[10.19( v 40'1059 (0'0,40'1059] local-lis/les=79/80 n=5 ec=53/34 lis/c=79/79 les/c/f=80/80/0 sis=116 pruub=12.888633728s) [1] r=-1 lpr=116 pi=[79,116)/1 crt=40'1059 mlcod 0'0 active pruub 286.476196289s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 09:39:25 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 116 pg[10.19( v 40'1059 (0'0,40'1059] local-lis/les=79/80 n=5 ec=53/34 lis/c=79/79 les/c/f=80/80/0 sis=116 pruub=12.888607979s) [1] r=-1 lpr=116 pi=[79,116)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 286.476196289s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 09:39:25 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:39:25 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:39:25 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:39:25.453 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:39:25 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:39:26 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e117 e117: 3 total, 3 up, 3 in
Oct  9 09:39:26 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 117 pg[10.19( v 40'1059 (0'0,40'1059] local-lis/les=79/80 n=5 ec=53/34 lis/c=79/79 les/c/f=80/80/0 sis=117) [1]/[0] r=0 lpr=117 pi=[79,117)/1 crt=40'1059 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 09:39:26 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 117 pg[10.19( v 40'1059 (0'0,40'1059] local-lis/les=79/80 n=5 ec=53/34 lis/c=79/79 les/c/f=80/80/0 sis=117) [1]/[0] r=0 lpr=117 pi=[79,117)/1 crt=40'1059 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct  9 09:39:26 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 8.17 scrub starts
Oct  9 09:39:26 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 8.17 scrub ok
Oct  9 09:39:26 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"}]': finished
Oct  9 09:39:26 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:39:26 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:39:26 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:39:26.621 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:39:26 compute-1 systemd[1]: ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609@nfs.cephfs.0.0.compute-1.douegr.service: Scheduled restart job, restart counter is at 4.
Oct  9 09:39:26 compute-1 systemd[1]: Stopped Ceph nfs.cephfs.0.0.compute-1.douegr for 286f8bf0-da72-5823-9a4e-ac4457d9e609.
Oct  9 09:39:26 compute-1 systemd[1]: Starting Ceph nfs.cephfs.0.0.compute-1.douegr for 286f8bf0-da72-5823-9a4e-ac4457d9e609...
Oct  9 09:39:27 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e118 e118: 3 total, 3 up, 3 in
Oct  9 09:39:27 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 118 pg[10.19( v 40'1059 (0'0,40'1059] local-lis/les=117/118 n=5 ec=53/34 lis/c=79/79 les/c/f=80/80/0 sis=117) [1]/[0] async=[1] r=0 lpr=117 pi=[79,117)/1 crt=40'1059 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:39:27 compute-1 podman[25043]: 2025-10-09 09:39:27.124204401 +0000 UTC m=+0.027310940 container create a5e3b34a367621b8a1cb9a528a0439507bd5268852e1dadd9bb23099180a3d9d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr, CEPH_REF=squid, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct  9 09:39:27 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0ab757a1a3ce8061dfdc6015faefa57dc0a0913c0474d9fbdd1b038c8af3de70/merged/etc/ganesha supports timestamps until 2038 (0x7fffffff)
Oct  9 09:39:27 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0ab757a1a3ce8061dfdc6015faefa57dc0a0913c0474d9fbdd1b038c8af3de70/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  9 09:39:27 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0ab757a1a3ce8061dfdc6015faefa57dc0a0913c0474d9fbdd1b038c8af3de70/merged/etc/ceph/keyring supports timestamps until 2038 (0x7fffffff)
Oct  9 09:39:27 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0ab757a1a3ce8061dfdc6015faefa57dc0a0913c0474d9fbdd1b038c8af3de70/merged/var/lib/ceph/radosgw/ceph-nfs.cephfs.0.0.compute-1.douegr-rgw/keyring supports timestamps until 2038 (0x7fffffff)
Oct  9 09:39:27 compute-1 podman[25043]: 2025-10-09 09:39:27.164198052 +0000 UTC m=+0.067304611 container init a5e3b34a367621b8a1cb9a528a0439507bd5268852e1dadd9bb23099180a3d9d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  9 09:39:27 compute-1 podman[25043]: 2025-10-09 09:39:27.169678601 +0000 UTC m=+0.072785140 container start a5e3b34a367621b8a1cb9a528a0439507bd5268852e1dadd9bb23099180a3d9d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Oct  9 09:39:27 compute-1 bash[25043]: a5e3b34a367621b8a1cb9a528a0439507bd5268852e1dadd9bb23099180a3d9d
Oct  9 09:39:27 compute-1 podman[25043]: 2025-10-09 09:39:27.112963788 +0000 UTC m=+0.016070347 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  9 09:39:27 compute-1 systemd[1]: Started Ceph nfs.cephfs.0.0.compute-1.douegr for 286f8bf0-da72-5823-9a4e-ac4457d9e609.
Oct  9 09:39:27 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[25055]: 09/10/2025 09:39:27 : epoch 68e782cf : compute-1 : ganesha.nfsd-2[main] init_logging :LOG :NULL :LOG: Setting log level for all components to NIV_EVENT
Oct  9 09:39:27 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[25055]: 09/10/2025 09:39:27 : epoch 68e782cf : compute-1 : ganesha.nfsd-2[main] main :MAIN :EVENT :ganesha.nfsd Starting: Ganesha Version 5.9
Oct  9 09:39:27 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[25055]: 09/10/2025 09:39:27 : epoch 68e782cf : compute-1 : ganesha.nfsd-2[main] nfs_set_param_from_conf :NFS STARTUP :EVENT :Configuration file successfully parsed
Oct  9 09:39:27 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[25055]: 09/10/2025 09:39:27 : epoch 68e782cf : compute-1 : ganesha.nfsd-2[main] monitoring_init :NFS STARTUP :EVENT :Init monitoring at 0.0.0.0:9587
Oct  9 09:39:27 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[25055]: 09/10/2025 09:39:27 : epoch 68e782cf : compute-1 : ganesha.nfsd-2[main] fsal_init_fds_limit :MDCACHE LRU :EVENT :Setting the system-imposed limit on FDs to 1048576.
Oct  9 09:39:27 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[25055]: 09/10/2025 09:39:27 : epoch 68e782cf : compute-1 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :Initializing ID Mapper.
Oct  9 09:39:27 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[25055]: 09/10/2025 09:39:27 : epoch 68e782cf : compute-1 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :ID Mapper successfully initialized.
Oct  9 09:39:27 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[25055]: 09/10/2025 09:39:27 : epoch 68e782cf : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 09:39:27 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:39:27 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:39:27 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:39:27.455 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:39:28 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e119 e119: 3 total, 3 up, 3 in
Oct  9 09:39:28 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 119 pg[10.19( v 40'1059 (0'0,40'1059] local-lis/les=117/118 n=5 ec=53/34 lis/c=117/79 les/c/f=118/80/0 sis=119 pruub=14.996621132s) [1] async=[1] r=-1 lpr=119 pi=[79,119)/1 crt=40'1059 mlcod 40'1059 active pruub 291.480834961s@ mbc={255={}}] PeeringState::start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 09:39:28 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 119 pg[10.19( v 40'1059 (0'0,40'1059] local-lis/les=117/118 n=5 ec=53/34 lis/c=117/79 les/c/f=118/80/0 sis=119 pruub=14.996500015s) [1] r=-1 lpr=119 pi=[79,119)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 291.480834961s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 09:39:28 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:39:28 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:39:28 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:39:28.623 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:39:28 compute-1 systemd-logind[798]: New session 23 of user zuul.
Oct  9 09:39:28 compute-1 systemd[1]: Started Session 23 of User zuul.
Oct  9 09:39:29 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e120 e120: 3 total, 3 up, 3 in
Oct  9 09:39:29 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:39:29 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 09:39:29 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:39:29.457 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 09:39:29 compute-1 python3.9[25277]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  9 09:39:30 compute-1 python3.9[25433]: ansible-ansible.builtin.getent Invoked with database=passwd key=openvswitch fail_key=True service=None split=None
Oct  9 09:39:30 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:39:30 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:39:30 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:39:30 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:39:30.626 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:39:31 compute-1 python3.9[25586]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct  9 09:39:31 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:39:31 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000009s ======
Oct  9 09:39:31 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:39:31.459 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Oct  9 09:39:31 compute-1 python3.9[25671]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['openvswitch'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Oct  9 09:39:32 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 8.1b scrub starts
Oct  9 09:39:32 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 8.1b scrub ok
Oct  9 09:39:32 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:39:32 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:39:32 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:39:32.629 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:39:33 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 8.10 scrub starts
Oct  9 09:39:33 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 8.10 scrub ok
Oct  9 09:39:33 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"}]: dispatch
Oct  9 09:39:33 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e121 e121: 3 total, 3 up, 3 in
Oct  9 09:39:33 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[25055]: 09/10/2025 09:39:33 : epoch 68e782cf : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 09:39:33 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[25055]: 09/10/2025 09:39:33 : epoch 68e782cf : compute-1 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 09:39:33 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:39:33 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 09:39:33 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:39:33.461 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 09:39:33 compute-1 python3.9[25825]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct  9 09:39:34 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 9.f scrub starts
Oct  9 09:39:34 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 9.f scrub ok
Oct  9 09:39:34 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"}]': finished
Oct  9 09:39:34 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:39:34 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:39:34 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:39:34.631 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:39:35 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 6.e scrub starts
Oct  9 09:39:35 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 6.e scrub ok
Oct  9 09:39:35 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]: dispatch
Oct  9 09:39:35 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e122 e122: 3 total, 3 up, 3 in
Oct  9 09:39:35 compute-1 python3.9[25978]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Oct  9 09:39:35 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:39:35 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000009s ======
Oct  9 09:39:35 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:39:35.464 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Oct  9 09:39:35 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:39:35 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 122 pg[10.1b( v 40'1059 (0'0,40'1059] local-lis/les=84/85 n=5 ec=53/34 lis/c=84/84 les/c/f=85/85/0 sis=122 pruub=15.455636024s) [1] r=-1 lpr=122 pi=[84,122)/1 crt=40'1059 mlcod 0'0 active pruub 299.496215820s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 09:39:35 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 122 pg[10.1b( v 40'1059 (0'0,40'1059] local-lis/les=84/85 n=5 ec=53/34 lis/c=84/84 les/c/f=85/85/0 sis=122 pruub=15.455393791s) [1] r=-1 lpr=122 pi=[84,122)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 299.496215820s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 09:39:36 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e123 e123: 3 total, 3 up, 3 in
Oct  9 09:39:36 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 123 pg[10.1b( v 40'1059 (0'0,40'1059] local-lis/les=84/85 n=5 ec=53/34 lis/c=84/84 les/c/f=85/85/0 sis=123) [1]/[0] r=0 lpr=123 pi=[84,123)/1 crt=40'1059 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 09:39:36 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 123 pg[10.1b( v 40'1059 (0'0,40'1059] local-lis/les=84/85 n=5 ec=53/34 lis/c=84/84 les/c/f=85/85/0 sis=123) [1]/[0] r=0 lpr=123 pi=[84,123)/1 crt=40'1059 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct  9 09:39:36 compute-1 python3.9[26132]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  9 09:39:36 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 6.5 scrub starts
Oct  9 09:39:36 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 6.5 scrub ok
Oct  9 09:39:36 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]': finished
Oct  9 09:39:36 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:39:36 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:39:36 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:39:36.633 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:39:36 compute-1 python3.9[26284]: ansible-community.general.sefcontext Invoked with selevel=s0 setype=container_file_t state=present target=/var/lib/edpm-config(/.*)? ignore_selinux_state=False ftype=a reload=True substitute=None seuser=None
Oct  9 09:39:37 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e124 e124: 3 total, 3 up, 3 in
Oct  9 09:39:37 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 6.2 scrub starts
Oct  9 09:39:37 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 6.2 scrub ok
Oct  9 09:39:37 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"}]: dispatch
Oct  9 09:39:37 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"}]': finished
Oct  9 09:39:37 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:39:37 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:39:37 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:39:37.466 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:39:37 compute-1 python3.9[26435]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  9 09:39:38 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 124 pg[10.1b( v 40'1059 (0'0,40'1059] local-lis/les=123/124 n=5 ec=53/34 lis/c=84/84 les/c/f=85/85/0 sis=123) [1]/[0] async=[1] r=0 lpr=123 pi=[84,123)/1 crt=40'1059 mlcod 0'0 active+remapped mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:39:38 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 6.a scrub starts
Oct  9 09:39:38 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 6.a scrub ok
Oct  9 09:39:38 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e125 e125: 3 total, 3 up, 3 in
Oct  9 09:39:38 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 125 pg[10.1b( v 40'1059 (0'0,40'1059] local-lis/les=123/124 n=5 ec=53/34 lis/c=123/84 les/c/f=124/85/0 sis=125 pruub=15.730270386s) [1] async=[1] r=-1 lpr=125 pi=[84,125)/1 crt=40'1059 mlcod 40'1059 active pruub 302.409027100s@ mbc={255={}}] PeeringState::start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 09:39:38 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 125 pg[10.1b( v 40'1059 (0'0,40'1059] local-lis/les=123/124 n=5 ec=53/34 lis/c=123/84 les/c/f=124/85/0 sis=125 pruub=15.730219841s) [1] r=-1 lpr=125 pi=[84,125)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 302.409027100s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 09:39:38 compute-1 python3.9[26593]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct  9 09:39:38 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:39:38 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:39:38 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:39:38.635 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:39:39 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 6.3 scrub starts
Oct  9 09:39:39 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 6.3 scrub ok
Oct  9 09:39:39 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[25055]: 09/10/2025 09:39:39 : epoch 68e782cf : compute-1 : ganesha.nfsd-2[main] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Oct  9 09:39:39 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[25055]: 09/10/2025 09:39:39 : epoch 68e782cf : compute-1 : ganesha.nfsd-2[main] main :NFS STARTUP :WARN :No export entries found in configuration file !!!
Oct  9 09:39:39 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[25055]: 09/10/2025 09:39:39 : epoch 68e782cf : compute-1 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:24): Unknown block (RADOS_URLS)
Oct  9 09:39:39 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[25055]: 09/10/2025 09:39:39 : epoch 68e782cf : compute-1 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:29): Unknown block (RGW)
Oct  9 09:39:39 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[25055]: 09/10/2025 09:39:39 : epoch 68e782cf : compute-1 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :CAP_SYS_RESOURCE was successfully removed for proper quota management in FSAL
Oct  9 09:39:39 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[25055]: 09/10/2025 09:39:39 : epoch 68e782cf : compute-1 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :currently set capabilities are: cap_chown,cap_dac_override,cap_fowner,cap_fsetid,cap_kill,cap_setgid,cap_setuid,cap_setpcap,cap_net_bind_service,cap_sys_chroot,cap_setfcap=ep
Oct  9 09:39:39 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[25055]: 09/10/2025 09:39:39 : epoch 68e782cf : compute-1 : ganesha.nfsd-2[main] gsh_dbus_pkginit :DBUS :CRIT :dbus_bus_get failed (Failed to connect to socket /run/dbus/system_bus_socket: No such file or directory)
Oct  9 09:39:39 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[25055]: 09/10/2025 09:39:39 : epoch 68e782cf : compute-1 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Oct  9 09:39:39 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[25055]: 09/10/2025 09:39:39 : epoch 68e782cf : compute-1 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Oct  9 09:39:39 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[25055]: 09/10/2025 09:39:39 : epoch 68e782cf : compute-1 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Oct  9 09:39:39 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[25055]: 09/10/2025 09:39:39 : epoch 68e782cf : compute-1 : ganesha.nfsd-2[main] nfs_Init_svc :DISP :CRIT :Cannot acquire credentials for principal nfs
Oct  9 09:39:39 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[25055]: 09/10/2025 09:39:39 : epoch 68e782cf : compute-1 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Oct  9 09:39:39 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[25055]: 09/10/2025 09:39:39 : epoch 68e782cf : compute-1 : ganesha.nfsd-2[main] nfs_Init_admin_thread :NFS CB :EVENT :Admin thread initialized
Oct  9 09:39:39 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[25055]: 09/10/2025 09:39:39 : epoch 68e782cf : compute-1 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :EVENT :Callback creds directory (/var/run/ganesha) already exists
Oct  9 09:39:39 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[25055]: 09/10/2025 09:39:39 : epoch 68e782cf : compute-1 : ganesha.nfsd-2[main] find_keytab_entry :NFS CB :WARN :Configuration file does not specify default realm while getting default realm name
Oct  9 09:39:39 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[25055]: 09/10/2025 09:39:39 : epoch 68e782cf : compute-1 : ganesha.nfsd-2[main] gssd_refresh_krb5_machine_credential :NFS CB :CRIT :ERROR: gssd_refresh_krb5_machine_credential: no usable keytab entry found in keytab /etc/krb5.keytab for connection with host localhost
Oct  9 09:39:39 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[25055]: 09/10/2025 09:39:39 : epoch 68e782cf : compute-1 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :WARN :gssd_refresh_krb5_machine_credential failed (-1765328160:0)
Oct  9 09:39:39 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[25055]: 09/10/2025 09:39:39 : epoch 68e782cf : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :Starting delayed executor.
Oct  9 09:39:39 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[25055]: 09/10/2025 09:39:39 : epoch 68e782cf : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :gsh_dbusthread was started successfully
Oct  9 09:39:39 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[25055]: 09/10/2025 09:39:39 : epoch 68e782cf : compute-1 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :CRIT :DBUS not initialized, service thread exiting
Oct  9 09:39:39 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[25055]: 09/10/2025 09:39:39 : epoch 68e782cf : compute-1 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :EVENT :shutdown
Oct  9 09:39:39 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[25055]: 09/10/2025 09:39:39 : epoch 68e782cf : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :admin thread was started successfully
Oct  9 09:39:39 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[25055]: 09/10/2025 09:39:39 : epoch 68e782cf : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :reaper thread was started successfully
Oct  9 09:39:39 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[25055]: 09/10/2025 09:39:39 : epoch 68e782cf : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :General fridge was started successfully
Oct  9 09:39:39 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[25055]: 09/10/2025 09:39:39 : epoch 68e782cf : compute-1 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Oct  9 09:39:39 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[25055]: 09/10/2025 09:39:39 : epoch 68e782cf : compute-1 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :             NFS SERVER INITIALIZED
Oct  9 09:39:39 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[25055]: 09/10/2025 09:39:39 : epoch 68e782cf : compute-1 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Oct  9 09:39:39 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e126 e126: 3 total, 3 up, 3 in
Oct  9 09:39:39 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"}]: dispatch
Oct  9 09:39:39 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:39:39 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:39:39 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:39:39.469 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:39:39 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[25055]: 09/10/2025 09:39:39 : epoch 68e782cf : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc3c8000df0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:39:39 compute-1 python3.9[26761]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  9 09:39:40 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 10.1a scrub starts
Oct  9 09:39:40 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 10.1a scrub ok
Oct  9 09:39:40 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[25055]: 09/10/2025 09:39:40 : epoch 68e782cf : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc3bc001c00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:39:40 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"}]': finished
Oct  9 09:39:40 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:39:40 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:39:40 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:39:40 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:39:40.638 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:39:40 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[25055]: 09/10/2025 09:39:40 : epoch 68e782cf : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc3bc002720 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:39:41 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 10.1d scrub starts
Oct  9 09:39:41 compute-1 python3.9[27048]: ansible-ansible.builtin.file Invoked with mode=0750 path=/var/lib/edpm-config selevel=s0 setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Oct  9 09:39:41 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 10.1d scrub ok
Oct  9 09:39:41 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"}]: dispatch
Oct  9 09:39:41 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e127 e127: 3 total, 3 up, 3 in
Oct  9 09:39:41 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:39:41 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:39:41 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:39:41.471 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:39:41 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 127 pg[10.1e( v 40'1059 (0'0,40'1059] local-lis/les=70/71 n=5 ec=53/34 lis/c=70/70 les/c/f=71/71/0 sis=127 pruub=10.538806915s) [2] r=-1 lpr=127 pi=[70,127)/1 crt=40'1059 mlcod 0'0 active pruub 300.480133057s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 09:39:41 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 127 pg[10.1e( v 40'1059 (0'0,40'1059] local-lis/les=70/71 n=5 ec=53/34 lis/c=70/70 les/c/f=71/71/0 sis=127 pruub=10.538764954s) [2] r=-1 lpr=127 pi=[70,127)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 300.480133057s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 09:39:41 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-haproxy-nfs-cephfs-compute-1-oqhtjo[16451]: [WARNING] 281/093941 (4) : Server backend/nfs.cephfs.0 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Oct  9 09:39:41 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[25055]: 09/10/2025 09:39:41 : epoch 68e782cf : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc3b80016e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:39:41 compute-1 python3.9[27199]: ansible-ansible.builtin.stat Invoked with path=/etc/cloud/cloud.cfg.d follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  9 09:39:42 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 10.9 scrub starts
Oct  9 09:39:42 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 10.9 scrub ok
Oct  9 09:39:42 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[25055]: 09/10/2025 09:39:42 : epoch 68e782cf : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc3bc003060 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:39:42 compute-1 python3.9[27353]: ansible-ansible.legacy.dnf Invoked with name=['NetworkManager-ovs'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct  9 09:39:42 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"}]': finished
Oct  9 09:39:42 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e128 e128: 3 total, 3 up, 3 in
Oct  9 09:39:42 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 128 pg[10.1e( v 40'1059 (0'0,40'1059] local-lis/les=70/71 n=5 ec=53/34 lis/c=70/70 les/c/f=71/71/0 sis=128) [2]/[0] r=0 lpr=128 pi=[70,128)/1 crt=40'1059 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 09:39:42 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 128 pg[10.1e( v 40'1059 (0'0,40'1059] local-lis/les=70/71 n=5 ec=53/34 lis/c=70/70 les/c/f=71/71/0 sis=128) [2]/[0] r=0 lpr=128 pi=[70,128)/1 crt=40'1059 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct  9 09:39:42 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:39:42 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:39:42 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:39:42.640 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:39:42 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[25055]: 09/10/2025 09:39:42 : epoch 68e782cf : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc3bc003060 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:39:43 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 10.c scrub starts
Oct  9 09:39:43 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 10.c scrub ok
Oct  9 09:39:43 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"}]: dispatch
Oct  9 09:39:43 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e129 e129: 3 total, 3 up, 3 in
Oct  9 09:39:43 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:39:43 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 09:39:43 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:39:43.473 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 09:39:43 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[25055]: 09/10/2025 09:39:43 : epoch 68e782cf : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc3bc003060 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:39:43 compute-1 python3.9[27507]: ansible-ansible.legacy.dnf Invoked with name=['os-net-config'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct  9 09:39:44 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 10.6 deep-scrub starts
Oct  9 09:39:44 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 10.6 deep-scrub ok
Oct  9 09:39:44 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 129 pg[10.1f( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=97/97 les/c/f=98/98/0 sis=129) [0] r=0 lpr=129 pi=[97,129)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:39:44 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 129 pg[10.1e( v 40'1059 (0'0,40'1059] local-lis/les=128/129 n=5 ec=53/34 lis/c=70/70 les/c/f=71/71/0 sis=128) [2]/[0] async=[2] r=0 lpr=128 pi=[70,128)/1 crt=40'1059 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:39:44 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[25055]: 09/10/2025 09:39:44 : epoch 68e782cf : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc3b80021e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:39:44 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"}]': finished
Oct  9 09:39:44 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e130 e130: 3 total, 3 up, 3 in
Oct  9 09:39:44 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 130 pg[10.1e( v 40'1059 (0'0,40'1059] local-lis/les=128/129 n=5 ec=53/34 lis/c=128/70 les/c/f=129/71/0 sis=130 pruub=15.899759293s) [2] async=[2] r=-1 lpr=130 pi=[70,130)/1 crt=40'1059 mlcod 40'1059 active pruub 308.632507324s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 09:39:44 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 130 pg[10.1f( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=97/97 les/c/f=98/98/0 sis=130) [0]/[2] r=-1 lpr=130 pi=[97,130)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 09:39:44 compute-1 ceph-mon[9795]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #19. Immutable memtables: 0.
Oct  9 09:39:44 compute-1 ceph-mon[9795]: rocksdb: (Original Log Time 2025/10/09-09:39:44.334880) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct  9 09:39:44 compute-1 ceph-mon[9795]: rocksdb: [db/flush_job.cc:856] [default] [JOB 7] Flushing memtable with next log file: 19
Oct  9 09:39:44 compute-1 ceph-mon[9795]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760002784334901, "job": 7, "event": "flush_started", "num_memtables": 1, "num_entries": 3407, "num_deletes": 251, "total_data_size": 7305240, "memory_usage": 7417344, "flush_reason": "Manual Compaction"}
Oct  9 09:39:44 compute-1 ceph-mon[9795]: rocksdb: [db/flush_job.cc:885] [default] [JOB 7] Level-0 flush table #20: started
Oct  9 09:39:44 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 130 pg[10.1f( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=97/97 les/c/f=98/98/0 sis=130) [0]/[2] r=-1 lpr=130 pi=[97,130)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct  9 09:39:44 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 130 pg[10.1e( v 40'1059 (0'0,40'1059] local-lis/les=128/129 n=5 ec=53/34 lis/c=128/70 les/c/f=129/71/0 sis=130 pruub=15.895454407s) [2] r=-1 lpr=130 pi=[70,130)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 308.632507324s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 09:39:44 compute-1 ceph-mon[9795]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760002784345208, "cf_name": "default", "job": 7, "event": "table_file_creation", "file_number": 20, "file_size": 4787675, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 7168, "largest_seqno": 10570, "table_properties": {"data_size": 4771843, "index_size": 10214, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 4549, "raw_key_size": 42141, "raw_average_key_size": 23, "raw_value_size": 4736781, "raw_average_value_size": 2625, "num_data_blocks": 444, "num_entries": 1804, "num_filter_entries": 1804, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760002664, "oldest_key_time": 1760002664, "file_creation_time": 1760002784, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "94a5d839-0858-4e7b-94a4-0a54b15338db", "db_session_id": "M9CZJU0HKVV71NP1SGV8", "orig_file_number": 20, "seqno_to_time_mapping": "N/A"}}
Oct  9 09:39:44 compute-1 ceph-mon[9795]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 7] Flush lasted 10356 microseconds, and 7658 cpu microseconds.
Oct  9 09:39:44 compute-1 ceph-mon[9795]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  9 09:39:44 compute-1 ceph-mon[9795]: rocksdb: (Original Log Time 2025/10/09-09:39:44.345235) [db/flush_job.cc:967] [default] [JOB 7] Level-0 flush table #20: 4787675 bytes OK
Oct  9 09:39:44 compute-1 ceph-mon[9795]: rocksdb: (Original Log Time 2025/10/09-09:39:44.345248) [db/memtable_list.cc:519] [default] Level-0 commit table #20 started
Oct  9 09:39:44 compute-1 ceph-mon[9795]: rocksdb: (Original Log Time 2025/10/09-09:39:44.345828) [db/memtable_list.cc:722] [default] Level-0 commit table #20: memtable #1 done
Oct  9 09:39:44 compute-1 ceph-mon[9795]: rocksdb: (Original Log Time 2025/10/09-09:39:44.345842) EVENT_LOG_v1 {"time_micros": 1760002784345840, "job": 7, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct  9 09:39:44 compute-1 ceph-mon[9795]: rocksdb: (Original Log Time 2025/10/09-09:39:44.345851) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct  9 09:39:44 compute-1 ceph-mon[9795]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 7] Try to delete WAL files size 7288352, prev total WAL file size 7288352, number of live WAL files 2.
Oct  9 09:39:44 compute-1 ceph-mon[9795]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000016.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  9 09:39:44 compute-1 ceph-mon[9795]: rocksdb: (Original Log Time 2025/10/09-09:39:44.346859) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F7300323531' seq:72057594037927935, type:22 .. '7061786F7300353033' seq:0, type:0; will stop at (end)
Oct  9 09:39:44 compute-1 ceph-mon[9795]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 8] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct  9 09:39:44 compute-1 ceph-mon[9795]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 7 Base level 0, inputs: [20(4675KB)], [18(12MB)]
Oct  9 09:39:44 compute-1 ceph-mon[9795]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760002784346875, "job": 8, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [20], "files_L6": [18], "score": -1, "input_data_size": 18169968, "oldest_snapshot_seqno": -1}
Oct  9 09:39:44 compute-1 ceph-mon[9795]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 8] Generated table #21: 3979 keys, 14451848 bytes, temperature: kUnknown
Oct  9 09:39:44 compute-1 ceph-mon[9795]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760002784379015, "cf_name": "default", "job": 8, "event": "table_file_creation", "file_number": 21, "file_size": 14451848, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 14419140, "index_size": 21654, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 9989, "raw_key_size": 101480, "raw_average_key_size": 25, "raw_value_size": 14340151, "raw_average_value_size": 3603, "num_data_blocks": 936, "num_entries": 3979, "num_filter_entries": 3979, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760002515, "oldest_key_time": 0, "file_creation_time": 1760002784, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "94a5d839-0858-4e7b-94a4-0a54b15338db", "db_session_id": "M9CZJU0HKVV71NP1SGV8", "orig_file_number": 21, "seqno_to_time_mapping": "N/A"}}
Oct  9 09:39:44 compute-1 ceph-mon[9795]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  9 09:39:44 compute-1 ceph-mon[9795]: rocksdb: (Original Log Time 2025/10/09-09:39:44.379155) [db/compaction/compaction_job.cc:1663] [default] [JOB 8] Compacted 1@0 + 1@6 files to L6 => 14451848 bytes
Oct  9 09:39:44 compute-1 ceph-mon[9795]: rocksdb: (Original Log Time 2025/10/09-09:39:44.379535) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 564.5 rd, 449.0 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(4.6, 12.8 +0.0 blob) out(13.8 +0.0 blob), read-write-amplify(6.8) write-amplify(3.0) OK, records in: 4503, records dropped: 524 output_compression: NoCompression
Oct  9 09:39:44 compute-1 ceph-mon[9795]: rocksdb: (Original Log Time 2025/10/09-09:39:44.379549) EVENT_LOG_v1 {"time_micros": 1760002784379542, "job": 8, "event": "compaction_finished", "compaction_time_micros": 32187, "compaction_time_cpu_micros": 22593, "output_level": 6, "num_output_files": 1, "total_output_size": 14451848, "num_input_records": 4503, "num_output_records": 3979, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct  9 09:39:44 compute-1 ceph-mon[9795]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000020.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  9 09:39:44 compute-1 ceph-mon[9795]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760002784380100, "job": 8, "event": "table_file_deletion", "file_number": 20}
Oct  9 09:39:44 compute-1 ceph-mon[9795]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000018.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  9 09:39:44 compute-1 ceph-mon[9795]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760002784381486, "job": 8, "event": "table_file_deletion", "file_number": 18}
Oct  9 09:39:44 compute-1 ceph-mon[9795]: rocksdb: (Original Log Time 2025/10/09-09:39:44.346820) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  9 09:39:44 compute-1 ceph-mon[9795]: rocksdb: (Original Log Time 2025/10/09-09:39:44.381508) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  9 09:39:44 compute-1 ceph-mon[9795]: rocksdb: (Original Log Time 2025/10/09-09:39:44.381511) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  9 09:39:44 compute-1 ceph-mon[9795]: rocksdb: (Original Log Time 2025/10/09-09:39:44.381512) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  9 09:39:44 compute-1 ceph-mon[9795]: rocksdb: (Original Log Time 2025/10/09-09:39:44.381513) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  9 09:39:44 compute-1 ceph-mon[9795]: rocksdb: (Original Log Time 2025/10/09-09:39:44.381514) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  9 09:39:44 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:39:44 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000009s ======
Oct  9 09:39:44 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:39:44.642 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Oct  9 09:39:44 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[25055]: 09/10/2025 09:39:44 : epoch 68e782cf : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc3bc003060 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:39:45 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 10.a scrub starts
Oct  9 09:39:45 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 10.a scrub ok
Oct  9 09:39:45 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e131 e131: 3 total, 3 up, 3 in
Oct  9 09:39:45 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:39:45 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000009s ======
Oct  9 09:39:45 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:39:45.476 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Oct  9 09:39:45 compute-1 python3.9[27661]: ansible-ansible.builtin.stat Invoked with path=/var/lib/edpm-config/os-net-config.returncode follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  9 09:39:45 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e131 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:39:45 compute-1 kernel: ganesha.nfsd[26598]: segfault at 50 ip 00007fc474e1632e sp 00007fc43b7fd210 error 4 in libntirpc.so.5.8[7fc474dfb000+2c000] likely on CPU 0 (core 0, socket 0)
Oct  9 09:39:45 compute-1 kernel: Code: 47 20 66 41 89 86 f2 00 00 00 41 bf 01 00 00 00 b9 40 00 00 00 e9 af fd ff ff 66 90 48 8b 85 f8 00 00 00 48 8b 40 08 4c 8b 28 <45> 8b 65 50 49 8b 75 68 41 8b be 28 02 00 00 b9 40 00 00 00 e8 29
Oct  9 09:39:45 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[25055]: 09/10/2025 09:39:45 : epoch 68e782cf : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc3bc003060 fd 38 proxy ignored for local
Oct  9 09:39:45 compute-1 systemd[1]: Started Process Core Dump (PID 27688/UID 0).
Oct  9 09:39:45 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 10.0 scrub starts
Oct  9 09:39:46 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 10.0 scrub ok
Oct  9 09:39:46 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e132 e132: 3 total, 3 up, 3 in
Oct  9 09:39:46 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 132 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=130/97 les/c/f=131/98/0 sis=132) [0] r=0 lpr=132 pi=[97,132)/1 luod=0'0 crt=40'1059 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 09:39:46 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 132 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=130/97 les/c/f=131/98/0 sis=132) [0] r=0 lpr=132 pi=[97,132)/1 crt=40'1059 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:39:46 compute-1 python3.9[27817]: ansible-ansible.builtin.slurp Invoked with path=/var/lib/edpm-config/os-net-config.returncode src=/var/lib/edpm-config/os-net-config.returncode
Oct  9 09:39:46 compute-1 systemd-coredump[27689]: Process 25059 (ganesha.nfsd) of user 0 dumped core.#012#012Stack trace of thread 43:#012#0  0x00007fc474e1632e n/a (/usr/lib64/libntirpc.so.5.8 + 0x2232e)#012ELF object binary architecture: AMD x86-64
Oct  9 09:39:46 compute-1 systemd[1]: systemd-coredump@4-27688-0.service: Deactivated successfully.
Oct  9 09:39:46 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:39:46 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:39:46 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:39:46.645 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:39:46 compute-1 podman[27849]: 2025-10-09 09:39:46.664279495 +0000 UTC m=+0.023649252 container died a5e3b34a367621b8a1cb9a528a0439507bd5268852e1dadd9bb23099180a3d9d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  9 09:39:46 compute-1 systemd[1]: var-lib-containers-storage-overlay-0ab757a1a3ce8061dfdc6015faefa57dc0a0913c0474d9fbdd1b038c8af3de70-merged.mount: Deactivated successfully.
Oct  9 09:39:46 compute-1 podman[27849]: 2025-10-09 09:39:46.685734037 +0000 UTC m=+0.045103794 container remove a5e3b34a367621b8a1cb9a528a0439507bd5268852e1dadd9bb23099180a3d9d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Oct  9 09:39:46 compute-1 systemd[1]: ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609@nfs.cephfs.0.0.compute-1.douegr.service: Main process exited, code=exited, status=139/n/a
Oct  9 09:39:46 compute-1 systemd[1]: ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609@nfs.cephfs.0.0.compute-1.douegr.service: Failed with result 'exit-code'.
Oct  9 09:39:46 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 10.d scrub starts
Oct  9 09:39:46 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 10.d scrub ok
Oct  9 09:39:47 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e133 e133: 3 total, 3 up, 3 in
Oct  9 09:39:47 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 133 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=132/133 n=5 ec=53/34 lis/c=130/97 les/c/f=131/98/0 sis=132) [0] r=0 lpr=132 pi=[97,132)/1 crt=40'1059 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:39:47 compute-1 systemd[1]: session-23.scope: Deactivated successfully.
Oct  9 09:39:47 compute-1 systemd[1]: session-23.scope: Consumed 13.106s CPU time.
Oct  9 09:39:47 compute-1 systemd-logind[798]: Session 23 logged out. Waiting for processes to exit.
Oct  9 09:39:47 compute-1 systemd-logind[798]: Removed session 23.
Oct  9 09:39:47 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:39:47 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000011s ======
Oct  9 09:39:47 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:39:47.478 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct  9 09:39:47 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 10.b scrub starts
Oct  9 09:39:47 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 10.b scrub ok
Oct  9 09:39:48 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:39:48 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:39:48 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:39:48.647 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:39:49 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:39:49 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000011s ======
Oct  9 09:39:49 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:39:49.481 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct  9 09:39:50 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:39:50 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:39:50 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:39:50 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:39:50.649 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:39:51 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:39:51 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:39:51 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:39:51.484 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:39:51 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-haproxy-nfs-cephfs-compute-1-oqhtjo[16451]: [WARNING] 281/093951 (4) : Server backend/nfs.cephfs.0 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Oct  9 09:39:52 compute-1 systemd-logind[798]: New session 24 of user zuul.
Oct  9 09:39:52 compute-1 systemd[1]: Started Session 24 of User zuul.
Oct  9 09:39:52 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:39:52 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:39:52 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:39:52.652 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:39:52 compute-1 python3.9[28141]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  9 09:39:53 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:39:53 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:39:53 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  9 09:39:53 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:39:53 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:39:53 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  9 09:39:53 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:39:53 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000011s ======
Oct  9 09:39:53 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:39:53.485 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct  9 09:39:53 compute-1 python3.9[28296]: ansible-ansible.builtin.setup Invoked with filter=['ansible_default_ipv4'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct  9 09:39:54 compute-1 python3.9[28489]: ansible-ansible.legacy.command Invoked with _raw_params=hostname -f _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  9 09:39:54 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:39:54 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:39:54 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:39:54.653 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:39:55 compute-1 systemd[1]: session-24.scope: Deactivated successfully.
Oct  9 09:39:55 compute-1 systemd[1]: session-24.scope: Consumed 1.686s CPU time.
Oct  9 09:39:55 compute-1 systemd-logind[798]: Session 24 logged out. Waiting for processes to exit.
Oct  9 09:39:55 compute-1 systemd-logind[798]: Removed session 24.
Oct  9 09:39:55 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:39:55 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:39:55 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:39:55.489 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:39:55 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:39:56 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:39:56 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:39:56 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:39:56.655 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:39:56 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:39:56 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:39:56 compute-1 systemd[1]: ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609@nfs.cephfs.0.0.compute-1.douegr.service: Scheduled restart job, restart counter is at 5.
Oct  9 09:39:56 compute-1 systemd[1]: Stopped Ceph nfs.cephfs.0.0.compute-1.douegr for 286f8bf0-da72-5823-9a4e-ac4457d9e609.
Oct  9 09:39:56 compute-1 systemd[1]: Starting Ceph nfs.cephfs.0.0.compute-1.douegr for 286f8bf0-da72-5823-9a4e-ac4457d9e609...
Oct  9 09:39:57 compute-1 podman[28580]: 2025-10-09 09:39:57.118789565 +0000 UTC m=+0.026235590 container create a4769768d3029ab5da797173bc20fc04ecb1b13fe2b03f5aa021ce964d7fc399 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325)
Oct  9 09:39:57 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a5a7d345292b2823e940a0ef2c8f05233f069a14a6edb83820712d6607dd684d/merged/etc/ganesha supports timestamps until 2038 (0x7fffffff)
Oct  9 09:39:57 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a5a7d345292b2823e940a0ef2c8f05233f069a14a6edb83820712d6607dd684d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  9 09:39:57 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a5a7d345292b2823e940a0ef2c8f05233f069a14a6edb83820712d6607dd684d/merged/etc/ceph/keyring supports timestamps until 2038 (0x7fffffff)
Oct  9 09:39:57 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a5a7d345292b2823e940a0ef2c8f05233f069a14a6edb83820712d6607dd684d/merged/var/lib/ceph/radosgw/ceph-nfs.cephfs.0.0.compute-1.douegr-rgw/keyring supports timestamps until 2038 (0x7fffffff)
Oct  9 09:39:57 compute-1 podman[28580]: 2025-10-09 09:39:57.166496627 +0000 UTC m=+0.073942673 container init a4769768d3029ab5da797173bc20fc04ecb1b13fe2b03f5aa021ce964d7fc399 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325)
Oct  9 09:39:57 compute-1 podman[28580]: 2025-10-09 09:39:57.170469821 +0000 UTC m=+0.077915846 container start a4769768d3029ab5da797173bc20fc04ecb1b13fe2b03f5aa021ce964d7fc399 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Oct  9 09:39:57 compute-1 bash[28580]: a4769768d3029ab5da797173bc20fc04ecb1b13fe2b03f5aa021ce964d7fc399
Oct  9 09:39:57 compute-1 podman[28580]: 2025-10-09 09:39:57.107880822 +0000 UTC m=+0.015326858 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  9 09:39:57 compute-1 systemd[1]: Started Ceph nfs.cephfs.0.0.compute-1.douegr for 286f8bf0-da72-5823-9a4e-ac4457d9e609.
Oct  9 09:39:57 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[28592]: 09/10/2025 09:39:57 : epoch 68e782ed : compute-1 : ganesha.nfsd-2[main] init_logging :LOG :NULL :LOG: Setting log level for all components to NIV_EVENT
Oct  9 09:39:57 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[28592]: 09/10/2025 09:39:57 : epoch 68e782ed : compute-1 : ganesha.nfsd-2[main] main :MAIN :EVENT :ganesha.nfsd Starting: Ganesha Version 5.9
Oct  9 09:39:57 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[28592]: 09/10/2025 09:39:57 : epoch 68e782ed : compute-1 : ganesha.nfsd-2[main] nfs_set_param_from_conf :NFS STARTUP :EVENT :Configuration file successfully parsed
Oct  9 09:39:57 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[28592]: 09/10/2025 09:39:57 : epoch 68e782ed : compute-1 : ganesha.nfsd-2[main] monitoring_init :NFS STARTUP :EVENT :Init monitoring at 0.0.0.0:9587
Oct  9 09:39:57 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[28592]: 09/10/2025 09:39:57 : epoch 68e782ed : compute-1 : ganesha.nfsd-2[main] fsal_init_fds_limit :MDCACHE LRU :EVENT :Setting the system-imposed limit on FDs to 1048576.
Oct  9 09:39:57 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[28592]: 09/10/2025 09:39:57 : epoch 68e782ed : compute-1 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :Initializing ID Mapper.
Oct  9 09:39:57 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[28592]: 09/10/2025 09:39:57 : epoch 68e782ed : compute-1 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :ID Mapper successfully initialized.
Oct  9 09:39:57 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[28592]: 09/10/2025 09:39:57 : epoch 68e782ed : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 09:39:57 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:39:57 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:39:57 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:39:57.491 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:39:58 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:39:58 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:39:58 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:39:58.659 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:39:59 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:39:59 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:39:59 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:39:59.494 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:40:00 compute-1 systemd[1]: Starting system activity accounting tool...
Oct  9 09:40:00 compute-1 systemd[1]: sysstat-collect.service: Deactivated successfully.
Oct  9 09:40:00 compute-1 systemd[1]: Finished system activity accounting tool.
Oct  9 09:40:00 compute-1 systemd-logind[798]: New session 25 of user zuul.
Oct  9 09:40:00 compute-1 systemd[1]: Started Session 25 of User zuul.
Oct  9 09:40:00 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:40:00 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:40:00 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:40:00 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:40:00.660 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:40:00 compute-1 ceph-mon[9795]: overall HEALTH_OK
Oct  9 09:40:01 compute-1 python3.9[28790]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  9 09:40:01 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:40:01 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:40:01 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:40:01.496 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:40:01 compute-1 python3.9[28945]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  9 09:40:02 compute-1 python3.9[29101]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct  9 09:40:02 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:40:02 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:40:02 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:40:02.662 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:40:03 compute-1 python3.9[29185]: ansible-ansible.legacy.dnf Invoked with name=['podman'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct  9 09:40:03 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[28592]: 09/10/2025 09:40:03 : epoch 68e782ed : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 09:40:03 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[28592]: 09/10/2025 09:40:03 : epoch 68e782ed : compute-1 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 09:40:03 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:40:03 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:40:03 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:40:03.498 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:40:04 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:40:04 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:40:04 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:40:04.664 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:40:04 compute-1 python3.9[29339]: ansible-ansible.builtin.setup Invoked with filter=['ansible_interfaces'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct  9 09:40:05 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:40:05 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000011s ======
Oct  9 09:40:05 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:40:05.499 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct  9 09:40:05 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:40:05 compute-1 python3.9[29535]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/containers/networks recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:40:06 compute-1 python3.9[29687]: ansible-ansible.legacy.command Invoked with _raw_params=podman network inspect podman#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  9 09:40:06 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:40:06 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000011s ======
Oct  9 09:40:06 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:40:06.666 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct  9 09:40:07 compute-1 python3.9[29850]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/networks/podman.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  9 09:40:07 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:40:07 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:40:07 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:40:07.501 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:40:07 compute-1 python3.9[29929]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/containers/networks/podman.json _original_basename=podman_network_config.j2 recurse=False state=file path=/etc/containers/networks/podman.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:40:08 compute-1 python3.9[30081]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  9 09:40:08 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-haproxy-nfs-cephfs-compute-1-oqhtjo[16451]: [WARNING] 281/094008 (4) : Server backend/nfs.cephfs.1 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 1 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Oct  9 09:40:08 compute-1 python3.9[30159]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root setype=etc_t dest=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf _original_basename=registries.conf.j2 recurse=False state=file path=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  9 09:40:08 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:40:08 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:40:08 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:40:08.669 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:40:09 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-haproxy-nfs-cephfs-compute-1-oqhtjo[16451]: [WARNING] 281/094009 (4) : Server backend/nfs.cephfs.2 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 0 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Oct  9 09:40:09 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-haproxy-nfs-cephfs-compute-1-oqhtjo[16451]: [NOTICE] 281/094009 (4) : haproxy version is 2.3.17-d1c9119
Oct  9 09:40:09 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-haproxy-nfs-cephfs-compute-1-oqhtjo[16451]: [NOTICE] 281/094009 (4) : path to executable is /usr/local/sbin/haproxy
Oct  9 09:40:09 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-haproxy-nfs-cephfs-compute-1-oqhtjo[16451]: [ALERT] 281/094009 (4) : backend 'backend' has no server available!
Oct  9 09:40:09 compute-1 python3.9[30336]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=pids_limit owner=root path=/etc/containers/containers.conf section=containers setype=etc_t value=4096 backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Oct  9 09:40:09 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[28592]: 09/10/2025 09:40:09 : epoch 68e782ed : compute-1 : ganesha.nfsd-2[main] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Oct  9 09:40:09 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[28592]: 09/10/2025 09:40:09 : epoch 68e782ed : compute-1 : ganesha.nfsd-2[main] main :NFS STARTUP :WARN :No export entries found in configuration file !!!
Oct  9 09:40:09 compute-1 rsyslogd[1241]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct  9 09:40:09 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[28592]: 09/10/2025 09:40:09 : epoch 68e782ed : compute-1 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:24): Unknown block (RADOS_URLS)
Oct  9 09:40:09 compute-1 rsyslogd[1241]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct  9 09:40:09 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[28592]: 09/10/2025 09:40:09 : epoch 68e782ed : compute-1 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:29): Unknown block (RGW)
Oct  9 09:40:09 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[28592]: 09/10/2025 09:40:09 : epoch 68e782ed : compute-1 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :CAP_SYS_RESOURCE was successfully removed for proper quota management in FSAL
Oct  9 09:40:09 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[28592]: 09/10/2025 09:40:09 : epoch 68e782ed : compute-1 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :currently set capabilities are: cap_chown,cap_dac_override,cap_fowner,cap_fsetid,cap_kill,cap_setgid,cap_setuid,cap_setpcap,cap_net_bind_service,cap_sys_chroot,cap_setfcap=ep
Oct  9 09:40:09 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[28592]: 09/10/2025 09:40:09 : epoch 68e782ed : compute-1 : ganesha.nfsd-2[main] gsh_dbus_pkginit :DBUS :CRIT :dbus_bus_get failed (Failed to connect to socket /run/dbus/system_bus_socket: No such file or directory)
Oct  9 09:40:09 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[28592]: 09/10/2025 09:40:09 : epoch 68e782ed : compute-1 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Oct  9 09:40:09 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[28592]: 09/10/2025 09:40:09 : epoch 68e782ed : compute-1 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Oct  9 09:40:09 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[28592]: 09/10/2025 09:40:09 : epoch 68e782ed : compute-1 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Oct  9 09:40:09 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[28592]: 09/10/2025 09:40:09 : epoch 68e782ed : compute-1 : ganesha.nfsd-2[main] nfs_Init_svc :DISP :CRIT :Cannot acquire credentials for principal nfs
Oct  9 09:40:09 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[28592]: 09/10/2025 09:40:09 : epoch 68e782ed : compute-1 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Oct  9 09:40:09 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[28592]: 09/10/2025 09:40:09 : epoch 68e782ed : compute-1 : ganesha.nfsd-2[main] nfs_Init_admin_thread :NFS CB :EVENT :Admin thread initialized
Oct  9 09:40:09 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[28592]: 09/10/2025 09:40:09 : epoch 68e782ed : compute-1 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :EVENT :Callback creds directory (/var/run/ganesha) already exists
Oct  9 09:40:09 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[28592]: 09/10/2025 09:40:09 : epoch 68e782ed : compute-1 : ganesha.nfsd-2[main] find_keytab_entry :NFS CB :WARN :Configuration file does not specify default realm while getting default realm name
Oct  9 09:40:09 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[28592]: 09/10/2025 09:40:09 : epoch 68e782ed : compute-1 : ganesha.nfsd-2[main] gssd_refresh_krb5_machine_credential :NFS CB :CRIT :ERROR: gssd_refresh_krb5_machine_credential: no usable keytab entry found in keytab /etc/krb5.keytab for connection with host localhost
Oct  9 09:40:09 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[28592]: 09/10/2025 09:40:09 : epoch 68e782ed : compute-1 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :WARN :gssd_refresh_krb5_machine_credential failed (-1765328160:0)
Oct  9 09:40:09 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[28592]: 09/10/2025 09:40:09 : epoch 68e782ed : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :Starting delayed executor.
Oct  9 09:40:09 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[28592]: 09/10/2025 09:40:09 : epoch 68e782ed : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :gsh_dbusthread was started successfully
Oct  9 09:40:09 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[28592]: 09/10/2025 09:40:09 : epoch 68e782ed : compute-1 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :CRIT :DBUS not initialized, service thread exiting
Oct  9 09:40:09 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[28592]: 09/10/2025 09:40:09 : epoch 68e782ed : compute-1 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :EVENT :shutdown
Oct  9 09:40:09 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[28592]: 09/10/2025 09:40:09 : epoch 68e782ed : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :admin thread was started successfully
Oct  9 09:40:09 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[28592]: 09/10/2025 09:40:09 : epoch 68e782ed : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :reaper thread was started successfully
Oct  9 09:40:09 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[28592]: 09/10/2025 09:40:09 : epoch 68e782ed : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :General fridge was started successfully
Oct  9 09:40:09 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[28592]: 09/10/2025 09:40:09 : epoch 68e782ed : compute-1 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Oct  9 09:40:09 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[28592]: 09/10/2025 09:40:09 : epoch 68e782ed : compute-1 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :             NFS SERVER INITIALIZED
Oct  9 09:40:09 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[28592]: 09/10/2025 09:40:09 : epoch 68e782ed : compute-1 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Oct  9 09:40:09 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:40:09 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000011s ======
Oct  9 09:40:09 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:40:09.504 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct  9 09:40:09 compute-1 python3.9[30503]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=events_logger owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="journald" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Oct  9 09:40:09 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[28592]: 09/10/2025 09:40:09 : epoch 68e782ed : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6a64000df0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:40:10 compute-1 python3.9[30657]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=runtime owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="crun" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Oct  9 09:40:10 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[28592]: 09/10/2025 09:40:10 : epoch 68e782ed : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6a58001c00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:40:10 compute-1 python3.9[30809]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=network_backend owner=root path=/etc/containers/containers.conf section=network setype=etc_t value="netavark" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Oct  9 09:40:10 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:40:10 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:40:10 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:40:10 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:40:10.673 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:40:10 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[28592]: 09/10/2025 09:40:10 : epoch 68e782ed : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6a500034a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:40:11 compute-1 python3.9[30961]: ansible-ansible.legacy.dnf Invoked with name=['openssh-server'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct  9 09:40:11 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:40:11 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:40:11 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:40:11.508 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:40:11 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-haproxy-nfs-cephfs-compute-1-oqhtjo[16451]: [WARNING] 281/094011 (4) : Server backend/nfs.cephfs.0 is UP, reason: Layer4 check passed, check duration: 0ms. 1 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Oct  9 09:40:11 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[28592]: 09/10/2025 09:40:11 : epoch 68e782ed : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6a58002700 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:40:12 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[28592]: 09/10/2025 09:40:12 : epoch 68e782ed : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6a58002700 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:40:12 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:40:12 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000011s ======
Oct  9 09:40:12 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:40:12.674 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct  9 09:40:12 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[28592]: 09/10/2025 09:40:12 : epoch 68e782ed : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6a58002700 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:40:13 compute-1 python3.9[31115]: ansible-setup Invoked with gather_subset=['!all', '!min', 'distribution', 'distribution_major_version', 'distribution_version', 'os_family'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  9 09:40:13 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:40:13 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000011s ======
Oct  9 09:40:13 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:40:13.510 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct  9 09:40:13 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[28592]: 09/10/2025 09:40:13 : epoch 68e782ed : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6a58002700 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:40:13 compute-1 python3.9[31270]: ansible-stat Invoked with path=/run/ostree-booted follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  9 09:40:14 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[28592]: 09/10/2025 09:40:14 : epoch 68e782ed : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6a50003dc0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:40:14 compute-1 python3.9[31422]: ansible-stat Invoked with path=/sbin/transactional-update follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  9 09:40:14 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:40:14 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:40:14 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:40:14.677 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:40:14 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[28592]: 09/10/2025 09:40:14 : epoch 68e782ed : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6a58002700 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:40:15 compute-1 python3.9[31574]: ansible-service_facts Invoked
Oct  9 09:40:15 compute-1 network[31591]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Oct  9 09:40:15 compute-1 network[31592]: 'network-scripts' will be removed from distribution in near future.
Oct  9 09:40:15 compute-1 network[31593]: It is advised to switch to 'NetworkManager' instead for network management.
Oct  9 09:40:15 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:40:15 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:40:15 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:40:15.513 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:40:15 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:40:15 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[28592]: 09/10/2025 09:40:15 : epoch 68e782ed : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6a58002700 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:40:16 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[28592]: 09/10/2025 09:40:16 : epoch 68e782ed : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6a5c001c40 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:40:16 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:40:16 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:40:16 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:40:16.679 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:40:16 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[28592]: 09/10/2025 09:40:16 : epoch 68e782ed : compute-1 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 09:40:16 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[28592]: 09/10/2025 09:40:16 : epoch 68e782ed : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6a5c001c40 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:40:17 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:40:17 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:40:17 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:40:17.516 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:40:17 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[28592]: 09/10/2025 09:40:17 : epoch 68e782ed : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6a580046f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:40:18 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[28592]: 09/10/2025 09:40:18 : epoch 68e782ed : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6a580046f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:40:18 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:40:18 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:40:18 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:40:18.680 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:40:18 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[28592]: 09/10/2025 09:40:18 : epoch 68e782ed : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6a580046f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:40:19 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:40:19 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000011s ======
Oct  9 09:40:19 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:40:19.518 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct  9 09:40:19 compute-1 python3.9[32051]: ansible-ansible.legacy.dnf Invoked with name=['chrony'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct  9 09:40:19 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[28592]: 09/10/2025 09:40:19 : epoch 68e782ed : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6a5c002b30 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:40:19 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[28592]: 09/10/2025 09:40:19 : epoch 68e782ed : compute-1 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 09:40:19 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[28592]: 09/10/2025 09:40:19 : epoch 68e782ed : compute-1 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 09:40:19 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[28592]: 09/10/2025 09:40:19 : epoch 68e782ed : compute-1 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 09:40:20 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[28592]: 09/10/2025 09:40:20 : epoch 68e782ed : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6a580046f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:40:20 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:40:20 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:40:20 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:40:20 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:40:20.681 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:40:20 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[28592]: 09/10/2025 09:40:20 : epoch 68e782ed : compute-1 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 09:40:20 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[28592]: 09/10/2025 09:40:20 : epoch 68e782ed : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6a580046f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:40:21 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:40:21 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 09:40:21 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:40:21.520 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 09:40:21 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[28592]: 09/10/2025 09:40:21 : epoch 68e782ed : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6a580046f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:40:21 compute-1 python3.9[32205]: ansible-package_facts Invoked with manager=['auto'] strategy=first
Oct  9 09:40:22 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[28592]: 09/10/2025 09:40:22 : epoch 68e782ed : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6a5c002b30 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:40:22 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:40:22 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:40:22 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:40:22.685 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:40:22 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[28592]: 09/10/2025 09:40:22 : epoch 68e782ed : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6a580046f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:40:23 compute-1 python3.9[32357]: ansible-ansible.legacy.stat Invoked with path=/etc/chrony.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  9 09:40:23 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:40:23 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:40:23 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:40:23.524 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:40:23 compute-1 python3.9[32436]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/etc/chrony.conf _original_basename=chrony.conf.j2 recurse=False state=file path=/etc/chrony.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:40:23 compute-1 kernel: ganesha.nfsd[30402]: segfault at 50 ip 00007f6b0f7a932e sp 00007f6ad8ff8210 error 4 in libntirpc.so.5.8[7f6b0f78e000+2c000] likely on CPU 2 (core 0, socket 2)
Oct  9 09:40:23 compute-1 kernel: Code: 47 20 66 41 89 86 f2 00 00 00 41 bf 01 00 00 00 b9 40 00 00 00 e9 af fd ff ff 66 90 48 8b 85 f8 00 00 00 48 8b 40 08 4c 8b 28 <45> 8b 65 50 49 8b 75 68 41 8b be 28 02 00 00 b9 40 00 00 00 e8 29
Oct  9 09:40:23 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr[28592]: 09/10/2025 09:40:23 : epoch 68e782ed : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6a50007030 fd 38 proxy ignored for local
Oct  9 09:40:23 compute-1 systemd[1]: Started Process Core Dump (PID 32461/UID 0).
Oct  9 09:40:24 compute-1 python3.9[32590]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/chronyd follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  9 09:40:24 compute-1 python3.9[32668]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/etc/sysconfig/chronyd _original_basename=chronyd.sysconfig.j2 recurse=False state=file path=/etc/sysconfig/chronyd force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:40:24 compute-1 systemd-coredump[32462]: Process 28596 (ganesha.nfsd) of user 0 dumped core.#012#012Stack trace of thread 45:#012#0  0x00007f6b0f7a932e n/a (/usr/lib64/libntirpc.so.5.8 + 0x2232e)#012ELF object binary architecture: AMD x86-64
Oct  9 09:40:24 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:40:24 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:40:24 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:40:24.687 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:40:24 compute-1 systemd[1]: systemd-coredump@5-32461-0.service: Deactivated successfully.
Oct  9 09:40:24 compute-1 podman[32700]: 2025-10-09 09:40:24.744090341 +0000 UTC m=+0.017534801 container died a4769768d3029ab5da797173bc20fc04ecb1b13fe2b03f5aa021ce964d7fc399 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  9 09:40:24 compute-1 systemd[1]: var-lib-containers-storage-overlay-a5a7d345292b2823e940a0ef2c8f05233f069a14a6edb83820712d6607dd684d-merged.mount: Deactivated successfully.
Oct  9 09:40:24 compute-1 podman[32700]: 2025-10-09 09:40:24.761319844 +0000 UTC m=+0.034764295 container remove a4769768d3029ab5da797173bc20fc04ecb1b13fe2b03f5aa021ce964d7fc399 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-0-0-compute-1-douegr, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_REF=squid, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Oct  9 09:40:24 compute-1 systemd[1]: ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609@nfs.cephfs.0.0.compute-1.douegr.service: Main process exited, code=exited, status=139/n/a
Oct  9 09:40:24 compute-1 systemd[1]: ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609@nfs.cephfs.0.0.compute-1.douegr.service: Failed with result 'exit-code'.
Oct  9 09:40:25 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:40:25 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:40:25 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:40:25.526 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:40:25 compute-1 systemd-logind[798]: Session 3 logged out. Waiting for processes to exit.
Oct  9 09:40:25 compute-1 systemd[1]: session-3.scope: Deactivated successfully.
Oct  9 09:40:25 compute-1 systemd[1]: session-3.scope: Consumed 6.293s CPU time.
Oct  9 09:40:25 compute-1 systemd-logind[798]: Removed session 3.
Oct  9 09:40:25 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:40:26 compute-1 python3.9[32861]: ansible-lineinfile Invoked with backup=True create=True dest=/etc/sysconfig/network line=PEERNTP=no mode=0644 regexp=^PEERNTP= state=present path=/etc/sysconfig/network backrefs=False firstmatch=False unsafe_writes=False search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:40:26 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:40:26 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:40:26 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:40:26.689 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:40:27 compute-1 python3.9[33013]: ansible-ansible.legacy.setup Invoked with gather_subset=['!all'] filter=['ansible_service_mgr'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct  9 09:40:27 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:40:27 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:40:27 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:40:27.528 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:40:28 compute-1 python3.9[33098]: ansible-ansible.legacy.systemd Invoked with enabled=True name=chronyd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  9 09:40:28 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:40:28 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:40:28 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:40:28.691 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:40:29 compute-1 systemd[1]: session-25.scope: Deactivated successfully.
Oct  9 09:40:29 compute-1 systemd[1]: session-25.scope: Consumed 16.852s CPU time.
Oct  9 09:40:29 compute-1 systemd-logind[798]: Session 25 logged out. Waiting for processes to exit.
Oct  9 09:40:29 compute-1 systemd-logind[798]: Removed session 25.
Oct  9 09:40:29 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-haproxy-nfs-cephfs-compute-1-oqhtjo[16451]: [WARNING] 281/094029 (4) : Server backend/nfs.cephfs.2 is UP, reason: Layer4 check passed, check duration: 0ms. 2 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Oct  9 09:40:29 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:40:29 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:40:29 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:40:29.531 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:40:29 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-haproxy-nfs-cephfs-compute-1-oqhtjo[16451]: [WARNING] 281/094029 (4) : Server backend/nfs.cephfs.0 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 1 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Oct  9 09:40:30 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-haproxy-nfs-cephfs-compute-1-oqhtjo[16451]: [WARNING] 281/094030 (4) : Server backend/nfs.cephfs.1 is UP, reason: Layer4 check passed, check duration: 0ms. 2 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Oct  9 09:40:30 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:40:30 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:40:30 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:40:30 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:40:30.693 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:40:31 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:40:31 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:40:31 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:40:31.533 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:40:32 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:40:32 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:40:32 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:40:32.696 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:40:33 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:40:33 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:40:33 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:40:33.535 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:40:34 compute-1 systemd-logind[798]: New session 26 of user zuul.
Oct  9 09:40:34 compute-1 systemd[1]: Started Session 26 of User zuul.
Oct  9 09:40:34 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:40:34 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 09:40:34 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:40:34.698 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 09:40:34 compute-1 systemd[1]: ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609@nfs.cephfs.0.0.compute-1.douegr.service: Scheduled restart job, restart counter is at 6.
Oct  9 09:40:34 compute-1 systemd[1]: Stopped Ceph nfs.cephfs.0.0.compute-1.douegr for 286f8bf0-da72-5823-9a4e-ac4457d9e609.
Oct  9 09:40:34 compute-1 systemd[1]: ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609@nfs.cephfs.0.0.compute-1.douegr.service: Start request repeated too quickly.
Oct  9 09:40:34 compute-1 systemd[1]: ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609@nfs.cephfs.0.0.compute-1.douegr.service: Failed with result 'exit-code'.
Oct  9 09:40:34 compute-1 systemd[1]: Failed to start Ceph nfs.cephfs.0.0.compute-1.douegr for 286f8bf0-da72-5823-9a4e-ac4457d9e609.
Oct  9 09:40:35 compute-1 python3.9[33308]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:40:35 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:40:35 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000011s ======
Oct  9 09:40:35 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:40:35.537 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct  9 09:40:35 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:40:35 compute-1 python3.9[33461]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/ceph-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  9 09:40:36 compute-1 python3.9[33539]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/ceph-networks.yaml _original_basename=firewall.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/ceph-networks.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:40:36 compute-1 systemd[1]: session-26.scope: Deactivated successfully.
Oct  9 09:40:36 compute-1 systemd[1]: session-26.scope: Consumed 1.096s CPU time.
Oct  9 09:40:36 compute-1 systemd-logind[798]: Session 26 logged out. Waiting for processes to exit.
Oct  9 09:40:36 compute-1 systemd-logind[798]: Removed session 26.
Oct  9 09:40:36 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:40:36 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000011s ======
Oct  9 09:40:36 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:40:36.700 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct  9 09:40:37 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:40:37 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:40:37 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:40:37.540 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:40:38 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:40:38 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:40:38 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:40:38.703 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:40:39 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:40:39 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 09:40:39 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:40:39.543 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 09:40:40 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:40:40 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:40:40 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000011s ======
Oct  9 09:40:40 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:40:40.704 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct  9 09:40:41 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:40:41 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:40:41 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:40:41.547 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:40:41 compute-1 systemd-logind[798]: New session 27 of user zuul.
Oct  9 09:40:41 compute-1 systemd[1]: Started Session 27 of User zuul.
Oct  9 09:40:42 compute-1 python3.9[33720]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  9 09:40:42 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:40:42 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:40:42 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:40:42.707 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:40:43 compute-1 python3.9[33876]: ansible-ansible.builtin.file Invoked with group=zuul mode=0770 owner=zuul path=/root/.config/containers recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:40:43 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:40:43 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:40:43 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:40:43.549 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:40:44 compute-1 python3.9[34052]: ansible-ansible.legacy.stat Invoked with path=/root/.config/containers/auth.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  9 09:40:44 compute-1 python3.9[34130]: ansible-ansible.legacy.file Invoked with group=zuul mode=0660 owner=zuul dest=/root/.config/containers/auth.json _original_basename=.cpgnonn1 recurse=False state=file path=/root/.config/containers/auth.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:40:44 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:40:44 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000011s ======
Oct  9 09:40:44 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:40:44.709 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct  9 09:40:45 compute-1 python3.9[34282]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  9 09:40:45 compute-1 python3.9[34361]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/etc/sysconfig/podman_drop_in _original_basename=.oimnt4t2 recurse=False state=file path=/etc/sysconfig/podman_drop_in force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:40:45 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:40:45 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:40:45 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:40:45.551 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:40:45 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:40:46 compute-1 python3.9[34513]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct  9 09:40:46 compute-1 python3.9[34665]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  9 09:40:46 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:40:46 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:40:46 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:40:46.712 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:40:47 compute-1 python3.9[34743]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  9 09:40:47 compute-1 python3.9[34895]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  9 09:40:47 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:40:47 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:40:47 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:40:47.553 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:40:47 compute-1 python3.9[34974]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  9 09:40:48 compute-1 python3.9[35126]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:40:48 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:40:48 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:40:48 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:40:48.714 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:40:48 compute-1 python3.9[35303]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  9 09:40:49 compute-1 python3.9[35381]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:40:49 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:40:49 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:40:49 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:40:49.556 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:40:49 compute-1 python3.9[35534]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  9 09:40:50 compute-1 python3.9[35612]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:40:50 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:40:50 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:40:50 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:40:50 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:40:50.716 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:40:51 compute-1 python3.9[35764]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  9 09:40:51 compute-1 systemd[1]: Reloading.
Oct  9 09:40:51 compute-1 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  9 09:40:51 compute-1 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  9 09:40:51 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:40:51 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:40:51 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:40:51.558 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:40:51 compute-1 python3.9[35955]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  9 09:40:52 compute-1 python3.9[36033]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:40:52 compute-1 python3.9[36185]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  9 09:40:52 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:40:52 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:40:52 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:40:52.717 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:40:52 compute-1 python3.9[36263]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:40:53 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:40:53 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:40:53 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:40:53.560 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:40:53 compute-1 python3.9[36415]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  9 09:40:53 compute-1 systemd[1]: Reloading.
Oct  9 09:40:53 compute-1 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  9 09:40:53 compute-1 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  9 09:40:53 compute-1 systemd[1]: Starting Create netns directory...
Oct  9 09:40:53 compute-1 systemd[11486]: Created slice User Background Tasks Slice.
Oct  9 09:40:53 compute-1 systemd[11486]: Starting Cleanup of User's Temporary Files and Directories...
Oct  9 09:40:53 compute-1 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Oct  9 09:40:53 compute-1 systemd[1]: netns-placeholder.service: Deactivated successfully.
Oct  9 09:40:53 compute-1 systemd[1]: Finished Create netns directory.
Oct  9 09:40:53 compute-1 systemd[11486]: Finished Cleanup of User's Temporary Files and Directories.
Oct  9 09:40:54 compute-1 python3.9[36608]: ansible-ansible.builtin.service_facts Invoked
Oct  9 09:40:54 compute-1 network[36625]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Oct  9 09:40:54 compute-1 network[36626]: 'network-scripts' will be removed from distribution in near future.
Oct  9 09:40:54 compute-1 network[36627]: It is advised to switch to 'NetworkManager' instead for network management.
Oct  9 09:40:54 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:40:54 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:40:54 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:40:54.719 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:40:55 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:40:55 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000009s ======
Oct  9 09:40:55 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:40:55.562 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Oct  9 09:40:55 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:40:56 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:40:56 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:40:56 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:40:56.722 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:40:57 compute-1 python3.9[36973]: ansible-ansible.legacy.stat Invoked with path=/etc/ssh/sshd_config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  9 09:40:57 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:40:57 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:40:57 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:40:57.564 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:40:57 compute-1 python3.9[37051]: ansible-ansible.legacy.file Invoked with mode=0600 dest=/etc/ssh/sshd_config _original_basename=sshd_config_block.j2 recurse=False state=file path=/etc/ssh/sshd_config force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:40:58 compute-1 python3.9[37203]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:40:58 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:40:58 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:40:58 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:40:58.724 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:40:59 compute-1 python3.9[37355]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/sshd-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  9 09:40:59 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:40:59 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:40:59 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  9 09:40:59 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:40:59 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:40:59 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  9 09:40:59 compute-1 python3.9[37433]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/var/lib/edpm-config/firewall/sshd-networks.yaml _original_basename=firewall.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/sshd-networks.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:40:59 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:40:59 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 09:40:59 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:40:59.566 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 09:41:00 compute-1 python3.9[37586]: ansible-community.general.timezone Invoked with name=UTC hwclock=None
Oct  9 09:41:00 compute-1 systemd[1]: Starting Time & Date Service...
Oct  9 09:41:00 compute-1 systemd[1]: Started Time & Date Service.
Oct  9 09:41:00 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:41:00 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:41:00 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:41:00 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:41:00.726 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:41:01 compute-1 python3.9[37742]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:41:01 compute-1 python3.9[37895]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  9 09:41:01 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:41:01 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 09:41:01 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:41:01.569 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 09:41:01 compute-1 python3.9[37973]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:41:02 compute-1 python3.9[38150]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  9 09:41:02 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:41:02 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:41:02 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:41:02.728 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:41:02 compute-1 python3.9[38228]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.w9_u1sar recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:41:02 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:41:02 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:41:03 compute-1 python3.9[38380]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  9 09:41:03 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:41:03 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 09:41:03 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:41:03.571 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 09:41:03 compute-1 python3.9[38459]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:41:04 compute-1 python3.9[38611]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  9 09:41:04 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:41:04 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:41:04 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:41:04.730 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:41:04 compute-1 python3[38764]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Oct  9 09:41:05 compute-1 python3.9[38917]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  9 09:41:05 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:41:05 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:41:05 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:41:05.574 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:41:05 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:41:05 compute-1 python3.9[38995]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:41:06 compute-1 python3.9[39147]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  9 09:41:06 compute-1 python3.9[39225]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-update-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-update-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:41:06 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:41:06 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:41:06 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:41:06.732 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:41:07 compute-1 python3.9[39377]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  9 09:41:07 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:41:07 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:41:07 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:41:07.576 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:41:07 compute-1 python3.9[39456]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-flushes.nft _original_basename=flush-chain.j2 recurse=False state=file path=/etc/nftables/edpm-flushes.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:41:08 compute-1 python3.9[39608]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  9 09:41:08 compute-1 python3.9[39686]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-chains.nft _original_basename=chains.j2 recurse=False state=file path=/etc/nftables/edpm-chains.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:41:08 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:41:08 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:41:08 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:41:08.734 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:41:09 compute-1 python3.9[39863]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  9 09:41:09 compute-1 python3.9[39941]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-rules.nft _original_basename=ruleset.j2 recurse=False state=file path=/etc/nftables/edpm-rules.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:41:09 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:41:09 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000009s ======
Oct  9 09:41:09 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:41:09.579 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Oct  9 09:41:10 compute-1 python3.9[40094]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  9 09:41:10 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:41:10 compute-1 python3.9[40249]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"#012include "/etc/nftables/edpm-chains.nft"#012include "/etc/nftables/edpm-rules.nft"#012include "/etc/nftables/edpm-jumps.nft"#012 path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:41:10 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:41:10 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:41:10 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:41:10.736 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:41:11 compute-1 python3.9[40401]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages1G state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:41:11 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:41:11 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:41:11 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:41:11.582 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:41:11 compute-1 python3.9[40554]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages2M state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:41:12 compute-1 python3.9[40706]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=1G path=/dev/hugepages1G src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Oct  9 09:41:12 compute-1 python3.9[40858]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=2M path=/dev/hugepages2M src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Oct  9 09:41:12 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:41:12 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000009s ======
Oct  9 09:41:12 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:41:12.738 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Oct  9 09:41:13 compute-1 systemd-logind[798]: Session 27 logged out. Waiting for processes to exit.
Oct  9 09:41:13 compute-1 systemd[1]: session-27.scope: Deactivated successfully.
Oct  9 09:41:13 compute-1 systemd[1]: session-27.scope: Consumed 20.614s CPU time.
Oct  9 09:41:13 compute-1 systemd-logind[798]: Removed session 27.
Oct  9 09:41:13 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:41:13 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 09:41:13 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:41:13.584 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 09:41:14 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:41:14 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:41:14 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:41:14.740 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:41:15 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:41:15 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000009s ======
Oct  9 09:41:15 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:41:15.587 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Oct  9 09:41:15 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:41:16 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:41:16 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:41:16 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:41:16.742 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:41:17 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:41:17 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:41:17 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:41:17.590 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:41:17 compute-1 systemd-logind[798]: New session 28 of user zuul.
Oct  9 09:41:17 compute-1 systemd[1]: Started Session 28 of User zuul.
Oct  9 09:41:18 compute-1 python3.9[41041]: ansible-ansible.builtin.tempfile Invoked with state=file prefix=ansible. suffix= path=None
Oct  9 09:41:18 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:41:18 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:41:18 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:41:18.744 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:41:19 compute-1 python3.9[41193]: ansible-ansible.builtin.stat Invoked with path=/etc/ssh/ssh_known_hosts follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  9 09:41:19 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:41:19 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 09:41:19 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:41:19.592 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 09:41:19 compute-1 python3.9[41348]: ansible-ansible.builtin.slurp Invoked with src=/etc/ssh/ssh_known_hosts
Oct  9 09:41:20 compute-1 python3.9[41500]: ansible-ansible.legacy.stat Invoked with path=/tmp/ansible.x64l6g1a follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  9 09:41:20 compute-1 python3.9[41625]: ansible-ansible.legacy.copy Invoked with dest=/tmp/ansible.x64l6g1a mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1760002879.758989-103-7766229410158/.source.x64l6g1a _original_basename=.b_7yq3z5 follow=False checksum=231ee42d81be70362d898b48675a8dc8dc6887b0 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:41:20 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:41:20 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:41:20 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:41:20 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:41:20.747 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:41:21 compute-1 python3.9[41777]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'ssh_host_key_rsa_public', 'ssh_host_key_ed25519_public', 'ssh_host_key_ecdsa_public'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  9 09:41:21 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:41:21 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:41:21 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:41:21.595 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:41:22 compute-1 python3.9[41930]: ansible-ansible.builtin.blockinfile Invoked with block=compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDEdAe+aHzafP9dhAtdIAtOm2sC12803SCpA/3rl1ydGqAiReivZh0j/TO2wBzoqsan7nzM7eG4TWSpqK+0ZBgBjrUjB9Cj1eCLSLOLFpIUpLcs70zpiXFEg4VCxifit+r7hVmAjbLpb7lUOEBeuKAC+NijlzOD2XrC+yd3AhBkIuX/kEOqNS457QburXRcER973lXO7bXpB0owCrgGAzOsy1i7FT6Zz4mSB7l2Iy2drh0BXBPs+laJ9chzaIYm3t6/xdGegDzZd9R0R/aKxaO2CGff8by/bJ8Ga/DZNziOBiuIImaU3kBJc76SWraZeoiOMwDTosKuZfFadJWywRHIP1xUSkKdLGnB0MzpGtOhcIWX642g/WIM4+Y078U5nwtvOcNHpA/uT9uRc7nBCEzPpJVHtyVbh0kQ9x86pCj83Ph6ZZ1RPGolhJ6oztdGyl5QMj/rkG45+H83p9c18d5vzsZzrcKaYtBEg3BJ80PfCqFw5Al9hHq/55Yd0D5PiK8=#012compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIN+sxaZ1V99vc+E5ar8KEv4Hqy68kJM/buHn1/XxovLr#012compute-0.ctlplane.example.com,192.168.122.100,compute-0* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBDc5CVbyus+PfQGnwFQkfkACIJgIJPRc/fJ1ooz9D/2T/S79sUKftWyZ1JOurJ8lQdLc+LgRGezTzhfuY3R3F6E=#012compute-1.ctlplane.example.com,192.168.122.101,compute-1* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCow+01n6Hl7e4y/xRpTIYbwm1BUam3jmz5ScpeEvosFn7TfszdHV/Do5gTioKon9F6x7Kn2fhkWobIt7rTveNaK0lE2p35tJDQJQ5zYJD3N4aWHdvfaigYEXYaH3OOpmqEhRw/IyxGzW1MS8OfGUNyziUYt99LLYhcEkDneuZnPOI2444OzzU0pYxCtaVSevz9aDR2yi9BWKNIP8iMTNqu9UpE9IaOANEDrZu7gbGMBTDiR1lYzo1peJrtAa/cpTF9DoFnddTbpOMLjd6HaRrnifcc9fP1YtxWn8T1ldTjecUUCp2yo6ycdOUdBiJG9yWw1gI7SXYjeHJbX/1QS6HWd5DWxJFbSf0zP5d5BWyDf5+TFu1/gImUA0HT8WOYb4tm1QH1NAThcRLvtUFg32CcbqOnUyAxW0wDeGoLCW7EERN9OKr11fwlYjdyW/TbqYWRn0J2WhZa4OoZ/C4m9ug6PP7SEo9wXLqN9t4eArVkbeTemzPigVRqNrD2eywEU4k=#012compute-1.ctlplane.example.com,192.168.122.101,compute-1* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIFCkglmiqZQwqqMItgWA6O04td1K/U4vAgm36NE9rj3U#012compute-1.ctlplane.example.com,192.168.122.101,compute-1* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBLD7v/1C4ThvDcQi8c4DTsjkszkaGHBX0ZNWy5MwKVH3Qt7bVSlXkD8SB3/nhOUlBIzdAK/JQpzVyqfy+61YZMk=#012compute-2.ctlplane.example.com,192.168.122.102,compute-2* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDKE7qnQSdbsdsOaGWRokEAHfuZHqF4BkfkIlbsIxi6+FzXfmziMPrsg1PoVUBFOzaP55y6aRtUEaXoCsB+KxPGXhHnh3IdEYTUa5EvJs6/mUlEqIwltt8CLNKUrDV6N38V1v5gaRPIAI5iTwtbap14q+0iDF8MVi8MPKlkqoL/+Z49sJ4HqR31EZpD4cWKso/dkKZQSuVQg+TgJ3bnUKIRYPDS7fjVuZpr0KMyU+v4wjBKXvles8lctvRXdfpY2/33XtBG2af+p/+5mg47b5ylWC3wISLO590WzC4X2T0Pv1a6I9O/Dt3V8xyTfzbqi4ia9/kwNBJg1GGqNBssdedHK3AZDOTSd9U+/C1R9oBDXZ7nSo3hIzMQvrm5DXkthix56gd3x9MrMMzc+wTlFtlm2XwpMg7PtdxMZK++rIfPVxzKXBBQsdDd0W3cbam616N/XERaDJKIUqnPe5sE1qhpaFt8aNtwg+buZpYK5ubLbuJZpASgSC6dIuDsEIk6Af8=#012compute-2.ctlplane.example.com,192.168.122.102,compute-2* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIEtxusJG2g5S2RnWLxtcDjdiTuv+VWibld9MVjIgPUzn#012compute-2.ctlplane.example.com,192.168.122.102,compute-2* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBG1pQwHgci56FauRELJKl6O8ntBVH1APLVaVNPCodlG/V+A+h79tYrSqi3QKycc18niRc7Eiq8wWQ8VbX+OhkmY=#012 create=True mode=0644 path=/tmp/ansible.x64l6g1a state=present marker=# {mark} ANSIBLE MANAGED BLOCK backup=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False unsafe_writes=False insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:41:22 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:41:22 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:41:22 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:41:22.749 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:41:22 compute-1 python3.9[42082]: ansible-ansible.legacy.command Invoked with _raw_params=cat '/tmp/ansible.x64l6g1a' > /etc/ssh/ssh_known_hosts _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  9 09:41:23 compute-1 python3.9[42236]: ansible-ansible.builtin.file Invoked with path=/tmp/ansible.x64l6g1a state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:41:23 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:41:23 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:41:23 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:41:23.597 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:41:23 compute-1 systemd[1]: session-28.scope: Deactivated successfully.
Oct  9 09:41:23 compute-1 systemd[1]: session-28.scope: Consumed 3.502s CPU time.
Oct  9 09:41:23 compute-1 systemd-logind[798]: Session 28 logged out. Waiting for processes to exit.
Oct  9 09:41:23 compute-1 systemd-logind[798]: Removed session 28.
Oct  9 09:41:24 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:41:24 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:41:24 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:41:24.752 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:41:25 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:41:25 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000009s ======
Oct  9 09:41:25 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:41:25.599 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Oct  9 09:41:25 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:41:26 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:41:26 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:41:26 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:41:26.754 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:41:27 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:41:27 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:41:27 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:41:27.602 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:41:28 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:41:28 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000009s ======
Oct  9 09:41:28 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:41:28.756 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Oct  9 09:41:29 compute-1 systemd-logind[798]: New session 29 of user zuul.
Oct  9 09:41:29 compute-1 systemd[1]: Started Session 29 of User zuul.
Oct  9 09:41:29 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:41:29 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:41:29 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:41:29.605 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:41:29 compute-1 python3.9[42443]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  9 09:41:30 compute-1 systemd[1]: systemd-timedated.service: Deactivated successfully.
Oct  9 09:41:30 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:41:30 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:41:30 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000009s ======
Oct  9 09:41:30 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:41:30.757 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Oct  9 09:41:30 compute-1 python3.9[42601]: ansible-ansible.builtin.systemd Invoked with enabled=True name=sshd daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None masked=None
Oct  9 09:41:31 compute-1 python3.9[42755]: ansible-ansible.builtin.systemd Invoked with name=sshd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct  9 09:41:31 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:41:31 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:41:31 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:41:31.607 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:41:32 compute-1 python3.9[42909]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  9 09:41:32 compute-1 python3.9[43062]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  9 09:41:32 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:41:32 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:41:32 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:41:32.759 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:41:33 compute-1 python3.9[43214]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:41:33 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:41:33 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:41:33 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:41:33.610 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:41:33 compute-1 systemd[1]: session-29.scope: Deactivated successfully.
Oct  9 09:41:33 compute-1 systemd[1]: session-29.scope: Consumed 2.716s CPU time.
Oct  9 09:41:33 compute-1 systemd-logind[798]: Session 29 logged out. Waiting for processes to exit.
Oct  9 09:41:33 compute-1 systemd-logind[798]: Removed session 29.
Oct  9 09:41:34 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:41:34 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:41:34 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:41:34.762 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:41:35 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:41:35 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:41:35 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:41:35.613 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:41:35 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:41:36 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-haproxy-nfs-cephfs-compute-1-oqhtjo[16451]: [WARNING] 281/094136 (4) : Server backend/nfs.cephfs.1 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 1 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Oct  9 09:41:36 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:41:36 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:41:36 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:41:36.765 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:41:37 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:41:37 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:41:37 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:41:37.616 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:41:38 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:41:38 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:41:38 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:41:38.767 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:41:38 compute-1 systemd-logind[798]: New session 30 of user zuul.
Oct  9 09:41:38 compute-1 systemd[1]: Started Session 30 of User zuul.
Oct  9 09:41:39 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:41:39 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:41:39 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:41:39.618 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:41:39 compute-1 python3.9[43396]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  9 09:41:40 compute-1 python3.9[43552]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct  9 09:41:40 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:41:40 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:41:40 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:41:40 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:41:40.769 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:41:41 compute-1 python3.9[43636]: ansible-ansible.legacy.dnf Invoked with name=['yum-utils'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Oct  9 09:41:41 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:41:41 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:41:41 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:41:41.621 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:41:42 compute-1 python3.9[43788]: ansible-ansible.legacy.command Invoked with _raw_params=needs-restarting -r _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  9 09:41:42 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:41:42 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 09:41:42 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:41:42.771 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 09:41:43 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:41:43 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:41:43 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:41:43.624 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:41:44 compute-1 python3.9[43940]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/reboot_required/'] patterns=[] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Oct  9 09:41:44 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:41:44 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 09:41:44 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:41:44.773 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 09:41:45 compute-1 python3.9[44090]: ansible-ansible.builtin.stat Invoked with path=/var/lib/config-data/puppet-generated follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  9 09:41:45 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:41:45 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000009s ======
Oct  9 09:41:45 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:41:45.626 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Oct  9 09:41:45 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:41:45 compute-1 python3.9[44241]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/config follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  9 09:41:46 compute-1 systemd[1]: session-30.scope: Deactivated successfully.
Oct  9 09:41:46 compute-1 systemd[1]: session-30.scope: Consumed 4.180s CPU time.
Oct  9 09:41:46 compute-1 systemd-logind[798]: Session 30 logged out. Waiting for processes to exit.
Oct  9 09:41:46 compute-1 systemd-logind[798]: Removed session 30.
Oct  9 09:41:46 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:41:46 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:41:46 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:41:46.776 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:41:47 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:41:47 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:41:47 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:41:47.630 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:41:48 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:41:48 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:41:48 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:41:48.779 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:41:49 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:41:49 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:41:49 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:41:49.633 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:41:50 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:41:50 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:41:50 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:41:50 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:41:50.781 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:41:51 compute-1 systemd-logind[798]: New session 31 of user zuul.
Oct  9 09:41:51 compute-1 systemd[1]: Started Session 31 of User zuul.
Oct  9 09:41:51 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:41:51 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:41:51 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:41:51.634 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:41:51 compute-1 python3.9[44447]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  9 09:41:52 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:41:52 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:41:52 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:41:52.783 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:41:53 compute-1 python3.9[44603]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/libvirt/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  9 09:41:53 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:41:53 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:41:53 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:41:53.636 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:41:53 compute-1 python3.9[44756]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/libvirt/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  9 09:41:54 compute-1 python3.9[44908]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/libvirt/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  9 09:41:54 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:41:54 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:41:54 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:41:54.785 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:41:54 compute-1 python3.9[45031]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/libvirt/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760002913.8946865-155-78738292739409/.source.crt _original_basename=compute-1.ctlplane.example.com-tls.crt follow=False checksum=31b5b94d01ae58766b61e67f4ae5ae5ba2535471 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:41:55 compute-1 python3.9[45183]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/libvirt/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  9 09:41:55 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:41:55 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:41:55 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:41:55 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:41:55.638 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:41:55 compute-1 python3.9[45307]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/libvirt/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760002915.025151-155-150111778946896/.source.crt _original_basename=compute-1.ctlplane.example.com-ca.crt follow=False checksum=7fbde074fa214bc5bd2f230fec0e2b862212f741 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:41:56 compute-1 python3.9[45459]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/libvirt/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  9 09:41:56 compute-1 python3.9[45582]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/libvirt/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760002915.8521652-155-52260362495654/.source.key _original_basename=compute-1.ctlplane.example.com-tls.key follow=False checksum=fb704c0fb95908366e9ed9140b8909cf655bf6db backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:41:56 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:41:56 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000011s ======
Oct  9 09:41:56 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:41:56.786 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct  9 09:41:57 compute-1 python3.9[45734]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/neutron-metadata/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  9 09:41:57 compute-1 python3.9[45887]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/neutron-metadata/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  9 09:41:57 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:41:57 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:41:57 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:41:57.641 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:41:57 compute-1 python3.9[46039]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/neutron-metadata/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  9 09:41:58 compute-1 python3.9[46162]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/neutron-metadata/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760002917.6733348-334-163427769271089/.source.crt _original_basename=compute-1.ctlplane.example.com-tls.crt follow=False checksum=f365cde10b4ba3f96d84c57378143e4d603806bd backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:41:58 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:41:58 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:41:58 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:41:58.788 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:41:58 compute-1 python3.9[46314]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/neutron-metadata/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  9 09:41:59 compute-1 python3.9[46437]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/neutron-metadata/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760002918.5032628-334-120523356016562/.source.crt _original_basename=compute-1.ctlplane.example.com-ca.crt follow=False checksum=40a9a855a5eba48419e934a92216fa818ce139fd backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:41:59 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:41:59 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:41:59 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:41:59.643 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:41:59 compute-1 python3.9[46590]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/neutron-metadata/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  9 09:42:00 compute-1 python3.9[46713]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/neutron-metadata/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760002919.3126118-334-176926975617038/.source.key _original_basename=compute-1.ctlplane.example.com-tls.key follow=False checksum=12c7ec31274cfe83e058e95358eb8d7740905632 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:42:00 compute-1 python3.9[46865]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/ovn/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  9 09:42:00 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:42:00 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:42:00 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:42:00 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:42:00.790 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:42:01 compute-1 python3.9[47017]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/ovn/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  9 09:42:01 compute-1 python3.9[47169]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/ovn/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  9 09:42:01 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:42:01 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:42:01 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:42:01.645 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:42:01 compute-1 python3.9[47293]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/ovn/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760002921.143547-511-94901261412329/.source.crt _original_basename=compute-1.ctlplane.example.com-tls.crt follow=False checksum=1158a4e160417c2d76a4d5879579d5453669b3a7 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:42:02 compute-1 python3.9[47495]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/ovn/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  9 09:42:02 compute-1 python3.9[47647]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/ovn/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760002921.9512398-511-23394880291312/.source.crt _original_basename=compute-1.ctlplane.example.com-ca.crt follow=False checksum=40a9a855a5eba48419e934a92216fa818ce139fd backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:42:02 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:42:02 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:42:02 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:42:02.792 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:42:03 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  9 09:42:03 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:42:03 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:42:03 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  9 09:42:03 compute-1 python3.9[47799]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/ovn/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  9 09:42:03 compute-1 python3.9[47923]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/ovn/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760002922.7967257-511-156324773257120/.source.key _original_basename=compute-1.ctlplane.example.com-tls.key follow=False checksum=2f6cee9bc263aba4c5c7fdb1bdebce05af2d6b8b backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:42:03 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:42:03 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000011s ======
Oct  9 09:42:03 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:42:03.648 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct  9 09:42:04 compute-1 python3.9[48075]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/ovn setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  9 09:42:04 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:42:04 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 09:42:04 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:42:04.794 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 09:42:04 compute-1 python3.9[48227]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  9 09:42:05 compute-1 python3.9[48350]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760002924.5537596-710-247727102063491/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=18663dce7579212939db4e772c3b048f7d3aa6f0 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:42:05 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:42:05 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:42:05 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:42:05 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:42:05.651 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:42:05 compute-1 python3.9[48503]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/libvirt setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  9 09:42:06 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:42:06 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:42:06 compute-1 python3.9[48680]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/libvirt/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  9 09:42:06 compute-1 python3.9[48803]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/libvirt/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760002925.9103842-779-137273690439715/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=18663dce7579212939db4e772c3b048f7d3aa6f0 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:42:06 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:42:06 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:42:06 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:42:06.796 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:42:07 compute-1 python3.9[48955]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/neutron-metadata setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  9 09:42:07 compute-1 python3.9[49108]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  9 09:42:07 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:42:07 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:42:07 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:42:07.653 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:42:07 compute-1 python3.9[49231]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760002927.2468543-847-38712297183620/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=18663dce7579212939db4e772c3b048f7d3aa6f0 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:42:08 compute-1 python3.9[49383]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/bootstrap setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  9 09:42:08 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:42:08 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 09:42:08 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:42:08.798 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 09:42:08 compute-1 python3.9[49535]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/bootstrap/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  9 09:42:09 compute-1 python3.9[49683]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/bootstrap/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760002928.6040115-918-84245676733233/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=18663dce7579212939db4e772c3b048f7d3aa6f0 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:42:09 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:42:09 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 09:42:09 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:42:09.654 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 09:42:09 compute-1 python3.9[49836]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/repo-setup setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  9 09:42:10 compute-1 python3.9[49988]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/repo-setup/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  9 09:42:10 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:42:10 compute-1 python3.9[50111]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/repo-setup/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760002929.9691763-990-172796485269759/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=18663dce7579212939db4e772c3b048f7d3aa6f0 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:42:10 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:42:10 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.003000032s ======
Oct  9 09:42:10 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:42:10.802 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.003000032s
Oct  9 09:42:11 compute-1 python3.9[50263]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  9 09:42:11 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:42:11 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000011s ======
Oct  9 09:42:11 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:42:11.656 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct  9 09:42:11 compute-1 python3.9[50416]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  9 09:42:12 compute-1 python3.9[50539]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760002931.363108-1061-200004042767018/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=18663dce7579212939db4e772c3b048f7d3aa6f0 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:42:12 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:42:12 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000011s ======
Oct  9 09:42:12 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:42:12.807 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct  9 09:42:13 compute-1 systemd[1]: session-31.scope: Deactivated successfully.
Oct  9 09:42:13 compute-1 systemd[1]: session-31.scope: Consumed 15.979s CPU time.
Oct  9 09:42:13 compute-1 systemd-logind[798]: Session 31 logged out. Waiting for processes to exit.
Oct  9 09:42:13 compute-1 systemd-logind[798]: Removed session 31.
Oct  9 09:42:13 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:42:13 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 09:42:13 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:42:13.659 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 09:42:14 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:42:14 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:42:14 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:42:14.810 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:42:15 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:42:15 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:42:15 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000011s ======
Oct  9 09:42:15 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:42:15.662 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct  9 09:42:16 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:42:16 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:42:16 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:42:16.813 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:42:17 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:42:17 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:42:17 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:42:17.665 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:42:18 compute-1 systemd-logind[798]: New session 32 of user zuul.
Oct  9 09:42:18 compute-1 systemd[1]: Started Session 32 of User zuul.
Oct  9 09:42:18 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:42:18 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 09:42:18 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:42:18.815 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 09:42:19 compute-1 python3.9[50722]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/openstack/config/ceph state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:42:19 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:42:19 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:42:19 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:42:19.668 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:42:19 compute-1 python3.9[50875]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/ceph/ceph.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  9 09:42:20 compute-1 python3.9[50998]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/ceph/ceph.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1760002939.2689123-63-46834724787835/.source.conf _original_basename=ceph.conf follow=False checksum=8b7272e0630e6cb598e773121c6b56dda1c87bf8 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:42:20 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:42:20 compute-1 python3.9[51150]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/ceph/ceph.client.openstack.keyring follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  9 09:42:20 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:42:20 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000011s ======
Oct  9 09:42:20 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:42:20.818 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct  9 09:42:21 compute-1 python3.9[51273]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/ceph/ceph.client.openstack.keyring mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1760002940.3621464-63-97115789464381/.source.keyring _original_basename=ceph.client.openstack.keyring follow=False checksum=f2b8c5d3158b549e18e5631f97d7800b8ceae49e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:42:21 compute-1 systemd-logind[798]: Session 32 logged out. Waiting for processes to exit.
Oct  9 09:42:21 compute-1 systemd[1]: session-32.scope: Deactivated successfully.
Oct  9 09:42:21 compute-1 systemd[1]: session-32.scope: Consumed 1.863s CPU time.
Oct  9 09:42:21 compute-1 systemd-logind[798]: Removed session 32.
Oct  9 09:42:21 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:42:21 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:42:21 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:42:21.670 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:42:22 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:42:22 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:42:22 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:42:22.821 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:42:23 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:42:23 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:42:23 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:42:23.673 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:42:24 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:42:24 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:42:24 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:42:24.824 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:42:25 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:42:25 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:42:25 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:42:25 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:42:25.676 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:42:26 compute-1 systemd-logind[798]: New session 33 of user zuul.
Oct  9 09:42:26 compute-1 systemd[1]: Started Session 33 of User zuul.
Oct  9 09:42:26 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:42:26 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:42:26 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:42:26.827 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:42:27 compute-1 python3.9[51454]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  9 09:42:27 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:42:27 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000011s ======
Oct  9 09:42:27 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:42:27.678 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct  9 09:42:28 compute-1 python3.9[51611]: ansible-ansible.builtin.file Invoked with group=zuul mode=0750 owner=zuul path=/var/lib/edpm-config/firewall setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  9 09:42:28 compute-1 python3.9[51763]: ansible-ansible.builtin.file Invoked with group=openvswitch owner=openvswitch path=/var/lib/openvswitch/ovn setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Oct  9 09:42:28 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:42:28 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:42:28 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:42:28.830 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:42:29 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-haproxy-nfs-cephfs-compute-1-oqhtjo[16451]: [WARNING] 281/094229 (4) : Server backend/nfs.cephfs.2 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 0 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Oct  9 09:42:29 compute-1 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-haproxy-nfs-cephfs-compute-1-oqhtjo[16451]: [ALERT] 281/094229 (4) : backend 'backend' has no server available!
Oct  9 09:42:29 compute-1 python3.9[51938]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  9 09:42:29 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:42:29 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:42:29 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:42:29.682 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:42:29 compute-1 python3.9[52091]: ansible-ansible.posix.seboolean Invoked with name=virt_sandbox_use_netlink persistent=True state=True ignore_selinux_state=False
Oct  9 09:42:30 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:42:30 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:42:30 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:42:30 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:42:30.833 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:42:31 compute-1 dbus-broker-launch[790]: avc:  op=load_policy lsm=selinux seqno=2 res=1
Oct  9 09:42:31 compute-1 python3.9[52253]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct  9 09:42:31 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:42:31 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 09:42:31 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:42:31.684 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 09:42:32 compute-1 python3.9[52337]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct  9 09:42:32 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:42:32 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:42:32 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:42:32.836 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:42:33 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:42:33 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:42:33 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:42:33.688 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:42:34 compute-1 python3.9[52491]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Oct  9 09:42:34 compute-1 python3[52646]: ansible-osp.edpm.edpm_nftables_snippet Invoked with content=- rule_name: 118 neutron vxlan networks#012  rule:#012    proto: udp#012    dport: 4789#012- rule_name: 119 neutron geneve networks#012  rule:#012    proto: udp#012    dport: 6081#012    state: ["UNTRACKED"]#012- rule_name: 120 neutron geneve networks no conntrack#012  rule:#012    proto: udp#012    dport: 6081#012    table: raw#012    chain: OUTPUT#012    jump: NOTRACK#012    action: append#012    state: []#012- rule_name: 121 neutron geneve networks no conntrack#012  rule:#012    proto: udp#012    dport: 6081#012    table: raw#012    chain: PREROUTING#012    jump: NOTRACK#012    action: append#012    state: []#012 dest=/var/lib/edpm-config/firewall/ovn.yaml state=present
Oct  9 09:42:34 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:42:34 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:42:34 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:42:34.839 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:42:35 compute-1 python3.9[52798]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:42:35 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:42:35 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:42:35 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:42:35 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:42:35.690 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:42:35 compute-1 python3.9[52951]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  9 09:42:36 compute-1 python3.9[53029]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:42:36 compute-1 python3.9[53181]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  9 09:42:36 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:42:36 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:42:36 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:42:36.842 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:42:37 compute-1 python3.9[53259]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.l_3qc9j0 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:42:37 compute-1 python3.9[53412]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  9 09:42:37 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:42:37 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:42:37 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:42:37.692 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:42:37 compute-1 python3.9[53490]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:42:38 compute-1 python3.9[53642]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  9 09:42:38 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:42:38 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:42:38 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:42:38.845 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:42:39 compute-1 python3[53795]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Oct  9 09:42:39 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:42:39 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:42:39 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:42:39.694 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:42:39 compute-1 python3.9[53948]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  9 09:42:40 compute-1 python3.9[54073]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760002959.3872094-432-31671337278761/.source.nft follow=False _original_basename=jump-chain.j2 checksum=81c2fc96c23335ffe374f9b064e885d5d971ddf9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:42:40 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:42:40 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:42:40 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:42:40 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:42:40.848 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:42:40 compute-1 python3.9[54225]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  9 09:42:41 compute-1 python3.9[54350]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-update-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760002960.4959867-477-25198386682782/.source.nft follow=False _original_basename=jump-chain.j2 checksum=81c2fc96c23335ffe374f9b064e885d5d971ddf9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:42:41 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:42:41 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:42:41 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:42:41.695 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:42:41 compute-1 python3.9[54503]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  9 09:42:42 compute-1 python3.9[54628]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-flushes.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760002961.6199763-522-248912768142839/.source.nft follow=False _original_basename=flush-chain.j2 checksum=4d3ffec49c8eb1a9b80d2f1e8cd64070063a87b4 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:42:42 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:42:42 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:42:42 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:42:42.850 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:42:42 compute-1 python3.9[54780]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  9 09:42:43 compute-1 python3.9[54905]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-chains.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760002962.575008-567-153886351382436/.source.nft follow=False _original_basename=chains.j2 checksum=298ada419730ec15df17ded0cc50c97a4014a591 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:42:43 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:42:43 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:42:43 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:42:43.697 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:42:43 compute-1 python3.9[55058]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  9 09:42:44 compute-1 python3.9[55183]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760002963.5462565-612-180697205211331/.source.nft follow=False _original_basename=ruleset.j2 checksum=bdba38546f86123f1927359d89789bd211aba99d backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:42:44 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:42:44 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:42:44 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:42:44.852 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:42:45 compute-1 python3.9[55335]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:42:45 compute-1 python3.9[55488]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  9 09:42:45 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:42:45 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:42:45 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:42:45 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:42:45.699 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:42:46 compute-1 python3.9[55643]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"#012include "/etc/nftables/edpm-chains.nft"#012include "/etc/nftables/edpm-rules.nft"#012include "/etc/nftables/edpm-jumps.nft"#012 path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:42:46 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:42:46 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000011s ======
Oct  9 09:42:46 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:42:46.854 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct  9 09:42:46 compute-1 python3.9[55795]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  9 09:42:47 compute-1 python3.9[55949]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  9 09:42:47 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:42:47 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000011s ======
Oct  9 09:42:47 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:42:47.701 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct  9 09:42:48 compute-1 python3.9[56103]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  9 09:42:48 compute-1 python3.9[56258]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:42:48 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:42:48 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000011s ======
Oct  9 09:42:48 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:42:48.856 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct  9 09:42:49 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:42:49 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:42:49 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:42:49.704 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:42:49 compute-1 python3.9[56434]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'machine'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  9 09:42:50 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:42:50 compute-1 python3.9[56587]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl set open . external_ids:hostname=compute-1.ctlplane.example.com external_ids:ovn-bridge=br-int external_ids:ovn-bridge-mappings=datacentre:br-ex external_ids:ovn-chassis-mac-mappings="datacentre:2e:0a:d8:76:c8:90" external_ids:ovn-encap-ip=172.19.0.101 external_ids:ovn-encap-type=geneve external_ids:ovn-encap-tos=0 external_ids:ovn-match-northd-version=False external_ids:ovn-monitor-all=True external_ids:ovn-remote=ssl:ovsdbserver-sb.openstack.svc:6642 external_ids:ovn-remote-probe-interval=60000 external_ids:ovn-ofctrl-wait-before-clear=8000 external_ids:rundir=/var/run/openvswitch #012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  9 09:42:50 compute-1 ovs-vsctl[56588]: ovs|00001|vsctl|INFO|Called as ovs-vsctl set open . external_ids:hostname=compute-1.ctlplane.example.com external_ids:ovn-bridge=br-int external_ids:ovn-bridge-mappings=datacentre:br-ex external_ids:ovn-chassis-mac-mappings=datacentre:2e:0a:d8:76:c8:90 external_ids:ovn-encap-ip=172.19.0.101 external_ids:ovn-encap-type=geneve external_ids:ovn-encap-tos=0 external_ids:ovn-match-northd-version=False external_ids:ovn-monitor-all=True external_ids:ovn-remote=ssl:ovsdbserver-sb.openstack.svc:6642 external_ids:ovn-remote-probe-interval=60000 external_ids:ovn-ofctrl-wait-before-clear=8000 external_ids:rundir=/var/run/openvswitch
Oct  9 09:42:50 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:42:50 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:42:50 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:42:50.859 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:42:51 compute-1 python3.9[56740]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail#012ovs-vsctl show | grep -q "Manager"#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  9 09:42:51 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:42:51 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:42:51 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:42:51.706 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:42:51 compute-1 python3.9[56896]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl --timeout=5 --id=@manager -- create Manager target=\"ptcp:********@manager#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  9 09:42:51 compute-1 ovs-vsctl[56897]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --timeout=5 --id=@manager -- create Manager "target=\"ptcp:6640:127.0.0.1\"" -- add Open_vSwitch . manager_options @manager
Oct  9 09:42:52 compute-1 python3.9[57047]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  9 09:42:52 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:42:52 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 09:42:52 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:42:52.861 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 09:42:52 compute-1 python3.9[57201]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct  9 09:42:53 compute-1 python3.9[57353]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  9 09:42:53 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:42:53 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:42:53 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:42:53.707 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:42:53 compute-1 python3.9[57432]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  9 09:42:54 compute-1 python3.9[57584]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  9 09:42:54 compute-1 python3.9[57662]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  9 09:42:54 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:42:54 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000011s ======
Oct  9 09:42:54 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:42:54.863 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct  9 09:42:55 compute-1 python3.9[57814]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:42:55 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:42:55 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:42:55 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:42:55 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:42:55.709 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:42:55 compute-1 python3.9[57967]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  9 09:42:56 compute-1 python3.9[58045]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:42:56 compute-1 python3.9[58197]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  9 09:42:56 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:42:56 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:42:56 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:42:56.865 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:42:57 compute-1 python3.9[58275]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:42:57 compute-1 python3.9[58428]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  9 09:42:57 compute-1 systemd[1]: Reloading.
Oct  9 09:42:57 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:42:57 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:42:57 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:42:57.712 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:42:57 compute-1 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  9 09:42:57 compute-1 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  9 09:42:58 compute-1 python3.9[58618]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  9 09:42:58 compute-1 python3.9[58696]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:42:58 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:42:58 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000011s ======
Oct  9 09:42:58 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:42:58.867 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct  9 09:42:59 compute-1 python3.9[58848]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  9 09:42:59 compute-1 python3.9[58927]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:42:59 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:42:59 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:42:59 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:42:59.714 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:43:00 compute-1 python3.9[59079]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  9 09:43:00 compute-1 systemd[1]: Reloading.
Oct  9 09:43:00 compute-1 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  9 09:43:00 compute-1 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  9 09:43:00 compute-1 systemd[1]: Starting Create netns directory...
Oct  9 09:43:00 compute-1 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Oct  9 09:43:00 compute-1 systemd[1]: netns-placeholder.service: Deactivated successfully.
Oct  9 09:43:00 compute-1 systemd[1]: Finished Create netns directory.
Oct  9 09:43:00 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:43:00 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:43:00 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 09:43:00 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:43:00.869 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 09:43:01 compute-1 python3.9[59273]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  9 09:43:01 compute-1 python3.9[59426]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ovn_controller/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  9 09:43:01 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:43:01 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:43:01 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:43:01.715 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:43:01 compute-1 python3.9[59549]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/ovn_controller/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1760002981.2302542-1365-270395360166098/.source _original_basename=healthcheck follow=False checksum=4098dd010265fabdf5c26b97d169fc4e575ff457 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Oct  9 09:43:02 compute-1 python3.9[59701]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct  9 09:43:02 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:43:02 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:43:02 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:43:02.872 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:43:03 compute-1 python3.9[59853]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/ovn_controller.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  9 09:43:03 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:43:03 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:43:03 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:43:03.717 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:43:03 compute-1 python3.9[59977]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/ovn_controller.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1760002983.022078-1440-257267231829686/.source.json _original_basename=.c4torahg follow=False checksum=2328fc98619beeb08ee32b01f15bb43094c10b61 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:43:04 compute-1 python3.9[60129]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/ovn_controller state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:43:04 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:43:04 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:43:04 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:43:04.874 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:43:05 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:43:05 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:43:05 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:43:05 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:43:05.719 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:43:05 compute-1 python3.9[60557]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/ovn_controller config_pattern=*.json debug=False
Oct  9 09:43:06 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  9 09:43:06 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:43:06 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:43:06 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  9 09:43:06 compute-1 python3.9[60788]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Oct  9 09:43:06 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:43:06 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000009s ======
Oct  9 09:43:06 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:43:06.876 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Oct  9 09:43:07 compute-1 python3.9[60941]: ansible-containers.podman.podman_container_info Invoked with executable=podman name=None
Oct  9 09:43:07 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:43:07 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 09:43:07 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:43:07.720 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 09:43:08 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:43:08 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000009s ======
Oct  9 09:43:08 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:43:08.877 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Oct  9 09:43:08 compute-1 python3[61112]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/ovn_controller config_id=ovn_controller config_overrides={} config_patterns=*.json log_base_path=/var/log/containers/stdouts debug=False
Oct  9 09:43:09 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:43:09 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:43:09 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:43:09 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:43:09 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:43:09.723 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:43:10 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:43:10 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:43:10 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:43:10 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:43:10.879 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:43:11 compute-1 ceph-mon[9795]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #22. Immutable memtables: 0.
Oct  9 09:43:11 compute-1 ceph-mon[9795]: rocksdb: (Original Log Time 2025/10/09-09:43:11.123116) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct  9 09:43:11 compute-1 ceph-mon[9795]: rocksdb: [db/flush_job.cc:856] [default] [JOB 9] Flushing memtable with next log file: 22
Oct  9 09:43:11 compute-1 ceph-mon[9795]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760002991123159, "job": 9, "event": "flush_started", "num_memtables": 1, "num_entries": 2304, "num_deletes": 250, "total_data_size": 6187562, "memory_usage": 6268800, "flush_reason": "Manual Compaction"}
Oct  9 09:43:11 compute-1 ceph-mon[9795]: rocksdb: [db/flush_job.cc:885] [default] [JOB 9] Level-0 flush table #23: started
Oct  9 09:43:11 compute-1 ceph-mon[9795]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760002991129575, "cf_name": "default", "job": 9, "event": "table_file_creation", "file_number": 23, "file_size": 2417462, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 10575, "largest_seqno": 12874, "table_properties": {"data_size": 2410776, "index_size": 3500, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2117, "raw_key_size": 17020, "raw_average_key_size": 20, "raw_value_size": 2395934, "raw_average_value_size": 2852, "num_data_blocks": 156, "num_entries": 840, "num_filter_entries": 840, "num_deletions": 250, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760002785, "oldest_key_time": 1760002785, "file_creation_time": 1760002991, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "94a5d839-0858-4e7b-94a4-0a54b15338db", "db_session_id": "M9CZJU0HKVV71NP1SGV8", "orig_file_number": 23, "seqno_to_time_mapping": "N/A"}}
Oct  9 09:43:11 compute-1 ceph-mon[9795]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 9] Flush lasted 6495 microseconds, and 4038 cpu microseconds.
Oct  9 09:43:11 compute-1 ceph-mon[9795]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  9 09:43:11 compute-1 ceph-mon[9795]: rocksdb: (Original Log Time 2025/10/09-09:43:11.129617) [db/flush_job.cc:967] [default] [JOB 9] Level-0 flush table #23: 2417462 bytes OK
Oct  9 09:43:11 compute-1 ceph-mon[9795]: rocksdb: (Original Log Time 2025/10/09-09:43:11.129629) [db/memtable_list.cc:519] [default] Level-0 commit table #23 started
Oct  9 09:43:11 compute-1 ceph-mon[9795]: rocksdb: (Original Log Time 2025/10/09-09:43:11.130202) [db/memtable_list.cc:722] [default] Level-0 commit table #23: memtable #1 done
Oct  9 09:43:11 compute-1 ceph-mon[9795]: rocksdb: (Original Log Time 2025/10/09-09:43:11.130222) EVENT_LOG_v1 {"time_micros": 1760002991130218, "job": 9, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct  9 09:43:11 compute-1 ceph-mon[9795]: rocksdb: (Original Log Time 2025/10/09-09:43:11.130234) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct  9 09:43:11 compute-1 ceph-mon[9795]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 9] Try to delete WAL files size 6177364, prev total WAL file size 6177364, number of live WAL files 2.
Oct  9 09:43:11 compute-1 ceph-mon[9795]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000019.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  9 09:43:11 compute-1 ceph-mon[9795]: rocksdb: (Original Log Time 2025/10/09-09:43:11.131133) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740030' seq:72057594037927935, type:22 .. '6D67727374617400323531' seq:0, type:0; will stop at (end)
Oct  9 09:43:11 compute-1 ceph-mon[9795]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 10] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct  9 09:43:11 compute-1 ceph-mon[9795]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 9 Base level 0, inputs: [23(2360KB)], [21(13MB)]
Oct  9 09:43:11 compute-1 ceph-mon[9795]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760002991131156, "job": 10, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [23], "files_L6": [21], "score": -1, "input_data_size": 16869310, "oldest_snapshot_seqno": -1}
Oct  9 09:43:11 compute-1 ceph-mon[9795]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 10] Generated table #24: 4398 keys, 14824005 bytes, temperature: kUnknown
Oct  9 09:43:11 compute-1 ceph-mon[9795]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760002991172429, "cf_name": "default", "job": 10, "event": "table_file_creation", "file_number": 24, "file_size": 14824005, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 14789856, "index_size": 22071, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 11013, "raw_key_size": 110473, "raw_average_key_size": 25, "raw_value_size": 14704904, "raw_average_value_size": 3343, "num_data_blocks": 954, "num_entries": 4398, "num_filter_entries": 4398, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760002515, "oldest_key_time": 0, "file_creation_time": 1760002991, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "94a5d839-0858-4e7b-94a4-0a54b15338db", "db_session_id": "M9CZJU0HKVV71NP1SGV8", "orig_file_number": 24, "seqno_to_time_mapping": "N/A"}}
Oct  9 09:43:11 compute-1 ceph-mon[9795]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  9 09:43:11 compute-1 ceph-mon[9795]: rocksdb: (Original Log Time 2025/10/09-09:43:11.172579) [db/compaction/compaction_job.cc:1663] [default] [JOB 10] Compacted 1@0 + 1@6 files to L6 => 14824005 bytes
Oct  9 09:43:11 compute-1 ceph-mon[9795]: rocksdb: (Original Log Time 2025/10/09-09:43:11.172987) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 408.4 rd, 358.9 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.3, 13.8 +0.0 blob) out(14.1 +0.0 blob), read-write-amplify(13.1) write-amplify(6.1) OK, records in: 4819, records dropped: 421 output_compression: NoCompression
Oct  9 09:43:11 compute-1 ceph-mon[9795]: rocksdb: (Original Log Time 2025/10/09-09:43:11.173002) EVENT_LOG_v1 {"time_micros": 1760002991172995, "job": 10, "event": "compaction_finished", "compaction_time_micros": 41306, "compaction_time_cpu_micros": 20066, "output_level": 6, "num_output_files": 1, "total_output_size": 14824005, "num_input_records": 4819, "num_output_records": 4398, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct  9 09:43:11 compute-1 ceph-mon[9795]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000023.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  9 09:43:11 compute-1 ceph-mon[9795]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760002991173313, "job": 10, "event": "table_file_deletion", "file_number": 23}
Oct  9 09:43:11 compute-1 ceph-mon[9795]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000021.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  9 09:43:11 compute-1 ceph-mon[9795]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760002991174872, "job": 10, "event": "table_file_deletion", "file_number": 21}
Oct  9 09:43:11 compute-1 ceph-mon[9795]: rocksdb: (Original Log Time 2025/10/09-09:43:11.131087) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  9 09:43:11 compute-1 ceph-mon[9795]: rocksdb: (Original Log Time 2025/10/09-09:43:11.174891) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  9 09:43:11 compute-1 ceph-mon[9795]: rocksdb: (Original Log Time 2025/10/09-09:43:11.174893) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  9 09:43:11 compute-1 ceph-mon[9795]: rocksdb: (Original Log Time 2025/10/09-09:43:11.174894) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  9 09:43:11 compute-1 ceph-mon[9795]: rocksdb: (Original Log Time 2025/10/09-09:43:11.174895) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  9 09:43:11 compute-1 ceph-mon[9795]: rocksdb: (Original Log Time 2025/10/09-09:43:11.174896) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  9 09:43:11 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:43:11 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:43:11 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:43:11.724 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:43:12 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:43:12 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:43:12 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:43:12.882 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:43:13 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:43:13 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:43:13 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:43:13.726 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:43:13 compute-1 podman[61123]: 2025-10-09 09:43:13.934444118 +0000 UTC m=+4.951437246 image pull 70c92fb64e1eda6ef063d34e60e9a541e44edbaa51e757e8304331202c76a3a7 quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857
Oct  9 09:43:14 compute-1 podman[61275]: 2025-10-09 09:43:14.031019155 +0000 UTC m=+0.033290766 container create 36bf385830b85e085dc3ff9729118ef141f0f1a0fdc2513f7947ab6ab795421e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=ovn_controller, io.buildah.version=1.41.3, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_controller)
Oct  9 09:43:14 compute-1 podman[61275]: 2025-10-09 09:43:14.013616365 +0000 UTC m=+0.015887986 image pull 70c92fb64e1eda6ef063d34e60e9a541e44edbaa51e757e8304331202c76a3a7 quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857
Oct  9 09:43:14 compute-1 python3[61112]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name ovn_controller --conmon-pidfile /run/ovn_controller.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --healthcheck-command /openstack/healthcheck --label config_id=ovn_controller --label container_name=ovn_controller --label managed_by=edpm_ansible --label config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --user root --volume /lib/modules:/lib/modules:ro --volume /run:/run --volume /var/lib/openvswitch/ovn:/run/ovn:shared,z --volume /var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z --volume /var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z --volume /var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z --volume /var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857
Oct  9 09:43:14 compute-1 ceph-mon[9795]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #25. Immutable memtables: 0.
Oct  9 09:43:14 compute-1 ceph-mon[9795]: rocksdb: (Original Log Time 2025/10/09-09:43:14.433923) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct  9 09:43:14 compute-1 ceph-mon[9795]: rocksdb: [db/flush_job.cc:856] [default] [JOB 11] Flushing memtable with next log file: 25
Oct  9 09:43:14 compute-1 ceph-mon[9795]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760002994434292, "job": 11, "event": "flush_started", "num_memtables": 1, "num_entries": 290, "num_deletes": 251, "total_data_size": 122966, "memory_usage": 129496, "flush_reason": "Manual Compaction"}
Oct  9 09:43:14 compute-1 ceph-mon[9795]: rocksdb: [db/flush_job.cc:885] [default] [JOB 11] Level-0 flush table #26: started
Oct  9 09:43:14 compute-1 ceph-mon[9795]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760002994435267, "cf_name": "default", "job": 11, "event": "table_file_creation", "file_number": 26, "file_size": 80942, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 12879, "largest_seqno": 13164, "table_properties": {"data_size": 79032, "index_size": 138, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 709, "raw_key_size": 4703, "raw_average_key_size": 17, "raw_value_size": 75309, "raw_average_value_size": 278, "num_data_blocks": 6, "num_entries": 270, "num_filter_entries": 270, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760002992, "oldest_key_time": 1760002992, "file_creation_time": 1760002994, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "94a5d839-0858-4e7b-94a4-0a54b15338db", "db_session_id": "M9CZJU0HKVV71NP1SGV8", "orig_file_number": 26, "seqno_to_time_mapping": "N/A"}}
Oct  9 09:43:14 compute-1 ceph-mon[9795]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 11] Flush lasted 1368 microseconds, and 526 cpu microseconds.
Oct  9 09:43:14 compute-1 ceph-mon[9795]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  9 09:43:14 compute-1 ceph-mon[9795]: rocksdb: (Original Log Time 2025/10/09-09:43:14.435292) [db/flush_job.cc:967] [default] [JOB 11] Level-0 flush table #26: 80942 bytes OK
Oct  9 09:43:14 compute-1 ceph-mon[9795]: rocksdb: (Original Log Time 2025/10/09-09:43:14.435301) [db/memtable_list.cc:519] [default] Level-0 commit table #26 started
Oct  9 09:43:14 compute-1 ceph-mon[9795]: rocksdb: (Original Log Time 2025/10/09-09:43:14.435832) [db/memtable_list.cc:722] [default] Level-0 commit table #26: memtable #1 done
Oct  9 09:43:14 compute-1 ceph-mon[9795]: rocksdb: (Original Log Time 2025/10/09-09:43:14.435843) EVENT_LOG_v1 {"time_micros": 1760002994435840, "job": 11, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct  9 09:43:14 compute-1 ceph-mon[9795]: rocksdb: (Original Log Time 2025/10/09-09:43:14.435849) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct  9 09:43:14 compute-1 ceph-mon[9795]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 11] Try to delete WAL files size 120817, prev total WAL file size 120817, number of live WAL files 2.
Oct  9 09:43:14 compute-1 ceph-mon[9795]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000022.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  9 09:43:14 compute-1 ceph-mon[9795]: rocksdb: (Original Log Time 2025/10/09-09:43:14.436274) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F7300353032' seq:72057594037927935, type:22 .. '7061786F7300373534' seq:0, type:0; will stop at (end)
Oct  9 09:43:14 compute-1 ceph-mon[9795]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 12] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct  9 09:43:14 compute-1 ceph-mon[9795]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 11 Base level 0, inputs: [26(79KB)], [24(14MB)]
Oct  9 09:43:14 compute-1 ceph-mon[9795]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760002994436298, "job": 12, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [26], "files_L6": [24], "score": -1, "input_data_size": 14904947, "oldest_snapshot_seqno": -1}
Oct  9 09:43:14 compute-1 ceph-mon[9795]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 12] Generated table #27: 4158 keys, 11558194 bytes, temperature: kUnknown
Oct  9 09:43:14 compute-1 ceph-mon[9795]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760002994466823, "cf_name": "default", "job": 12, "event": "table_file_creation", "file_number": 27, "file_size": 11558194, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11527322, "index_size": 19370, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 10437, "raw_key_size": 106424, "raw_average_key_size": 25, "raw_value_size": 11448227, "raw_average_value_size": 2753, "num_data_blocks": 828, "num_entries": 4158, "num_filter_entries": 4158, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760002515, "oldest_key_time": 0, "file_creation_time": 1760002994, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "94a5d839-0858-4e7b-94a4-0a54b15338db", "db_session_id": "M9CZJU0HKVV71NP1SGV8", "orig_file_number": 27, "seqno_to_time_mapping": "N/A"}}
Oct  9 09:43:14 compute-1 ceph-mon[9795]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  9 09:43:14 compute-1 ceph-mon[9795]: rocksdb: (Original Log Time 2025/10/09-09:43:14.467174) [db/compaction/compaction_job.cc:1663] [default] [JOB 12] Compacted 1@0 + 1@6 files to L6 => 11558194 bytes
Oct  9 09:43:14 compute-1 ceph-mon[9795]: rocksdb: (Original Log Time 2025/10/09-09:43:14.467646) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 484.3 rd, 375.5 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.1, 14.1 +0.0 blob) out(11.0 +0.0 blob), read-write-amplify(326.9) write-amplify(142.8) OK, records in: 4668, records dropped: 510 output_compression: NoCompression
Oct  9 09:43:14 compute-1 ceph-mon[9795]: rocksdb: (Original Log Time 2025/10/09-09:43:14.467660) EVENT_LOG_v1 {"time_micros": 1760002994467654, "job": 12, "event": "compaction_finished", "compaction_time_micros": 30779, "compaction_time_cpu_micros": 17096, "output_level": 6, "num_output_files": 1, "total_output_size": 11558194, "num_input_records": 4668, "num_output_records": 4158, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct  9 09:43:14 compute-1 ceph-mon[9795]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000026.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  9 09:43:14 compute-1 ceph-mon[9795]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760002994467915, "job": 12, "event": "table_file_deletion", "file_number": 26}
Oct  9 09:43:14 compute-1 ceph-mon[9795]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000024.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  9 09:43:14 compute-1 ceph-mon[9795]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760002994469500, "job": 12, "event": "table_file_deletion", "file_number": 24}
Oct  9 09:43:14 compute-1 ceph-mon[9795]: rocksdb: (Original Log Time 2025/10/09-09:43:14.436232) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  9 09:43:14 compute-1 ceph-mon[9795]: rocksdb: (Original Log Time 2025/10/09-09:43:14.469530) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  9 09:43:14 compute-1 ceph-mon[9795]: rocksdb: (Original Log Time 2025/10/09-09:43:14.469534) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  9 09:43:14 compute-1 ceph-mon[9795]: rocksdb: (Original Log Time 2025/10/09-09:43:14.469535) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  9 09:43:14 compute-1 ceph-mon[9795]: rocksdb: (Original Log Time 2025/10/09-09:43:14.469536) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  9 09:43:14 compute-1 ceph-mon[9795]: rocksdb: (Original Log Time 2025/10/09-09:43:14.469538) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  9 09:43:14 compute-1 python3.9[61455]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  9 09:43:14 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:43:14 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:43:14 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:43:14.884 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:43:15 compute-1 python3.9[61609]: ansible-file Invoked with path=/etc/systemd/system/edpm_ovn_controller.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:43:15 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:43:15 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:43:15 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:43:15 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:43:15.729 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:43:15 compute-1 python3.9[61686]: ansible-stat Invoked with path=/etc/systemd/system/edpm_ovn_controller_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  9 09:43:16 compute-1 python3.9[61837]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1760002995.7959394-1704-238178989776320/source dest=/etc/systemd/system/edpm_ovn_controller.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:43:16 compute-1 python3.9[61913]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Oct  9 09:43:16 compute-1 systemd[1]: Reloading.
Oct  9 09:43:16 compute-1 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  9 09:43:16 compute-1 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  9 09:43:16 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:43:16 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:43:16 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:43:16.887 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:43:17 compute-1 python3.9[62024]: ansible-systemd Invoked with state=restarted name=edpm_ovn_controller.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  9 09:43:17 compute-1 systemd[1]: Reloading.
Oct  9 09:43:17 compute-1 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  9 09:43:17 compute-1 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  9 09:43:17 compute-1 systemd[1]: Starting ovn_controller container...
Oct  9 09:43:17 compute-1 systemd[1]: Started libcrun container.
Oct  9 09:43:17 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/da26621943a776b8505fa56f3ae642147bf08deae6a1d60d99cb5dc80cb7ecac/merged/run/ovn supports timestamps until 2038 (0x7fffffff)
Oct  9 09:43:17 compute-1 systemd[1]: Started /usr/bin/podman healthcheck run 36bf385830b85e085dc3ff9729118ef141f0f1a0fdc2513f7947ab6ab795421e.
Oct  9 09:43:17 compute-1 podman[62068]: 2025-10-09 09:43:17.712487021 +0000 UTC m=+0.073355957 container init 36bf385830b85e085dc3ff9729118ef141f0f1a0fdc2513f7947ab6ab795421e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller)
Oct  9 09:43:17 compute-1 ovn_controller[62080]: + sudo -E kolla_set_configs
Oct  9 09:43:17 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:43:17 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:43:17 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:43:17.731 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:43:17 compute-1 podman[62068]: 2025-10-09 09:43:17.734529896 +0000 UTC m=+0.095398833 container start 36bf385830b85e085dc3ff9729118ef141f0f1a0fdc2513f7947ab6ab795421e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct  9 09:43:17 compute-1 edpm-start-podman-container[62068]: ovn_controller
Oct  9 09:43:17 compute-1 systemd[1]: Created slice User Slice of UID 0.
Oct  9 09:43:17 compute-1 systemd[1]: Starting User Runtime Directory /run/user/0...
Oct  9 09:43:17 compute-1 systemd[1]: Finished User Runtime Directory /run/user/0.
Oct  9 09:43:17 compute-1 systemd[1]: Starting User Manager for UID 0...
Oct  9 09:43:17 compute-1 edpm-start-podman-container[62067]: Creating additional drop-in dependency for "ovn_controller" (36bf385830b85e085dc3ff9729118ef141f0f1a0fdc2513f7947ab6ab795421e)
Oct  9 09:43:17 compute-1 systemd[1]: Reloading.
Oct  9 09:43:17 compute-1 podman[62087]: 2025-10-09 09:43:17.828280672 +0000 UTC m=+0.085322312 container health_status 36bf385830b85e085dc3ff9729118ef141f0f1a0fdc2513f7947ab6ab795421e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=starting, health_failing_streak=1, health_log=, org.label-schema.vendor=CentOS, container_name=ovn_controller, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  9 09:43:17 compute-1 systemd[62108]: Queued start job for default target Main User Target.
Oct  9 09:43:17 compute-1 systemd[62108]: Created slice User Application Slice.
Oct  9 09:43:17 compute-1 systemd[62108]: Mark boot as successful after the user session has run 2 minutes was skipped because of an unmet condition check (ConditionUser=!@system).
Oct  9 09:43:17 compute-1 systemd[62108]: Started Daily Cleanup of User's Temporary Directories.
Oct  9 09:43:17 compute-1 systemd[62108]: Reached target Paths.
Oct  9 09:43:17 compute-1 systemd[62108]: Reached target Timers.
Oct  9 09:43:17 compute-1 systemd[62108]: Starting D-Bus User Message Bus Socket...
Oct  9 09:43:17 compute-1 systemd[62108]: Starting Create User's Volatile Files and Directories...
Oct  9 09:43:17 compute-1 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  9 09:43:17 compute-1 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  9 09:43:17 compute-1 systemd[62108]: Finished Create User's Volatile Files and Directories.
Oct  9 09:43:17 compute-1 systemd[62108]: Listening on D-Bus User Message Bus Socket.
Oct  9 09:43:17 compute-1 systemd[62108]: Reached target Sockets.
Oct  9 09:43:17 compute-1 systemd[62108]: Reached target Basic System.
Oct  9 09:43:17 compute-1 systemd[62108]: Reached target Main User Target.
Oct  9 09:43:17 compute-1 systemd[62108]: Startup finished in 110ms.
Oct  9 09:43:18 compute-1 systemd[1]: Started User Manager for UID 0.
Oct  9 09:43:18 compute-1 systemd[1]: Started ovn_controller container.
Oct  9 09:43:18 compute-1 systemd[1]: 36bf385830b85e085dc3ff9729118ef141f0f1a0fdc2513f7947ab6ab795421e-4fb4dfcd51813fe6.service: Main process exited, code=exited, status=1/FAILURE
Oct  9 09:43:18 compute-1 systemd[1]: 36bf385830b85e085dc3ff9729118ef141f0f1a0fdc2513f7947ab6ab795421e-4fb4dfcd51813fe6.service: Failed with result 'exit-code'.
Oct  9 09:43:18 compute-1 systemd[1]: Started Session c1 of User root.
Oct  9 09:43:18 compute-1 ovn_controller[62080]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Oct  9 09:43:18 compute-1 ovn_controller[62080]: INFO:__main__:Validating config file
Oct  9 09:43:18 compute-1 ovn_controller[62080]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Oct  9 09:43:18 compute-1 ovn_controller[62080]: INFO:__main__:Writing out command to execute
Oct  9 09:43:18 compute-1 systemd[1]: session-c1.scope: Deactivated successfully.
Oct  9 09:43:18 compute-1 ovn_controller[62080]: ++ cat /run_command
Oct  9 09:43:18 compute-1 ovn_controller[62080]: + CMD='/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock  -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt '
Oct  9 09:43:18 compute-1 ovn_controller[62080]: + ARGS=
Oct  9 09:43:18 compute-1 ovn_controller[62080]: + sudo kolla_copy_cacerts
Oct  9 09:43:18 compute-1 systemd[1]: Started Session c2 of User root.
Oct  9 09:43:18 compute-1 ovn_controller[62080]: + [[ ! -n '' ]]
Oct  9 09:43:18 compute-1 ovn_controller[62080]: + . kolla_extend_start
Oct  9 09:43:18 compute-1 ovn_controller[62080]: + echo 'Running command: '\''/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock  -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt '\'''
Oct  9 09:43:18 compute-1 ovn_controller[62080]: Running command: '/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock  -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt '
Oct  9 09:43:18 compute-1 ovn_controller[62080]: + umask 0022
Oct  9 09:43:18 compute-1 ovn_controller[62080]: + exec /usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt
Oct  9 09:43:18 compute-1 systemd[1]: session-c2.scope: Deactivated successfully.
Oct  9 09:43:18 compute-1 ovn_controller[62080]: 2025-10-09T09:43:18Z|00001|reconnect|INFO|unix:/run/openvswitch/db.sock: connecting...
Oct  9 09:43:18 compute-1 ovn_controller[62080]: 2025-10-09T09:43:18Z|00002|reconnect|INFO|unix:/run/openvswitch/db.sock: connected
Oct  9 09:43:18 compute-1 ovn_controller[62080]: 2025-10-09T09:43:18Z|00003|main|INFO|OVN internal version is : [24.03.7-20.33.0-76.8]
Oct  9 09:43:18 compute-1 ovn_controller[62080]: 2025-10-09T09:43:18Z|00004|main|INFO|OVS IDL reconnected, force recompute.
Oct  9 09:43:18 compute-1 ovn_controller[62080]: 2025-10-09T09:43:18Z|00005|reconnect|INFO|ssl:ovsdbserver-sb.openstack.svc:6642: connecting...
Oct  9 09:43:18 compute-1 ovn_controller[62080]: 2025-10-09T09:43:18Z|00006|main|INFO|OVNSB IDL reconnected, force recompute.
Oct  9 09:43:18 compute-1 NetworkManager[982]: <info>  [1760002998.1541] manager: (br-int): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/16)
Oct  9 09:43:18 compute-1 NetworkManager[982]: <info>  [1760002998.1546] device (br-int)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct  9 09:43:18 compute-1 NetworkManager[982]: <info>  [1760002998.1555] manager: (br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/17)
Oct  9 09:43:18 compute-1 NetworkManager[982]: <info>  [1760002998.1559] manager: (br-int): new Open vSwitch Bridge device (/org/freedesktop/NetworkManager/Devices/18)
Oct  9 09:43:18 compute-1 NetworkManager[982]: <info>  [1760002998.1561] device (br-int)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'none', managed-type: 'full')
Oct  9 09:43:18 compute-1 ovn_controller[62080]: 2025-10-09T09:43:18Z|00007|reconnect|INFO|ssl:ovsdbserver-sb.openstack.svc:6642: connected
Oct  9 09:43:18 compute-1 kernel: br-int: entered promiscuous mode
Oct  9 09:43:18 compute-1 ovn_controller[62080]: 2025-10-09T09:43:18Z|00008|features|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Oct  9 09:43:18 compute-1 ovn_controller[62080]: 2025-10-09T09:43:18Z|00009|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Oct  9 09:43:18 compute-1 ovn_controller[62080]: 2025-10-09T09:43:18Z|00010|features|INFO|OVS Feature: ct_zero_snat, state: supported
Oct  9 09:43:18 compute-1 ovn_controller[62080]: 2025-10-09T09:43:18Z|00011|features|INFO|OVS Feature: ct_flush, state: supported
Oct  9 09:43:18 compute-1 ovn_controller[62080]: 2025-10-09T09:43:18Z|00012|features|INFO|OVS Feature: dp_hash_l4_sym_support, state: supported
Oct  9 09:43:18 compute-1 ovn_controller[62080]: 2025-10-09T09:43:18Z|00013|reconnect|INFO|unix:/run/openvswitch/db.sock: connecting...
Oct  9 09:43:18 compute-1 ovn_controller[62080]: 2025-10-09T09:43:18Z|00014|main|INFO|OVS feature set changed, force recompute.
Oct  9 09:43:18 compute-1 ovn_controller[62080]: 2025-10-09T09:43:18Z|00015|ofctrl|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Oct  9 09:43:18 compute-1 ovn_controller[62080]: 2025-10-09T09:43:18Z|00016|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Oct  9 09:43:18 compute-1 ovn_controller[62080]: 2025-10-09T09:43:18Z|00017|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Oct  9 09:43:18 compute-1 ovn_controller[62080]: 2025-10-09T09:43:18Z|00018|ofctrl|INFO|ofctrl-wait-before-clear is now 8000 ms (was 0 ms)
Oct  9 09:43:18 compute-1 ovn_controller[62080]: 2025-10-09T09:43:18Z|00019|main|INFO|OVS OpenFlow connection reconnected,force recompute.
Oct  9 09:43:18 compute-1 ovn_controller[62080]: 2025-10-09T09:43:18Z|00020|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Oct  9 09:43:18 compute-1 ovn_controller[62080]: 2025-10-09T09:43:18Z|00021|reconnect|INFO|unix:/run/openvswitch/db.sock: connected
Oct  9 09:43:18 compute-1 ovn_controller[62080]: 2025-10-09T09:43:18Z|00022|main|INFO|OVS feature set changed, force recompute.
Oct  9 09:43:18 compute-1 ovn_controller[62080]: 2025-10-09T09:43:18Z|00023|features|INFO|OVS DB schema supports 4 flow table prefixes, our IDL supports: 4
Oct  9 09:43:18 compute-1 ovn_controller[62080]: 2025-10-09T09:43:18Z|00024|main|INFO|Setting flow table prefixes: ip_src, ip_dst, ipv6_src, ipv6_dst.
Oct  9 09:43:18 compute-1 ovn_controller[62080]: 2025-10-09T09:43:18Z|00001|pinctrl(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Oct  9 09:43:18 compute-1 ovn_controller[62080]: 2025-10-09T09:43:18Z|00002|rconn(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Oct  9 09:43:18 compute-1 ovn_controller[62080]: 2025-10-09T09:43:18Z|00001|statctrl(ovn_statctrl3)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Oct  9 09:43:18 compute-1 ovn_controller[62080]: 2025-10-09T09:43:18Z|00002|rconn(ovn_statctrl3)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Oct  9 09:43:18 compute-1 ovn_controller[62080]: 2025-10-09T09:43:18Z|00003|rconn(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Oct  9 09:43:18 compute-1 ovn_controller[62080]: 2025-10-09T09:43:18Z|00003|rconn(ovn_statctrl3)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Oct  9 09:43:18 compute-1 NetworkManager[982]: <info>  [1760002998.1680] manager: (ovn-c24bec-0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/19)
Oct  9 09:43:18 compute-1 NetworkManager[982]: <info>  [1760002998.1685] manager: (ovn-fc69d3-0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/20)
Oct  9 09:43:18 compute-1 NetworkManager[982]: <info>  [1760002998.1688] manager: (ovn-ef2171-0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/21)
Oct  9 09:43:18 compute-1 kernel: genev_sys_6081: entered promiscuous mode
Oct  9 09:43:18 compute-1 NetworkManager[982]: <info>  [1760002998.1807] device (genev_sys_6081): carrier: link connected
Oct  9 09:43:18 compute-1 NetworkManager[982]: <info>  [1760002998.1809] manager: (genev_sys_6081): new Generic device (/org/freedesktop/NetworkManager/Devices/22)
Oct  9 09:43:18 compute-1 systemd-udevd[62218]: Network interface NamePolicy= disabled on kernel command line.
Oct  9 09:43:18 compute-1 systemd-udevd[62214]: Network interface NamePolicy= disabled on kernel command line.
Oct  9 09:43:18 compute-1 python3.9[62344]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl remove open . other_config hw-offload#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  9 09:43:18 compute-1 ovs-vsctl[62345]: ovs|00001|vsctl|INFO|Called as ovs-vsctl remove open . other_config hw-offload
Oct  9 09:43:18 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:43:18 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000009s ======
Oct  9 09:43:18 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:43:18.889 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Oct  9 09:43:19 compute-1 python3.9[62497]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl get Open_vSwitch . external_ids:ovn-cms-options | sed 's/\"//g'#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  9 09:43:19 compute-1 ovs-vsctl[62499]: ovs|00001|db_ctl_base|ERR|no key "ovn-cms-options" in Open_vSwitch record "." column external_ids
Oct  9 09:43:19 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:43:19 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:43:19 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:43:19.734 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:43:19 compute-1 python3.9[62653]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl remove Open_vSwitch . external_ids ovn-cms-options#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  9 09:43:19 compute-1 ovs-vsctl[62654]: ovs|00001|vsctl|INFO|Called as ovs-vsctl remove Open_vSwitch . external_ids ovn-cms-options
Oct  9 09:43:20 compute-1 systemd[1]: session-33.scope: Deactivated successfully.
Oct  9 09:43:20 compute-1 systemd[1]: session-33.scope: Consumed 40.888s CPU time.
Oct  9 09:43:20 compute-1 systemd-logind[798]: Session 33 logged out. Waiting for processes to exit.
Oct  9 09:43:20 compute-1 systemd-logind[798]: Removed session 33.
Oct  9 09:43:20 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:43:20 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:43:20 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:43:20 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:43:20.892 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:43:21 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:43:21 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:43:21 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:43:21.736 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:43:22 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:43:22 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:43:22 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:43:22.896 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:43:23 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:43:23 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:43:23 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:43:23.738 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:43:24 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:43:24 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:43:24 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:43:24.897 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:43:24 compute-1 systemd-logind[798]: New session 35 of user zuul.
Oct  9 09:43:24 compute-1 systemd[1]: Started Session 35 of User zuul.
Oct  9 09:43:25 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:43:25 compute-1 python3.9[62835]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  9 09:43:25 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:43:25 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:43:25 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:43:25.740 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:43:26 compute-1 python3.9[62991]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Oct  9 09:43:26 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:43:26 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:43:26 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:43:26.900 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:43:27 compute-1 python3.9[63143]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  9 09:43:27 compute-1 python3.9[63296]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/kill_scripts setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  9 09:43:27 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:43:27 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:43:27 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:43:27.742 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:43:28 compute-1 python3.9[63448]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/ovn-metadata-proxy setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  9 09:43:28 compute-1 systemd[1]: Stopping User Manager for UID 0...
Oct  9 09:43:28 compute-1 systemd[62108]: Activating special unit Exit the Session...
Oct  9 09:43:28 compute-1 systemd[62108]: Stopped target Main User Target.
Oct  9 09:43:28 compute-1 systemd[62108]: Stopped target Basic System.
Oct  9 09:43:28 compute-1 systemd[62108]: Stopped target Paths.
Oct  9 09:43:28 compute-1 systemd[62108]: Stopped target Sockets.
Oct  9 09:43:28 compute-1 systemd[62108]: Stopped target Timers.
Oct  9 09:43:28 compute-1 systemd[62108]: Stopped Daily Cleanup of User's Temporary Directories.
Oct  9 09:43:28 compute-1 systemd[62108]: Closed D-Bus User Message Bus Socket.
Oct  9 09:43:28 compute-1 systemd[62108]: Stopped Create User's Volatile Files and Directories.
Oct  9 09:43:28 compute-1 systemd[62108]: Removed slice User Application Slice.
Oct  9 09:43:28 compute-1 systemd[62108]: Reached target Shutdown.
Oct  9 09:43:28 compute-1 systemd[62108]: Finished Exit the Session.
Oct  9 09:43:28 compute-1 systemd[62108]: Reached target Exit the Session.
Oct  9 09:43:28 compute-1 systemd[1]: user@0.service: Deactivated successfully.
Oct  9 09:43:28 compute-1 systemd[1]: Stopped User Manager for UID 0.
Oct  9 09:43:28 compute-1 systemd[1]: Stopping User Runtime Directory /run/user/0...
Oct  9 09:43:28 compute-1 systemd[1]: run-user-0.mount: Deactivated successfully.
Oct  9 09:43:28 compute-1 systemd[1]: user-runtime-dir@0.service: Deactivated successfully.
Oct  9 09:43:28 compute-1 systemd[1]: Stopped User Runtime Directory /run/user/0.
Oct  9 09:43:28 compute-1 systemd[1]: Removed slice User Slice of UID 0.
Oct  9 09:43:28 compute-1 python3.9[63601]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/external/pids setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  9 09:43:28 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:43:28 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:43:28 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:43:28.901 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:43:29 compute-1 python3.9[63751]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  9 09:43:29 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:43:29 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:43:29 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:43:29.744 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:43:29 compute-1 python3.9[63929]: ansible-ansible.posix.seboolean Invoked with name=virt_sandbox_use_netlink persistent=True state=True ignore_selinux_state=False
Oct  9 09:43:30 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:43:30 compute-1 python3.9[64079]: ansible-ansible.legacy.stat Invoked with path=/var/lib/neutron/ovn_metadata_haproxy_wrapper follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  9 09:43:30 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:43:30 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:43:30 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:43:30.904 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:43:31 compute-1 python3.9[64200]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/neutron/ovn_metadata_haproxy_wrapper mode=0755 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1760003010.4133956-219-103154044020975/.source follow=False _original_basename=haproxy.j2 checksum=4bca74f6ee0b6450624d22997e2f90c414d58b44 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct  9 09:43:31 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:43:31 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:43:31 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:43:31.745 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:43:31 compute-1 python3.9[64351]: ansible-ansible.legacy.stat Invoked with path=/var/lib/neutron/kill_scripts/haproxy-kill follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  9 09:43:32 compute-1 python3.9[64472]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/neutron/kill_scripts/haproxy-kill mode=0755 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1760003011.5631163-264-280003673628122/.source follow=False _original_basename=kill-script.j2 checksum=2dfb5489f491f61b95691c3bf95fa1fe48ff3700 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct  9 09:43:32 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:43:32 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:43:32 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:43:32.905 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:43:33 compute-1 python3.9[64624]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct  9 09:43:33 compute-1 python3.9[64709]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct  9 09:43:33 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:43:33 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:43:33 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:43:33.747 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:43:34 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:43:34 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:43:34 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:43:34.908 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:43:35 compute-1 python3.9[64862]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Oct  9 09:43:35 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:43:35 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:43:35 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:43:35 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:43:35.748 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:43:35 compute-1 python3.9[65016]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/01-rootwrap.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  9 09:43:36 compute-1 python3.9[65138]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/01-rootwrap.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1760003015.654522-375-50066069519952/.source.conf follow=False _original_basename=rootwrap.conf.j2 checksum=11f2cfb4b7d97b2cef3c2c2d88089e6999cffe22 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct  9 09:43:36 compute-1 python3.9[65288]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/01-neutron-ovn-metadata-agent.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  9 09:43:36 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:43:36 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:43:36 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:43:36.911 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:43:37 compute-1 python3.9[65409]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/01-neutron-ovn-metadata-agent.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1760003016.5177639-375-6988924721572/.source.conf follow=False _original_basename=neutron-ovn-metadata-agent.conf.j2 checksum=8bc979abbe81c2cf3993a225517a7e2483e20443 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct  9 09:43:37 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:43:37 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 09:43:37 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:43:37.749 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 09:43:38 compute-1 python3.9[65560]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/10-neutron-metadata.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  9 09:43:38 compute-1 python3.9[65681]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/10-neutron-metadata.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1760003017.9636252-507-182668819495583/.source.conf _original_basename=10-neutron-metadata.conf follow=False checksum=ca7d4d155f5b812fab1a3b70e34adb495d291b8d backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct  9 09:43:38 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:43:38 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:43:38 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:43:38.914 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:43:39 compute-1 python3.9[65831]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/05-nova-metadata.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  9 09:43:39 compute-1 python3.9[65952]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/05-nova-metadata.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1760003018.7319849-507-34303956305945/.source.conf _original_basename=05-nova-metadata.conf follow=False checksum=a14d6b38898a379cd37fc0bf365d17f10859446f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct  9 09:43:39 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:43:39 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:43:39 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:43:39.752 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:43:40 compute-1 python3.9[66103]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  9 09:43:40 compute-1 python3.9[66257]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct  9 09:43:40 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:43:40 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:43:40 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:43:40 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:43:40.917 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:43:41 compute-1 python3.9[66409]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  9 09:43:41 compute-1 python3.9[66487]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  9 09:43:41 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:43:41 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:43:41 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:43:41.755 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:43:41 compute-1 python3.9[66640]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  9 09:43:42 compute-1 python3.9[66718]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  9 09:43:42 compute-1 python3.9[66870]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:43:42 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:43:42 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:43:42 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:43:42.919 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:43:43 compute-1 python3.9[67022]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  9 09:43:43 compute-1 python3.9[67101]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:43:43 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:43:43 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:43:43 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:43:43.756 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:43:44 compute-1 python3.9[67253]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  9 09:43:44 compute-1 python3.9[67331]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:43:44 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:43:44 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:43:44 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:43:44.920 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:43:44 compute-1 python3.9[67483]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  9 09:43:45 compute-1 systemd[1]: Reloading.
Oct  9 09:43:45 compute-1 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  9 09:43:45 compute-1 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  9 09:43:45 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:43:45 compute-1 python3.9[67673]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  9 09:43:45 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:43:45 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:43:45 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:43:45.758 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:43:46 compute-1 python3.9[67751]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:43:46 compute-1 python3.9[67903]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  9 09:43:46 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:43:46 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:43:46 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:43:46.923 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:43:46 compute-1 python3.9[67981]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:43:47 compute-1 python3.9[68133]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  9 09:43:47 compute-1 systemd[1]: Reloading.
Oct  9 09:43:47 compute-1 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  9 09:43:47 compute-1 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  9 09:43:47 compute-1 systemd[1]: Starting Create netns directory...
Oct  9 09:43:47 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:43:47 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:43:47 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:43:47.761 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:43:47 compute-1 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Oct  9 09:43:47 compute-1 systemd[1]: netns-placeholder.service: Deactivated successfully.
Oct  9 09:43:47 compute-1 systemd[1]: Finished Create netns directory.
Oct  9 09:43:48 compute-1 ovn_controller[62080]: 2025-10-09T09:43:48Z|00025|memory|INFO|16256 kB peak resident set size after 30.1 seconds
Oct  9 09:43:48 compute-1 ovn_controller[62080]: 2025-10-09T09:43:48Z|00026|memory|INFO|idl-cells-OVN_Southbound:273 idl-cells-Open_vSwitch:642 ofctrl_desired_flow_usage-KB:7 ofctrl_installed_flow_usage-KB:5 ofctrl_sb_flow_ref_usage-KB:3
Oct  9 09:43:48 compute-1 podman[68299]: 2025-10-09 09:43:48.281764771 +0000 UTC m=+0.066003149 container health_status 36bf385830b85e085dc3ff9729118ef141f0f1a0fdc2513f7947ab6ab795421e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Oct  9 09:43:48 compute-1 python3.9[68345]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  9 09:43:48 compute-1 python3.9[68503]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ovn_metadata_agent/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  9 09:43:48 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:43:48 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:43:48 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:43:48.926 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:43:49 compute-1 python3.9[68651]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/ovn_metadata_agent/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1760003028.5825226-960-219643761002661/.source _original_basename=healthcheck follow=False checksum=898a5a1fcd473cf731177fc866e3bd7ebf20a131 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Oct  9 09:43:49 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:43:49 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:43:49 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:43:49.763 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:43:50 compute-1 python3.9[68804]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct  9 09:43:50 compute-1 python3.9[68956]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/ovn_metadata_agent.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  9 09:43:50 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:43:50 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:43:50 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:43:50 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:43:50.928 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:43:50 compute-1 python3.9[69079]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/ovn_metadata_agent.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1760003030.2172585-1035-132523593094209/.source.json _original_basename=.tvm77msd follow=False checksum=a908ef151ded3a33ae6c9ac8be72a35e5e33b9dc backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:43:51 compute-1 python3.9[69231]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:43:51 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:43:51 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:43:51 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:43:51.765 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:43:52 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:43:52 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:43:52 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:43:52.930 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:43:53 compute-1 python3.9[69659]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent config_pattern=*.json debug=False
Oct  9 09:43:53 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:43:53 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:43:53 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:43:53.767 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:43:53 compute-1 python3.9[69812]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Oct  9 09:43:54 compute-1 python3.9[69964]: ansible-containers.podman.podman_container_info Invoked with executable=podman name=None
Oct  9 09:43:54 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:43:54 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:43:54 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:43:54.933 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:43:55 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:43:55 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:43:55 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 09:43:55 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:43:55.769 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 09:43:56 compute-1 python3[70136]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent config_id=ovn_metadata_agent config_overrides={} config_patterns=*.json log_base_path=/var/log/containers/stdouts debug=False
Oct  9 09:43:56 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:43:56 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:43:56 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:43:56.935 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:43:57 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:43:57 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:43:57 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:43:57.771 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:43:58 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:43:58 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:43:58 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:43:58.937 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:43:59 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:43:59 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:43:59 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:43:59.774 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:44:00 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:44:00 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:44:00 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:44:00 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:44:00.938 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:44:01 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:44:01 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:44:01 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:44:01.776 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:44:02 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:44:02 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000009s ======
Oct  9 09:44:02 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:44:02.940 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Oct  9 09:44:03 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:44:03 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:44:03 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:44:03.779 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:44:04 compute-1 podman[70147]: 2025-10-09 09:44:04.132091966 +0000 UTC m=+7.959278431 image pull 26280da617d52ac64ac1fa9a18a315d65ac237c1373028f8064008a821dbfd8d quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7
Oct  9 09:44:04 compute-1 podman[70251]: 2025-10-09 09:44:04.224390164 +0000 UTC m=+0.027858156 container create 5e1150c93314d5b38815fc8130e03b03cc91771cd442ce724d63cd33a7373f75 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, tcib_managed=true, org.label-schema.build-date=20251001, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team)
Oct  9 09:44:04 compute-1 podman[70251]: 2025-10-09 09:44:04.211214581 +0000 UTC m=+0.014682604 image pull 26280da617d52ac64ac1fa9a18a315d65ac237c1373028f8064008a821dbfd8d quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7
Oct  9 09:44:04 compute-1 python3[70136]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name ovn_metadata_agent --cgroupns=host --conmon-pidfile /run/ovn_metadata_agent.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env EDPM_CONFIG_HASH=0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d --healthcheck-command /openstack/healthcheck --label config_id=ovn_metadata_agent --label container_name=ovn_metadata_agent --label managed_by=edpm_ansible --label config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']} --log-driver journald --log-level info --network host --pid host --privileged=True --user root --volume /run/openvswitch:/run/openvswitch:z --volume /var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z --volume /run/netns:/run/netns:shared --volume /var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/neutron:/var/lib/neutron:shared,z --volume /var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro --volume /var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro --volume /var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z --volume /var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z --volume /var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z --volume /var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7
Oct  9 09:44:04 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:44:04 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000013s ======
Oct  9 09:44:04 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:44:04.942 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000013s
Oct  9 09:44:05 compute-1 python3.9[70432]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  9 09:44:05 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:44:05 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:44:05 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:44:05 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:44:05.782 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:44:06 compute-1 python3.9[70586]: ansible-file Invoked with path=/etc/systemd/system/edpm_ovn_metadata_agent.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:44:06 compute-1 python3.9[70662]: ansible-stat Invoked with path=/etc/systemd/system/edpm_ovn_metadata_agent_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  9 09:44:06 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:44:06 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000013s ======
Oct  9 09:44:06 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:44:06.945 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000013s
Oct  9 09:44:06 compute-1 python3.9[70813]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1760003046.4698431-1299-122621289515411/source dest=/etc/systemd/system/edpm_ovn_metadata_agent.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:44:07 compute-1 python3.9[70889]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Oct  9 09:44:07 compute-1 systemd[1]: Reloading.
Oct  9 09:44:07 compute-1 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  9 09:44:07 compute-1 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  9 09:44:07 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:44:07 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:44:07 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:44:07.785 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:44:08 compute-1 python3.9[71001]: ansible-systemd Invoked with state=restarted name=edpm_ovn_metadata_agent.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  9 09:44:08 compute-1 systemd[1]: Reloading.
Oct  9 09:44:08 compute-1 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  9 09:44:08 compute-1 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  9 09:44:08 compute-1 systemd[1]: Starting ovn_metadata_agent container...
Oct  9 09:44:08 compute-1 systemd[1]: Started libcrun container.
Oct  9 09:44:08 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/02cec53ae44fdafe8f7dd68392008e8f9d7af64c1680de645755463dd07383fe/merged/etc/neutron.conf.d supports timestamps until 2038 (0x7fffffff)
Oct  9 09:44:08 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/02cec53ae44fdafe8f7dd68392008e8f9d7af64c1680de645755463dd07383fe/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct  9 09:44:08 compute-1 systemd[1]: Started /usr/bin/podman healthcheck run 5e1150c93314d5b38815fc8130e03b03cc91771cd442ce724d63cd33a7373f75.
Oct  9 09:44:08 compute-1 podman[71042]: 2025-10-09 09:44:08.425279174 +0000 UTC m=+0.080152994 container init 5e1150c93314d5b38815fc8130e03b03cc91771cd442ce724d63cd33a7373f75 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251001)
Oct  9 09:44:08 compute-1 ovn_metadata_agent[71054]: + sudo -E kolla_set_configs
Oct  9 09:44:08 compute-1 podman[71042]: 2025-10-09 09:44:08.446121063 +0000 UTC m=+0.100994861 container start 5e1150c93314d5b38815fc8130e03b03cc91771cd442ce724d63cd33a7373f75 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, tcib_managed=true, config_id=ovn_metadata_agent, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct  9 09:44:08 compute-1 edpm-start-podman-container[71042]: ovn_metadata_agent
Oct  9 09:44:08 compute-1 ovn_metadata_agent[71054]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Oct  9 09:44:08 compute-1 ovn_metadata_agent[71054]: INFO:__main__:Validating config file
Oct  9 09:44:08 compute-1 ovn_metadata_agent[71054]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Oct  9 09:44:08 compute-1 ovn_metadata_agent[71054]: INFO:__main__:Copying service configuration files
Oct  9 09:44:08 compute-1 ovn_metadata_agent[71054]: INFO:__main__:Deleting /etc/neutron/rootwrap.conf
Oct  9 09:44:08 compute-1 ovn_metadata_agent[71054]: INFO:__main__:Copying /etc/neutron.conf.d/01-rootwrap.conf to /etc/neutron/rootwrap.conf
Oct  9 09:44:08 compute-1 ovn_metadata_agent[71054]: INFO:__main__:Setting permission for /etc/neutron/rootwrap.conf
Oct  9 09:44:08 compute-1 ovn_metadata_agent[71054]: INFO:__main__:Writing out command to execute
Oct  9 09:44:08 compute-1 ovn_metadata_agent[71054]: INFO:__main__:Setting permission for /var/lib/neutron
Oct  9 09:44:08 compute-1 ovn_metadata_agent[71054]: INFO:__main__:Setting permission for /var/lib/neutron/kill_scripts
Oct  9 09:44:08 compute-1 ovn_metadata_agent[71054]: INFO:__main__:Setting permission for /var/lib/neutron/ovn-metadata-proxy
Oct  9 09:44:08 compute-1 ovn_metadata_agent[71054]: INFO:__main__:Setting permission for /var/lib/neutron/external
Oct  9 09:44:08 compute-1 ovn_metadata_agent[71054]: INFO:__main__:Setting permission for /var/lib/neutron/ovn_metadata_haproxy_wrapper
Oct  9 09:44:08 compute-1 ovn_metadata_agent[71054]: INFO:__main__:Setting permission for /var/lib/neutron/kill_scripts/haproxy-kill
Oct  9 09:44:08 compute-1 ovn_metadata_agent[71054]: INFO:__main__:Setting permission for /var/lib/neutron/external/pids
Oct  9 09:44:08 compute-1 edpm-start-podman-container[71041]: Creating additional drop-in dependency for "ovn_metadata_agent" (5e1150c93314d5b38815fc8130e03b03cc91771cd442ce724d63cd33a7373f75)
Oct  9 09:44:08 compute-1 podman[71061]: 2025-10-09 09:44:08.492337509 +0000 UTC m=+0.038912304 container health_status 5e1150c93314d5b38815fc8130e03b03cc91771cd442ce724d63cd33a7373f75 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS, 
tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent)
Oct  9 09:44:08 compute-1 ovn_metadata_agent[71054]: ++ cat /run_command
Oct  9 09:44:08 compute-1 ovn_metadata_agent[71054]: + CMD=neutron-ovn-metadata-agent
Oct  9 09:44:08 compute-1 ovn_metadata_agent[71054]: + ARGS=
Oct  9 09:44:08 compute-1 ovn_metadata_agent[71054]: + sudo kolla_copy_cacerts
Oct  9 09:44:08 compute-1 systemd[1]: Reloading.
Oct  9 09:44:08 compute-1 ovn_metadata_agent[71054]: + [[ ! -n '' ]]
Oct  9 09:44:08 compute-1 ovn_metadata_agent[71054]: + . kolla_extend_start
Oct  9 09:44:08 compute-1 ovn_metadata_agent[71054]: Running command: 'neutron-ovn-metadata-agent'
Oct  9 09:44:08 compute-1 ovn_metadata_agent[71054]: + echo 'Running command: '\''neutron-ovn-metadata-agent'\'''
Oct  9 09:44:08 compute-1 ovn_metadata_agent[71054]: + umask 0022
Oct  9 09:44:08 compute-1 ovn_metadata_agent[71054]: + exec neutron-ovn-metadata-agent
Oct  9 09:44:08 compute-1 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  9 09:44:08 compute-1 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  9 09:44:08 compute-1 systemd[1]: Started ovn_metadata_agent container.
Oct  9 09:44:08 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:44:08 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:44:08 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:44:08.948 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:44:09 compute-1 systemd[1]: session-35.scope: Deactivated successfully.
Oct  9 09:44:09 compute-1 systemd[1]: session-35.scope: Consumed 40.304s CPU time.
Oct  9 09:44:09 compute-1 systemd-logind[798]: Session 35 logged out. Waiting for processes to exit.
Oct  9 09:44:09 compute-1 systemd-logind[798]: Removed session 35.
Oct  9 09:44:09 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:44:09 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:44:09 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:44:09.787 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:44:09 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:09.990 71059 INFO neutron.common.config [-] Logging enabled!#033[00m
Oct  9 09:44:09 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:09.990 71059 INFO neutron.common.config [-] /usr/bin/neutron-ovn-metadata-agent version 22.2.2.dev43#033[00m
Oct  9 09:44:09 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:09.990 71059 DEBUG neutron.common.config [-] command line: /usr/bin/neutron-ovn-metadata-agent setup_logging /usr/lib/python3.9/site-packages/neutron/common/config.py:123#033[00m
Oct  9 09:44:09 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:09.991 71059 DEBUG neutron.agent.ovn.metadata_agent [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589#033[00m
Oct  9 09:44:09 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:09.991 71059 DEBUG neutron.agent.ovn.metadata_agent [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590#033[00m
Oct  9 09:44:09 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:09.991 71059 DEBUG neutron.agent.ovn.metadata_agent [-] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591#033[00m
Oct  9 09:44:09 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:09.991 71059 DEBUG neutron.agent.ovn.metadata_agent [-] config files: ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592#033[00m
Oct  9 09:44:09 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:09.991 71059 DEBUG neutron.agent.ovn.metadata_agent [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594#033[00m
Oct  9 09:44:09 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:09.991 71059 DEBUG neutron.agent.ovn.metadata_agent [-] agent_down_time                = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:44:09 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:09.992 71059 DEBUG neutron.agent.ovn.metadata_agent [-] allow_bulk                     = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:44:09 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:09.992 71059 DEBUG neutron.agent.ovn.metadata_agent [-] api_extensions_path            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:44:09 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:09.992 71059 DEBUG neutron.agent.ovn.metadata_agent [-] api_paste_config               = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:44:09 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:09.992 71059 DEBUG neutron.agent.ovn.metadata_agent [-] api_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:44:09 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:09.992 71059 DEBUG neutron.agent.ovn.metadata_agent [-] auth_ca_cert                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:44:09 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:09.992 71059 DEBUG neutron.agent.ovn.metadata_agent [-] auth_strategy                  = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:44:09 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:09.992 71059 DEBUG neutron.agent.ovn.metadata_agent [-] backlog                        = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:44:09 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:09.992 71059 DEBUG neutron.agent.ovn.metadata_agent [-] base_mac                       = fa:16:3e:00:00:00 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:44:09 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:09.992 71059 DEBUG neutron.agent.ovn.metadata_agent [-] bind_host                      = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:44:09 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:09.992 71059 DEBUG neutron.agent.ovn.metadata_agent [-] bind_port                      = 9696 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:44:09 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:09.993 71059 DEBUG neutron.agent.ovn.metadata_agent [-] client_socket_timeout          = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:44:09 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:09.993 71059 DEBUG neutron.agent.ovn.metadata_agent [-] config_dir                     = ['/etc/neutron.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:44:09 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:09.993 71059 DEBUG neutron.agent.ovn.metadata_agent [-] config_file                    = ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:44:09 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:09.993 71059 DEBUG neutron.agent.ovn.metadata_agent [-] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:44:09 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:09.993 71059 DEBUG neutron.agent.ovn.metadata_agent [-] control_exchange               = neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:44:09 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:09.993 71059 DEBUG neutron.agent.ovn.metadata_agent [-] core_plugin                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:44:09 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:09.993 71059 DEBUG neutron.agent.ovn.metadata_agent [-] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:44:09 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:09.993 71059 DEBUG neutron.agent.ovn.metadata_agent [-] default_availability_zones     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:44:09 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:09.993 71059 DEBUG neutron.agent.ovn.metadata_agent [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'OFPHandler=INFO', 'OfctlService=INFO', 'os_ken.base.app_manager=INFO', 'os_ken.controller.controller=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:44:09 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:09.993 71059 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_agent_notification        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:44:09 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:09.994 71059 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_lease_duration            = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:44:09 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:09.994 71059 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_load_type                 = networks log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:44:09 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:09.994 71059 DEBUG neutron.agent.ovn.metadata_agent [-] dns_domain                     = openstacklocal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:44:09 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:09.994 71059 DEBUG neutron.agent.ovn.metadata_agent [-] enable_new_agents              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:44:09 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:09.994 71059 DEBUG neutron.agent.ovn.metadata_agent [-] enable_traditional_dhcp        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:44:09 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:09.994 71059 DEBUG neutron.agent.ovn.metadata_agent [-] external_dns_driver            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:44:09 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:09.994 71059 DEBUG neutron.agent.ovn.metadata_agent [-] external_pids                  = /var/lib/neutron/external/pids log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:44:09 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:09.994 71059 DEBUG neutron.agent.ovn.metadata_agent [-] filter_validation              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:44:09 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:09.995 71059 DEBUG neutron.agent.ovn.metadata_agent [-] global_physnet_mtu             = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:44:09 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:09.995 71059 DEBUG neutron.agent.ovn.metadata_agent [-] host                           = compute-1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:44:09 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:09.995 71059 DEBUG neutron.agent.ovn.metadata_agent [-] http_retries                   = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:44:09 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:09.995 71059 DEBUG neutron.agent.ovn.metadata_agent [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:44:09 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:09.995 71059 DEBUG neutron.agent.ovn.metadata_agent [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:44:09 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:09.995 71059 DEBUG neutron.agent.ovn.metadata_agent [-] ipam_driver                    = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:44:09 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:09.995 71059 DEBUG neutron.agent.ovn.metadata_agent [-] ipv6_pd_enabled                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:44:09 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:09.995 71059 DEBUG neutron.agent.ovn.metadata_agent [-] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:44:09 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:09.995 71059 DEBUG neutron.agent.ovn.metadata_agent [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:44:09 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:09.995 71059 DEBUG neutron.agent.ovn.metadata_agent [-] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:44:09 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:09.996 71059 DEBUG neutron.agent.ovn.metadata_agent [-] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:44:09 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:09.996 71059 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:44:09 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:09.996 71059 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:44:09 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:09.996 71059 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotation_type              = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:44:09 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:09.996 71059 DEBUG neutron.agent.ovn.metadata_agent [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:44:09 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:09.996 71059 DEBUG neutron.agent.ovn.metadata_agent [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:44:09 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:09.996 71059 DEBUG neutron.agent.ovn.metadata_agent [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:44:09 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:09.996 71059 DEBUG neutron.agent.ovn.metadata_agent [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:44:09 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:09.996 71059 DEBUG neutron.agent.ovn.metadata_agent [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:44:09 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:09.996 71059 DEBUG neutron.agent.ovn.metadata_agent [-] max_dns_nameservers            = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:44:09 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:09.997 71059 DEBUG neutron.agent.ovn.metadata_agent [-] max_header_line                = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:44:09 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:09.997 71059 DEBUG neutron.agent.ovn.metadata_agent [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:44:09 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:09.997 71059 DEBUG neutron.agent.ovn.metadata_agent [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:44:09 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:09.997 71059 DEBUG neutron.agent.ovn.metadata_agent [-] max_subnet_host_routes         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:44:09 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:09.997 71059 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_backlog               = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:44:09 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:09.997 71059 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_group           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:44:09 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:09.997 71059 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_shared_secret   = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:44:09 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:09.997 71059 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_socket          = /var/lib/neutron/metadata_proxy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:44:09 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:09.997 71059 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_socket_mode     = deduce log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:44:09 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:09.997 71059 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_user            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:44:09 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:09.998 71059 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_workers               = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:44:09 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:09.998 71059 DEBUG neutron.agent.ovn.metadata_agent [-] network_link_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:44:09 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:09.998 71059 DEBUG neutron.agent.ovn.metadata_agent [-] notify_nova_on_port_data_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:44:09 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:09.998 71059 DEBUG neutron.agent.ovn.metadata_agent [-] notify_nova_on_port_status_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:44:09 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:09.998 71059 DEBUG neutron.agent.ovn.metadata_agent [-] nova_client_cert               =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:44:09 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:09.998 71059 DEBUG neutron.agent.ovn.metadata_agent [-] nova_client_priv_key           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:44:09 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:09.998 71059 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_host             = nova-metadata-internal.openstack.svc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:44:09 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:09.998 71059 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_insecure         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:44:09 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:09.998 71059 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_port             = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:44:09 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:09.999 71059 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_protocol         = https log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:44:09 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:09.999 71059 DEBUG neutron.agent.ovn.metadata_agent [-] pagination_max_limit           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:44:09 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:09.999 71059 DEBUG neutron.agent.ovn.metadata_agent [-] periodic_fuzzy_delay           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:44:09 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:09.999 71059 DEBUG neutron.agent.ovn.metadata_agent [-] periodic_interval              = 40 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:44:09 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:09.999 71059 DEBUG neutron.agent.ovn.metadata_agent [-] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:44:09 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:09.999 71059 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:44:09 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:09.999 71059 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:44:09 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:09.999 71059 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:44:09 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:09.999 71059 DEBUG neutron.agent.ovn.metadata_agent [-] retry_until_window             = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:44:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:09.999 71059 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_resources_processing_step  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:44:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:10.000 71059 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_response_max_timeout       = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:44:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:10.000 71059 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_state_report_workers       = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:44:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:10.000 71059 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:44:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:10.000 71059 DEBUG neutron.agent.ovn.metadata_agent [-] send_events_interval           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:44:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:10.000 71059 DEBUG neutron.agent.ovn.metadata_agent [-] service_plugins                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:44:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:10.000 71059 DEBUG neutron.agent.ovn.metadata_agent [-] setproctitle                   = on log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:44:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:10.000 71059 DEBUG neutron.agent.ovn.metadata_agent [-] state_path                     = /var/lib/neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:44:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:10.000 71059 DEBUG neutron.agent.ovn.metadata_agent [-] syslog_log_facility            = syslog log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:44:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:10.000 71059 DEBUG neutron.agent.ovn.metadata_agent [-] tcp_keepidle                   = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:44:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:10.000 71059 DEBUG neutron.agent.ovn.metadata_agent [-] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:44:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:10.001 71059 DEBUG neutron.agent.ovn.metadata_agent [-] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:44:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:10.001 71059 DEBUG neutron.agent.ovn.metadata_agent [-] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:44:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:10.001 71059 DEBUG neutron.agent.ovn.metadata_agent [-] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:44:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:10.001 71059 DEBUG neutron.agent.ovn.metadata_agent [-] use_ssl                        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:44:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:10.001 71059 DEBUG neutron.agent.ovn.metadata_agent [-] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:44:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:10.001 71059 DEBUG neutron.agent.ovn.metadata_agent [-] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:44:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:10.001 71059 DEBUG neutron.agent.ovn.metadata_agent [-] vlan_transparent               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:44:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:10.001 71059 DEBUG neutron.agent.ovn.metadata_agent [-] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:44:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:10.001 71059 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_default_pool_size         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:44:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:10.001 71059 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:44:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:10.001 71059 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_log_format                = %(client_ip)s "%(request_line)s" status: %(status_code)s  len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:44:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:10.002 71059 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_server_debug              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:44:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:10.002 71059 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:10.002 71059 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_concurrency.lock_path     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:10.002 71059 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.connection_string     = messaging:// log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:10.002 71059 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.enabled               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:10.002 71059 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_doc_type           = notification log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:10.002 71059 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_scroll_size        = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:10.002 71059 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_scroll_time        = 2m log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:10.002 71059 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.filter_error_trace    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:10.003 71059 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.hmac_keys             = SECRET_KEY log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:10.003 71059 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.sentinel_service_name = mymaster log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:10.003 71059 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.socket_timeout        = 0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:10.003 71059 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.trace_sqlalchemy      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:10.003 71059 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.enforce_new_defaults = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:10.003 71059 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.enforce_scope      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:10.003 71059 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:10.003 71059 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:10.003 71059 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:10.004 71059 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:10.004 71059 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:10.004 71059 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:10.004 71059 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:10.004 71059 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:10.004 71059 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:10.004 71059 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:10.004 71059 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:10.004 71059 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:10.004 71059 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:10.005 71059 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:10.005 71059 DEBUG neutron.agent.ovn.metadata_agent [-] service_providers.service_provider = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:10.005 71059 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.capabilities           = [21, 12, 1, 2, 19] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:10.005 71059 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.group                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:10.005 71059 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.helper_command         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:10.005 71059 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.logger_name            = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:10.005 71059 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.thread_pool_size       = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:10.005 71059 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.user                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:10.005 71059 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:10.006 71059 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.group     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:10.006 71059 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:10.006 71059 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:10.006 71059 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.thread_pool_size = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:10.006 71059 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.user      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:10.006 71059 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:10.006 71059 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:10.006 71059 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:10.006 71059 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:10.006 71059 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.thread_pool_size = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:10.007 71059 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:10.007 71059 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.capabilities = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:10.007 71059 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:10.007 71059 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:10.007 71059 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:10.007 71059 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.thread_pool_size = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:10.007 71059 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:10.007 71059 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:10.007 71059 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:10.007 71059 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:10.008 71059 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:10.008 71059 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.thread_pool_size = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:10.008 71059 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:10.008 71059 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.capabilities      = [12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:10.008 71059 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.group             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:10.008 71059 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.helper_command    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:10.008 71059 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.logger_name       = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:10.008 71059 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.thread_pool_size  = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:10.008 71059 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.user              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:10.009 71059 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.check_child_processes_action = respawn log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:10.009 71059 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.check_child_processes_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:10.009 71059 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.comment_iptables_rules   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:44:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:10.009 71059 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.debug_iptables_rules     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:44:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:10.009 71059 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.kill_scripts_path        = /etc/neutron/kill_scripts/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:44:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:10.009 71059 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.root_helper              = sudo neutron-rootwrap /etc/neutron/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:44:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:10.009 71059 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.root_helper_daemon       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:44:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:10.009 71059 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.use_helper_for_ns_read   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:44:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:10.009 71059 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.use_random_fully         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:44:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:10.009 71059 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:44:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:10.010 71059 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.default_quota           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:44:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:10.010 71059 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_driver            = neutron.db.quota.driver_nolock.DbQuotaNoLockDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:44:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:10.010 71059 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_network           = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:44:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:10.010 71059 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_port              = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:44:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:10.010 71059 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_security_group    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:44:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:10.010 71059 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_security_group_rule = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:44:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:10.010 71059 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_subnet            = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:44:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:10.010 71059 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.track_quota_usage       = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:44:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:10.010 71059 DEBUG neutron.agent.ovn.metadata_agent [-] nova.auth_section              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:44:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:10.010 71059 DEBUG neutron.agent.ovn.metadata_agent [-] nova.auth_type                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:44:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:10.011 71059 DEBUG neutron.agent.ovn.metadata_agent [-] nova.cafile                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:44:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:10.011 71059 DEBUG neutron.agent.ovn.metadata_agent [-] nova.certfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:44:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:10.011 71059 DEBUG neutron.agent.ovn.metadata_agent [-] nova.collect_timing            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:44:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:10.011 71059 DEBUG neutron.agent.ovn.metadata_agent [-] nova.endpoint_type             = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:44:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:10.011 71059 DEBUG neutron.agent.ovn.metadata_agent [-] nova.insecure                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:44:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:10.011 71059 DEBUG neutron.agent.ovn.metadata_agent [-] nova.keyfile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:44:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:10.011 71059 DEBUG neutron.agent.ovn.metadata_agent [-] nova.region_name               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:44:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:10.011 71059 DEBUG neutron.agent.ovn.metadata_agent [-] nova.split_loggers             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:44:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:10.011 71059 DEBUG neutron.agent.ovn.metadata_agent [-] nova.timeout                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:44:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:10.011 71059 DEBUG neutron.agent.ovn.metadata_agent [-] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:44:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:10.012 71059 DEBUG neutron.agent.ovn.metadata_agent [-] placement.auth_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:44:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:10.012 71059 DEBUG neutron.agent.ovn.metadata_agent [-] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:44:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:10.012 71059 DEBUG neutron.agent.ovn.metadata_agent [-] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:44:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:10.012 71059 DEBUG neutron.agent.ovn.metadata_agent [-] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:44:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:10.012 71059 DEBUG neutron.agent.ovn.metadata_agent [-] placement.endpoint_type        = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:44:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:10.012 71059 DEBUG neutron.agent.ovn.metadata_agent [-] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:44:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:10.012 71059 DEBUG neutron.agent.ovn.metadata_agent [-] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:44:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:10.012 71059 DEBUG neutron.agent.ovn.metadata_agent [-] placement.region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:44:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:10.012 71059 DEBUG neutron.agent.ovn.metadata_agent [-] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:44:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:10.013 71059 DEBUG neutron.agent.ovn.metadata_agent [-] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:44:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:10.013 71059 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:44:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:10.013 71059 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:44:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:10.013 71059 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:44:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:10.013 71059 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:44:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:10.013 71059 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:44:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:10.013 71059 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:44:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:10.013 71059 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:44:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:10.013 71059 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.enable_notifications    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:44:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:10.013 71059 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:44:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:10.014 71059 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:44:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:10.014 71059 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.interface               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:44:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:10.014 71059 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:44:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:10.014 71059 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:44:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:10.014 71059 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:44:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:10.014 71059 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:44:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:10.014 71059 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:44:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:10.014 71059 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.service_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:44:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:10.014 71059 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:44:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:10.014 71059 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:44:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:10.014 71059 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:44:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:10.015 71059 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:44:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:10.015 71059 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.valid_interfaces        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:44:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:10.015 71059 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:44:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:10.015 71059 DEBUG neutron.agent.ovn.metadata_agent [-] cli_script.dry_run             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:44:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:10.015 71059 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.allow_stateless_action_supported = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:44:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:10.015 71059 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.dhcp_default_lease_time    = 43200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:44:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:10.015 71059 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.disable_ovn_dhcp_for_baremetal_ports = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:44:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:10.015 71059 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.dns_servers                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:44:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:10.015 71059 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.enable_distributed_floating_ip = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:44:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:10.015 71059 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.neutron_sync_mode          = log log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:44:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:10.016 71059 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_dhcp4_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:44:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:10.016 71059 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_dhcp6_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:44:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:10.016 71059 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_emit_need_to_frag      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:44:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:10.016 71059 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_l3_mode                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:44:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:10.016 71059 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_l3_scheduler           = leastloaded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:44:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:10.016 71059 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_metadata_enabled       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:44:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:10.016 71059 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_ca_cert             =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:44:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:10.016 71059 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_certificate         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:44:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:10.016 71059 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_connection          = tcp:127.0.0.1:6641 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:44:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:10.016 71059 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_private_key         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:44:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:10.017 71059 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_ca_cert             = /etc/pki/tls/certs/ovndbca.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:44:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:10.017 71059 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_certificate         = /etc/pki/tls/certs/ovndb.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:44:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:10.017 71059 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_connection          = ssl:ovsdbserver-sb.openstack.svc:6642 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:44:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:10.017 71059 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_private_key         = /etc/pki/tls/private/ovndb.key log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:44:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:10.017 71059 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:44:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:10.017 71059 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_log_level            = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:44:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:10.017 71059 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_probe_interval       = 60000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:44:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:10.017 71059 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_retry_max_interval   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:44:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:10.017 71059 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.vhost_sock_dir             = /var/run/openvswitch log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:44:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:10.017 71059 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.vif_type                   = ovs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:44:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:10.018 71059 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.bridge_mac_table_size      = 50000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:44:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:10.018 71059 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.igmp_snooping_enable       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:44:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:10.018 71059 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.ovsdb_timeout              = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:44:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:10.018 71059 DEBUG neutron.agent.ovn.metadata_agent [-] ovs.ovsdb_connection           = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:44:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:10.018 71059 DEBUG neutron.agent.ovn.metadata_agent [-] ovs.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:44:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:10.018 71059 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:44:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:10.018 71059 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.amqp_durable_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:44:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:10.018 71059 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:44:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:10.018 71059 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:44:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:10.018 71059 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:44:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:10.019 71059 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:44:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:10.019 71059 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:44:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:10.019 71059 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:44:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:10.019 71059 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:44:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:10.019 71059 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:44:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:10.019 71059 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:44:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:10.019 71059 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:44:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:10.019 71059 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:44:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:10.019 71059 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:44:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:10.019 71059 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:44:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:10.020 71059 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:44:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:10.020 71059 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:44:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:10.020 71059 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:44:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:10.020 71059 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:44:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:10.020 71059 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:44:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:10.020 71059 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_queue = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:44:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:10.020 71059 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:44:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:10.021 71059 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:44:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:10.021 71059 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:44:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:10.021 71059 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:44:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:10.021 71059 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:44:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:10.021 71059 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:44:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:10.021 71059 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:44:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:10.022 71059 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:44:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:10.022 71059 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:44:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:10.022 71059 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:44:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:10.022 71059 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.driver = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:44:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:10.022 71059 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:44:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:10.022 71059 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:44:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:10.022 71059 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:44:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:10.022 71059 DEBUG neutron.agent.ovn.metadata_agent [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
Oct  9 09:44:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:10.030 71059 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Bridge.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Oct  9 09:44:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:10.030 71059 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Port.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Oct  9 09:44:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:10.030 71059 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Interface.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Oct  9 09:44:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:10.031 71059 INFO ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: connecting...
Oct  9 09:44:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:10.031 71059 INFO ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: connected
Oct  9 09:44:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:10.045 71059 DEBUG neutron.agent.ovn.metadata.agent [-] Loaded chassis name 1479fb1d-afaa-427a-bdce-40294d3573d2 (UUID: 1479fb1d-afaa-427a-bdce-40294d3573d2) and ovn bridge br-int. _load_config /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:309
Oct  9 09:44:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:10.064 71059 INFO neutron.agent.ovn.metadata.ovsdb [-] Getting OvsdbSbOvnIdl for MetadataAgent with retry
Oct  9 09:44:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:10.064 71059 DEBUG ovsdbapp.backend.ovs_idl [-] Created lookup_table index Chassis.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:87
Oct  9 09:44:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:10.064 71059 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Datapath_Binding.tunnel_key autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Oct  9 09:44:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:10.064 71059 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Chassis_Private.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Oct  9 09:44:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:10.066 71059 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connecting...
Oct  9 09:44:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:10.071 71059 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connected
Oct  9 09:44:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:10.075 71059 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched CREATE: ChassisPrivateCreateEvent(events=('create',), table='Chassis_Private', conditions=(('name', '=', '1479fb1d-afaa-427a-bdce-40294d3573d2'),), old_conditions=None), priority=20 to row=Chassis_Private(chassis=[<ovs.db.idl.Row object at 0x7fcc797b4850>], external_ids={}, name=1479fb1d-afaa-427a-bdce-40294d3573d2, nb_cfg_timestamp=1760003006163, nb_cfg=1) old= matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct  9 09:44:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:10.076 71059 DEBUG neutron_lib.callbacks.manager [-] Subscribe: <bound method MetadataProxyHandler.post_fork_initialize of <neutron.agent.ovn.metadata.server.MetadataProxyHandler object at 0x7fcc797b2f40>> process after_init 55550000, False subscribe /usr/lib/python3.9/site-packages/neutron_lib/callbacks/manager.py:52
Oct  9 09:44:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:10.076 71059 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct  9 09:44:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:10.077 71059 DEBUG oslo_concurrency.lockutils [-] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  9 09:44:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:10.077 71059 DEBUG oslo_concurrency.lockutils [-] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  9 09:44:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:10.077 71059 INFO oslo_service.service [-] Starting 1 workers#033[00m
Oct  9 09:44:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:10.080 71059 DEBUG oslo_service.service [-] Started child 71254 _start_child /usr/lib/python3.9/site-packages/oslo_service/service.py:575#033[00m
Oct  9 09:44:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:10.083 71059 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/neutron/neutron.conf', '--config-dir', '/etc/neutron.conf.d', '--privsep_context', 'neutron.privileged.namespace_cmd', '--privsep_sock_path', '/tmp/tmpzcuib4u3/privsep.sock']#033[00m
Oct  9 09:44:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:10.083 71254 DEBUG neutron_lib.callbacks.manager [-] Publish callbacks ['neutron.agent.ovn.metadata.server.MetadataProxyHandler.post_fork_initialize-889826'] for process (None), after_init _notify_loop /usr/lib/python3.9/site-packages/neutron_lib/callbacks/manager.py:184#033[00m
Oct  9 09:44:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:10.105 71254 INFO neutron.agent.ovn.metadata.ovsdb [-] Getting OvsdbSbOvnIdl for MetadataAgent with retry#033[00m
Oct  9 09:44:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:10.106 71254 DEBUG ovsdbapp.backend.ovs_idl [-] Created lookup_table index Chassis.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:87#033[00m
Oct  9 09:44:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:10.106 71254 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Datapath_Binding.tunnel_key autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m
Oct  9 09:44:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:10.108 71254 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connecting...#033[00m
Oct  9 09:44:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:10.113 71254 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connected#033[00m
Oct  9 09:44:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:10.118 71254 INFO eventlet.wsgi.server [-] (71254) wsgi starting up on http:/var/lib/neutron/metadata_proxy#033[00m
Oct  9 09:44:10 compute-1 kernel: capability: warning: `privsep-helper' uses deprecated v2 capabilities in a way that may be insecure
Oct  9 09:44:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:10.617 71059 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap
Oct  9 09:44:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:10.617 71059 DEBUG oslo.privsep.daemon [-] Accepted privsep connection to /tmp/tmpzcuib4u3/privsep.sock __init__ /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:362
Oct  9 09:44:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:10.527 71273 INFO oslo.privsep.daemon [-] privsep daemon starting
Oct  9 09:44:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:10.530 71273 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Oct  9 09:44:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:10.533 71273 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_SYS_ADMIN/CAP_SYS_ADMIN/none
Oct  9 09:44:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:10.533 71273 INFO oslo.privsep.daemon [-] privsep daemon running as pid 71273
Oct  9 09:44:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:10.619 71273 DEBUG oslo.privsep.daemon [-] privsep: reply[a479a9b4-ee09-41e3-b706-b58d1a813be6]: (2,) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct  9 09:44:10 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:44:10 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:44:10 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:44:10 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:44:10.950 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.019 71273 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "context-manager" by "neutron_lib.db.api._create_context_manager" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  9 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.019 71273 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" acquired by "neutron_lib.db.api._create_context_manager" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  9 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.019 71273 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" "released" by "neutron_lib.db.api._create_context_manager" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  9 09:44:11 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:44:11 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:44:11 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  9 09:44:11 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:44:11 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:44:11 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  9 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.457 71273 DEBUG oslo.privsep.daemon [-] privsep: reply[8d36c56d-d298-4aa2-b36e-f2e30154d0ae]: (4, []) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct  9 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.459 71059 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbAddCommand(_result=None, table=Chassis_Private, record=1479fb1d-afaa-427a-bdce-40294d3573d2, column=external_ids, values=({'neutron:ovn-metadata-id': '71d73966-7ab7-5393-ba2c-b8eed7f232a8'},)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct  9 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.467 71059 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=1479fb1d-afaa-427a-bdce-40294d3573d2, col_values=(('external_ids', {'neutron:ovn-bridge': 'br-int'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct  9 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.472 71059 DEBUG oslo_service.service [-] Full set of CONF: wait /usr/lib/python3.9/site-packages/oslo_service/service.py:649
Oct  9 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.472 71059 DEBUG oslo_service.service [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Oct  9 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.472 71059 DEBUG oslo_service.service [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Oct  9 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.472 71059 DEBUG oslo_service.service [-] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Oct  9 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.472 71059 DEBUG oslo_service.service [-] config files: ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Oct  9 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.472 71059 DEBUG oslo_service.service [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Oct  9 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.472 71059 DEBUG oslo_service.service [-] agent_down_time                = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  9 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.472 71059 DEBUG oslo_service.service [-] allow_bulk                     = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  9 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.472 71059 DEBUG oslo_service.service [-] api_extensions_path            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  9 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.473 71059 DEBUG oslo_service.service [-] api_paste_config               = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  9 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.473 71059 DEBUG oslo_service.service [-] api_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  9 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.473 71059 DEBUG oslo_service.service [-] auth_ca_cert                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  9 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.473 71059 DEBUG oslo_service.service [-] auth_strategy                  = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  9 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.473 71059 DEBUG oslo_service.service [-] backlog                        = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  9 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.473 71059 DEBUG oslo_service.service [-] base_mac                       = fa:16:3e:00:00:00 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  9 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.473 71059 DEBUG oslo_service.service [-] bind_host                      = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  9 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.473 71059 DEBUG oslo_service.service [-] bind_port                      = 9696 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  9 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.473 71059 DEBUG oslo_service.service [-] client_socket_timeout          = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  9 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.474 71059 DEBUG oslo_service.service [-] config_dir                     = ['/etc/neutron.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  9 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.474 71059 DEBUG oslo_service.service [-] config_file                    = ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  9 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.474 71059 DEBUG oslo_service.service [-] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  9 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.474 71059 DEBUG oslo_service.service [-] control_exchange               = neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  9 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.474 71059 DEBUG oslo_service.service [-] core_plugin                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  9 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.474 71059 DEBUG oslo_service.service [-] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  9 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.474 71059 DEBUG oslo_service.service [-] default_availability_zones     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  9 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.474 71059 DEBUG oslo_service.service [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'OFPHandler=INFO', 'OfctlService=INFO', 'os_ken.base.app_manager=INFO', 'os_ken.controller.controller=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  9 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.474 71059 DEBUG oslo_service.service [-] dhcp_agent_notification        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  9 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.475 71059 DEBUG oslo_service.service [-] dhcp_lease_duration            = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  9 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.475 71059 DEBUG oslo_service.service [-] dhcp_load_type                 = networks log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  9 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.475 71059 DEBUG oslo_service.service [-] dns_domain                     = openstacklocal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  9 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.475 71059 DEBUG oslo_service.service [-] enable_new_agents              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  9 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.475 71059 DEBUG oslo_service.service [-] enable_traditional_dhcp        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  9 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.475 71059 DEBUG oslo_service.service [-] external_dns_driver            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  9 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.475 71059 DEBUG oslo_service.service [-] external_pids                  = /var/lib/neutron/external/pids log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  9 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.475 71059 DEBUG oslo_service.service [-] filter_validation              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  9 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.475 71059 DEBUG oslo_service.service [-] global_physnet_mtu             = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  9 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.476 71059 DEBUG oslo_service.service [-] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  9 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.476 71059 DEBUG oslo_service.service [-] host                           = compute-1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  9 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.476 71059 DEBUG oslo_service.service [-] http_retries                   = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  9 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.476 71059 DEBUG oslo_service.service [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  9 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.476 71059 DEBUG oslo_service.service [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  9 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.476 71059 DEBUG oslo_service.service [-] ipam_driver                    = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  9 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.476 71059 DEBUG oslo_service.service [-] ipv6_pd_enabled                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  9 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.476 71059 DEBUG oslo_service.service [-] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  9 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.476 71059 DEBUG oslo_service.service [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  9 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.477 71059 DEBUG oslo_service.service [-] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  9 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.477 71059 DEBUG oslo_service.service [-] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  9 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.477 71059 DEBUG oslo_service.service [-] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  9 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.477 71059 DEBUG oslo_service.service [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  9 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.477 71059 DEBUG oslo_service.service [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  9 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.477 71059 DEBUG oslo_service.service [-] log_rotation_type              = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  9 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.477 71059 DEBUG oslo_service.service [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  9 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.477 71059 DEBUG oslo_service.service [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  9 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.477 71059 DEBUG oslo_service.service [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  9 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.477 71059 DEBUG oslo_service.service [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  9 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.478 71059 DEBUG oslo_service.service [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  9 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.478 71059 DEBUG oslo_service.service [-] max_dns_nameservers            = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  9 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.478 71059 DEBUG oslo_service.service [-] max_header_line                = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  9 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.478 71059 DEBUG oslo_service.service [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  9 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.478 71059 DEBUG oslo_service.service [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  9 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.478 71059 DEBUG oslo_service.service [-] max_subnet_host_routes         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  9 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.478 71059 DEBUG oslo_service.service [-] metadata_backlog               = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  9 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.478 71059 DEBUG oslo_service.service [-] metadata_proxy_group           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  9 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.478 71059 DEBUG oslo_service.service [-] metadata_proxy_shared_secret   = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  9 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.478 71059 DEBUG oslo_service.service [-] metadata_proxy_socket          = /var/lib/neutron/metadata_proxy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  9 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.479 71059 DEBUG oslo_service.service [-] metadata_proxy_socket_mode     = deduce log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  9 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.479 71059 DEBUG oslo_service.service [-] metadata_proxy_user            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  9 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.479 71059 DEBUG oslo_service.service [-] metadata_workers               = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  9 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.479 71059 DEBUG oslo_service.service [-] network_link_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  9 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.479 71059 DEBUG oslo_service.service [-] notify_nova_on_port_data_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  9 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.479 71059 DEBUG oslo_service.service [-] notify_nova_on_port_status_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  9 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.479 71059 DEBUG oslo_service.service [-] nova_client_cert               =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  9 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.479 71059 DEBUG oslo_service.service [-] nova_client_priv_key           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  9 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.479 71059 DEBUG oslo_service.service [-] nova_metadata_host             = nova-metadata-internal.openstack.svc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  9 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.479 71059 DEBUG oslo_service.service [-] nova_metadata_insecure         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  9 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.480 71059 DEBUG oslo_service.service [-] nova_metadata_port             = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  9 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.480 71059 DEBUG oslo_service.service [-] nova_metadata_protocol         = https log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  9 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.480 71059 DEBUG oslo_service.service [-] pagination_max_limit           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  9 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.480 71059 DEBUG oslo_service.service [-] periodic_fuzzy_delay           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  9 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.480 71059 DEBUG oslo_service.service [-] periodic_interval              = 40 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  9 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.480 71059 DEBUG oslo_service.service [-] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  9 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.480 71059 DEBUG oslo_service.service [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  9 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.480 71059 DEBUG oslo_service.service [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  9 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.480 71059 DEBUG oslo_service.service [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  9 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.480 71059 DEBUG oslo_service.service [-] retry_until_window             = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  9 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.481 71059 DEBUG oslo_service.service [-] rpc_resources_processing_step  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  9 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.481 71059 DEBUG oslo_service.service [-] rpc_response_max_timeout       = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  9 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.481 71059 DEBUG oslo_service.service [-] rpc_state_report_workers       = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  9 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.481 71059 DEBUG oslo_service.service [-] rpc_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  9 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.481 71059 DEBUG oslo_service.service [-] send_events_interval           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  9 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.481 71059 DEBUG oslo_service.service [-] service_plugins                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  9 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.481 71059 DEBUG oslo_service.service [-] setproctitle                   = on log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  9 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.481 71059 DEBUG oslo_service.service [-] state_path                     = /var/lib/neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  9 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.481 71059 DEBUG oslo_service.service [-] syslog_log_facility            = syslog log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  9 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.481 71059 DEBUG oslo_service.service [-] tcp_keepidle                   = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  9 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.482 71059 DEBUG oslo_service.service [-] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  9 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.482 71059 DEBUG oslo_service.service [-] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  9 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.482 71059 DEBUG oslo_service.service [-] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  9 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.482 71059 DEBUG oslo_service.service [-] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  9 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.482 71059 DEBUG oslo_service.service [-] use_ssl                        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  9 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.482 71059 DEBUG oslo_service.service [-] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.482 71059 DEBUG oslo_service.service [-] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.482 71059 DEBUG oslo_service.service [-] vlan_transparent               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.482 71059 DEBUG oslo_service.service [-] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.482 71059 DEBUG oslo_service.service [-] wsgi_default_pool_size         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.483 71059 DEBUG oslo_service.service [-] wsgi_keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.483 71059 DEBUG oslo_service.service [-] wsgi_log_format                = %(client_ip)s "%(request_line)s" status: %(status_code)s  len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.483 71059 DEBUG oslo_service.service [-] wsgi_server_debug              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.483 71059 DEBUG oslo_service.service [-] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.483 71059 DEBUG oslo_service.service [-] oslo_concurrency.lock_path     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.483 71059 DEBUG oslo_service.service [-] profiler.connection_string     = messaging:// log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.483 71059 DEBUG oslo_service.service [-] profiler.enabled               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.483 71059 DEBUG oslo_service.service [-] profiler.es_doc_type           = notification log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.483 71059 DEBUG oslo_service.service [-] profiler.es_scroll_size        = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.484 71059 DEBUG oslo_service.service [-] profiler.es_scroll_time        = 2m log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.484 71059 DEBUG oslo_service.service [-] profiler.filter_error_trace    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.484 71059 DEBUG oslo_service.service [-] profiler.hmac_keys             = SECRET_KEY log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.484 71059 DEBUG oslo_service.service [-] profiler.sentinel_service_name = mymaster log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.484 71059 DEBUG oslo_service.service [-] profiler.socket_timeout        = 0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.484 71059 DEBUG oslo_service.service [-] profiler.trace_sqlalchemy      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.484 71059 DEBUG oslo_service.service [-] oslo_policy.enforce_new_defaults = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.484 71059 DEBUG oslo_service.service [-] oslo_policy.enforce_scope      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.484 71059 DEBUG oslo_service.service [-] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.484 71059 DEBUG oslo_service.service [-] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.485 71059 DEBUG oslo_service.service [-] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.485 71059 DEBUG oslo_service.service [-] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.485 71059 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.485 71059 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.485 71059 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.485 71059 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.485 71059 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.485 71059 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.485 71059 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.486 71059 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.486 71059 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.486 71059 DEBUG oslo_service.service [-] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.486 71059 DEBUG oslo_service.service [-] service_providers.service_provider = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.486 71059 DEBUG oslo_service.service [-] privsep.capabilities           = [21, 12, 1, 2, 19] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.486 71059 DEBUG oslo_service.service [-] privsep.group                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.486 71059 DEBUG oslo_service.service [-] privsep.helper_command         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.486 71059 DEBUG oslo_service.service [-] privsep.logger_name            = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.486 71059 DEBUG oslo_service.service [-] privsep.thread_pool_size       = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.486 71059 DEBUG oslo_service.service [-] privsep.user                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.487 71059 DEBUG oslo_service.service [-] privsep_dhcp_release.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.487 71059 DEBUG oslo_service.service [-] privsep_dhcp_release.group     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.487 71059 DEBUG oslo_service.service [-] privsep_dhcp_release.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.487 71059 DEBUG oslo_service.service [-] privsep_dhcp_release.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.487 71059 DEBUG oslo_service.service [-] privsep_dhcp_release.thread_pool_size = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.487 71059 DEBUG oslo_service.service [-] privsep_dhcp_release.user      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.487 71059 DEBUG oslo_service.service [-] privsep_ovs_vsctl.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.487 71059 DEBUG oslo_service.service [-] privsep_ovs_vsctl.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.487 71059 DEBUG oslo_service.service [-] privsep_ovs_vsctl.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.487 71059 DEBUG oslo_service.service [-] privsep_ovs_vsctl.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.488 71059 DEBUG oslo_service.service [-] privsep_ovs_vsctl.thread_pool_size = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.488 71059 DEBUG oslo_service.service [-] privsep_ovs_vsctl.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.488 71059 DEBUG oslo_service.service [-] privsep_namespace.capabilities = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.488 71059 DEBUG oslo_service.service [-] privsep_namespace.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.488 71059 DEBUG oslo_service.service [-] privsep_namespace.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.488 71059 DEBUG oslo_service.service [-] privsep_namespace.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.488 71059 DEBUG oslo_service.service [-] privsep_namespace.thread_pool_size = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.488 71059 DEBUG oslo_service.service [-] privsep_namespace.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.488 71059 DEBUG oslo_service.service [-] privsep_conntrack.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.488 71059 DEBUG oslo_service.service [-] privsep_conntrack.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.488 71059 DEBUG oslo_service.service [-] privsep_conntrack.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.489 71059 DEBUG oslo_service.service [-] privsep_conntrack.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.489 71059 DEBUG oslo_service.service [-] privsep_conntrack.thread_pool_size = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.489 71059 DEBUG oslo_service.service [-] privsep_conntrack.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.489 71059 DEBUG oslo_service.service [-] privsep_link.capabilities      = [12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.489 71059 DEBUG oslo_service.service [-] privsep_link.group             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.489 71059 DEBUG oslo_service.service [-] privsep_link.helper_command    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.489 71059 DEBUG oslo_service.service [-] privsep_link.logger_name       = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.489 71059 DEBUG oslo_service.service [-] privsep_link.thread_pool_size  = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.489 71059 DEBUG oslo_service.service [-] privsep_link.user              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.489 71059 DEBUG oslo_service.service [-] AGENT.check_child_processes_action = respawn log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.489 71059 DEBUG oslo_service.service [-] AGENT.check_child_processes_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.490 71059 DEBUG oslo_service.service [-] AGENT.comment_iptables_rules   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.490 71059 DEBUG oslo_service.service [-] AGENT.debug_iptables_rules     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.490 71059 DEBUG oslo_service.service [-] AGENT.kill_scripts_path        = /etc/neutron/kill_scripts/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.490 71059 DEBUG oslo_service.service [-] AGENT.root_helper              = sudo neutron-rootwrap /etc/neutron/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.490 71059 DEBUG oslo_service.service [-] AGENT.root_helper_daemon       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.490 71059 DEBUG oslo_service.service [-] AGENT.use_helper_for_ns_read   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.490 71059 DEBUG oslo_service.service [-] AGENT.use_random_fully         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.490 71059 DEBUG oslo_service.service [-] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.490 71059 DEBUG oslo_service.service [-] QUOTAS.default_quota           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.491 71059 DEBUG oslo_service.service [-] QUOTAS.quota_driver            = neutron.db.quota.driver_nolock.DbQuotaNoLockDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.491 71059 DEBUG oslo_service.service [-] QUOTAS.quota_network           = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.491 71059 DEBUG oslo_service.service [-] QUOTAS.quota_port              = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.491 71059 DEBUG oslo_service.service [-] QUOTAS.quota_security_group    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.491 71059 DEBUG oslo_service.service [-] QUOTAS.quota_security_group_rule = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.491 71059 DEBUG oslo_service.service [-] QUOTAS.quota_subnet            = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.491 71059 DEBUG oslo_service.service [-] QUOTAS.track_quota_usage       = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.491 71059 DEBUG oslo_service.service [-] nova.auth_section              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.492 71059 DEBUG oslo_service.service [-] nova.auth_type                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.492 71059 DEBUG oslo_service.service [-] nova.cafile                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.492 71059 DEBUG oslo_service.service [-] nova.certfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.492 71059 DEBUG oslo_service.service [-] nova.collect_timing            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.492 71059 DEBUG oslo_service.service [-] nova.endpoint_type             = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.492 71059 DEBUG oslo_service.service [-] nova.insecure                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.492 71059 DEBUG oslo_service.service [-] nova.keyfile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.492 71059 DEBUG oslo_service.service [-] nova.region_name               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.492 71059 DEBUG oslo_service.service [-] nova.split_loggers             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.493 71059 DEBUG oslo_service.service [-] nova.timeout                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.493 71059 DEBUG oslo_service.service [-] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.493 71059 DEBUG oslo_service.service [-] placement.auth_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.493 71059 DEBUG oslo_service.service [-] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.493 71059 DEBUG oslo_service.service [-] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.493 71059 DEBUG oslo_service.service [-] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.493 71059 DEBUG oslo_service.service [-] placement.endpoint_type        = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.493 71059 DEBUG oslo_service.service [-] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.493 71059 DEBUG oslo_service.service [-] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.493 71059 DEBUG oslo_service.service [-] placement.region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.493 71059 DEBUG oslo_service.service [-] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.494 71059 DEBUG oslo_service.service [-] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.494 71059 DEBUG oslo_service.service [-] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.494 71059 DEBUG oslo_service.service [-] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.494 71059 DEBUG oslo_service.service [-] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.494 71059 DEBUG oslo_service.service [-] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.494 71059 DEBUG oslo_service.service [-] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.494 71059 DEBUG oslo_service.service [-] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.494 71059 DEBUG oslo_service.service [-] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.494 71059 DEBUG oslo_service.service [-] ironic.enable_notifications    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.494 71059 DEBUG oslo_service.service [-] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.495 71059 DEBUG oslo_service.service [-] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.495 71059 DEBUG oslo_service.service [-] ironic.interface               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.495 71059 DEBUG oslo_service.service [-] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.495 71059 DEBUG oslo_service.service [-] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.495 71059 DEBUG oslo_service.service [-] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.495 71059 DEBUG oslo_service.service [-] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.495 71059 DEBUG oslo_service.service [-] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.495 71059 DEBUG oslo_service.service [-] ironic.service_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.495 71059 DEBUG oslo_service.service [-] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.495 71059 DEBUG oslo_service.service [-] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.495 71059 DEBUG oslo_service.service [-] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.496 71059 DEBUG oslo_service.service [-] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.496 71059 DEBUG oslo_service.service [-] ironic.valid_interfaces        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.496 71059 DEBUG oslo_service.service [-] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.496 71059 DEBUG oslo_service.service [-] cli_script.dry_run             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.496 71059 DEBUG oslo_service.service [-] ovn.allow_stateless_action_supported = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.496 71059 DEBUG oslo_service.service [-] ovn.dhcp_default_lease_time    = 43200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.496 71059 DEBUG oslo_service.service [-] ovn.disable_ovn_dhcp_for_baremetal_ports = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.496 71059 DEBUG oslo_service.service [-] ovn.dns_servers                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.496 71059 DEBUG oslo_service.service [-] ovn.enable_distributed_floating_ip = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.496 71059 DEBUG oslo_service.service [-] ovn.neutron_sync_mode          = log log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.497 71059 DEBUG oslo_service.service [-] ovn.ovn_dhcp4_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.497 71059 DEBUG oslo_service.service [-] ovn.ovn_dhcp6_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.497 71059 DEBUG oslo_service.service [-] ovn.ovn_emit_need_to_frag      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.497 71059 DEBUG oslo_service.service [-] ovn.ovn_l3_mode                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.497 71059 DEBUG oslo_service.service [-] ovn.ovn_l3_scheduler           = leastloaded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.497 71059 DEBUG oslo_service.service [-] ovn.ovn_metadata_enabled       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.497 71059 DEBUG oslo_service.service [-] ovn.ovn_nb_ca_cert             =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.497 71059 DEBUG oslo_service.service [-] ovn.ovn_nb_certificate         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.497 71059 DEBUG oslo_service.service [-] ovn.ovn_nb_connection          = tcp:127.0.0.1:6641 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.497 71059 DEBUG oslo_service.service [-] ovn.ovn_nb_private_key         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.498 71059 DEBUG oslo_service.service [-] ovn.ovn_sb_ca_cert             = /etc/pki/tls/certs/ovndbca.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.498 71059 DEBUG oslo_service.service [-] ovn.ovn_sb_certificate         = /etc/pki/tls/certs/ovndb.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.498 71059 DEBUG oslo_service.service [-] ovn.ovn_sb_connection          = ssl:ovsdbserver-sb.openstack.svc:6642 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.498 71059 DEBUG oslo_service.service [-] ovn.ovn_sb_private_key         = /etc/pki/tls/private/ovndb.key log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.498 71059 DEBUG oslo_service.service [-] ovn.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.498 71059 DEBUG oslo_service.service [-] ovn.ovsdb_log_level            = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.498 71059 DEBUG oslo_service.service [-] ovn.ovsdb_probe_interval       = 60000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.498 71059 DEBUG oslo_service.service [-] ovn.ovsdb_retry_max_interval   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.498 71059 DEBUG oslo_service.service [-] ovn.vhost_sock_dir             = /var/run/openvswitch log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.498 71059 DEBUG oslo_service.service [-] ovn.vif_type                   = ovs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.499 71059 DEBUG oslo_service.service [-] OVS.bridge_mac_table_size      = 50000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.499 71059 DEBUG oslo_service.service [-] OVS.igmp_snooping_enable       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.499 71059 DEBUG oslo_service.service [-] OVS.ovsdb_timeout              = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.499 71059 DEBUG oslo_service.service [-] ovs.ovsdb_connection           = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.499 71059 DEBUG oslo_service.service [-] ovs.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.499 71059 DEBUG oslo_service.service [-] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.499 71059 DEBUG oslo_service.service [-] oslo_messaging_rabbit.amqp_durable_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.499 71059 DEBUG oslo_service.service [-] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.499 71059 DEBUG oslo_service.service [-] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.499 71059 DEBUG oslo_service.service [-] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.500 71059 DEBUG oslo_service.service [-] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.500 71059 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.500 71059 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.500 71059 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.500 71059 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.500 71059 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.500 71059 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.500 71059 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.500 71059 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.500 71059 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.501 71059 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.501 71059 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.501 71059 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.501 71059 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.501 71059 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.501 71059 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_queue = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.501 71059 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.501 71059 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.501 71059 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.501 71059 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.502 71059 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.502 71059 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.502 71059 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.502 71059 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.502 71059 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.502 71059 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.502 71059 DEBUG oslo_service.service [-] oslo_messaging_notifications.driver = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.502 71059 DEBUG oslo_service.service [-] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.502 71059 DEBUG oslo_service.service [-] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.502 71059 DEBUG oslo_service.service [-] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:44:11.503 71059 DEBUG oslo_service.service [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613#033[00m
Oct  9 09:44:11 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:44:11 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:44:11 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:44:11.789 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:44:12 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:44:12 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000013s ======
Oct  9 09:44:12 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:44:12.952 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000013s
Oct  9 09:44:13 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:44:13 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:44:13 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:44:13.791 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:44:14 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:44:14 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:44:14 compute-1 systemd-logind[798]: New session 36 of user zuul.
Oct  9 09:44:14 compute-1 systemd[1]: Started Session 36 of User zuul.
Oct  9 09:44:14 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:44:14 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000013s ======
Oct  9 09:44:14 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:44:14.955 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000013s
Oct  9 09:44:15 compute-1 python3.9[71458]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  9 09:44:15 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:44:15 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:44:15 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:44:15 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:44:15.793 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:44:16 compute-1 python3.9[71615]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps -a --filter name=^nova_virtlogd$ --format \{\{.Names\}\} _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  9 09:44:16 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:44:16 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:44:16 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:44:16.957 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:44:17 compute-1 python3.9[71776]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Oct  9 09:44:17 compute-1 systemd[1]: Reloading.
Oct  9 09:44:17 compute-1 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  9 09:44:17 compute-1 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  9 09:44:17 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:44:17 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:44:17 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:44:17.794 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:44:18 compute-1 python3.9[71962]: ansible-ansible.builtin.service_facts Invoked
Oct  9 09:44:18 compute-1 network[71979]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Oct  9 09:44:18 compute-1 network[71980]: 'network-scripts' will be removed from distribution in near future.
Oct  9 09:44:18 compute-1 network[71981]: It is advised to switch to 'NetworkManager' instead for network management.
Oct  9 09:44:18 compute-1 podman[71987]: 2025-10-09 09:44:18.864285228 +0000 UTC m=+0.068218836 container health_status 36bf385830b85e085dc3ff9729118ef141f0f1a0fdc2513f7947ab6ab795421e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Oct  9 09:44:18 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:44:18 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:44:18 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:44:18.959 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:44:19 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:44:19 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:44:19 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:44:19.797 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:44:20 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:44:20 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:44:20 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:44:20 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:44:20.961 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:44:21 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:44:21 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000013s ======
Oct  9 09:44:21 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:44:21.799 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000013s
Oct  9 09:44:21 compute-1 python3.9[72271]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_libvirt.target state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  9 09:44:22 compute-1 python3.9[72424]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtlogd_wrapper.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  9 09:44:22 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:44:22 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:44:22 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:44:22.963 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:44:23 compute-1 python3.9[72577]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtnodedevd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  9 09:44:23 compute-1 python3.9[72730]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtproxyd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  9 09:44:23 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:44:23 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:44:23 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:44:23.801 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:44:24 compute-1 python3.9[72884]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtqemud.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  9 09:44:24 compute-1 python3.9[73037]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtsecretd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  9 09:44:24 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:44:24 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:44:24 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:44:24.965 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:44:25 compute-1 python3.9[73190]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtstoraged.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  9 09:44:25 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:44:25 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:44:25 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:44:25 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:44:25.803 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:44:26 compute-1 python3.9[73344]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_libvirt.target state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:44:26 compute-1 python3.9[73496]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtlogd_wrapper.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:44:26 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:44:26 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:44:26 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:44:26.967 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:44:27 compute-1 python3.9[73648]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtnodedevd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:44:27 compute-1 python3.9[73800]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtproxyd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:44:27 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:44:27 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:44:27 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:44:27.806 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:44:27 compute-1 python3.9[73953]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtqemud.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:44:28 compute-1 python3.9[74105]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtsecretd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:44:28 compute-1 python3.9[74257]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtstoraged.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:44:28 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:44:28 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:44:28 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:44:28.970 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:44:29 compute-1 python3.9[74434]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_libvirt.target state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:44:29 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:44:29 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:44:29 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:44:29.808 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:44:29 compute-1 python3.9[74587]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtlogd_wrapper.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:44:30 compute-1 python3.9[74739]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtnodedevd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:44:30 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:44:30 compute-1 python3.9[74891]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtproxyd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:44:30 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:44:30 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000013s ======
Oct  9 09:44:30 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:44:30.972 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000013s
Oct  9 09:44:31 compute-1 python3.9[75043]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtqemud.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:44:31 compute-1 python3.9[75196]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtsecretd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:44:31 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:44:31 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:44:31 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:44:31.811 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:44:32 compute-1 python3.9[75348]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtstoraged.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:44:32 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:44:32 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:44:32 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:44:32.975 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:44:32 compute-1 python3.9[75500]: ansible-ansible.legacy.command Invoked with _raw_params=if systemctl is-active certmonger.service; then#012  systemctl disable --now certmonger.service#012  test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service#012fi#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  9 09:44:33 compute-1 python3.9[75653]: ansible-ansible.builtin.find Invoked with file_type=any hidden=True paths=['/var/lib/certmonger/requests'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Oct  9 09:44:33 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:44:33 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:44:33 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:44:33.813 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:44:34 compute-1 python3.9[75805]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Oct  9 09:44:34 compute-1 systemd[1]: Reloading.
Oct  9 09:44:34 compute-1 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  9 09:44:34 compute-1 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  9 09:44:34 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:44:34 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:44:34 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:44:34.977 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:44:35 compute-1 python3.9[75992]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_libvirt.target _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  9 09:44:35 compute-1 python3.9[76146]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtlogd_wrapper.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  9 09:44:35 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:44:35 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:44:35 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:44:35 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:44:35.816 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:44:35 compute-1 python3.9[76299]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtnodedevd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  9 09:44:36 compute-1 python3.9[76452]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtproxyd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  9 09:44:36 compute-1 python3.9[76605]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtqemud.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  9 09:44:36 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:44:36 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:44:36 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:44:36.979 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:44:37 compute-1 python3.9[76758]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtsecretd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  9 09:44:37 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:44:37 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:44:37 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:44:37.818 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:44:37 compute-1 python3.9[76912]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtstoraged.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  9 09:44:38 compute-1 podman[77037]: 2025-10-09 09:44:38.741192809 +0000 UTC m=+0.040938210 container health_status 5e1150c93314d5b38815fc8130e03b03cc91771cd442ce724d63cd33a7373f75 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Oct  9 09:44:38 compute-1 python3.9[77081]: ansible-ansible.builtin.getent Invoked with database=passwd key=libvirt fail_key=True service=None split=None
Oct  9 09:44:38 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:44:38 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:44:38 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:44:38.981 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:44:39 compute-1 python3.9[77236]: ansible-ansible.builtin.group Invoked with gid=42473 name=libvirt state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Oct  9 09:44:39 compute-1 rsyslogd[1241]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct  9 09:44:39 compute-1 rsyslogd[1241]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct  9 09:44:39 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:44:39 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:44:39 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:44:39.820 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:44:40 compute-1 python3.9[77395]: ansible-ansible.builtin.user Invoked with comment=libvirt user group=libvirt groups=[''] name=libvirt shell=/sbin/nologin state=present uid=42473 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-1 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Oct  9 09:44:40 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:44:40 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:44:40 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:44:40 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:44:40.981 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:44:41 compute-1 python3.9[77555]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct  9 09:44:41 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:44:41 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000013s ======
Oct  9 09:44:41 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:44:41.822 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000013s
Oct  9 09:44:41 compute-1 python3.9[77640]: ansible-ansible.legacy.dnf Invoked with name=['libvirt ', 'libvirt-admin ', 'libvirt-client ', 'libvirt-daemon ', 'qemu-kvm', 'qemu-img', 'libguestfs', 'libseccomp', 'swtpm', 'swtpm-tools', 'edk2-ovmf', 'ceph-common', 'cyrus-sasl-scram'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct  9 09:44:42 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:44:42 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:44:42 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:44:42.985 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:44:43 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:44:43 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:44:43 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:44:43.826 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:44:44 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:44:44 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:44:44 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:44:44.987 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:44:45 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:44:45 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:44:45 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:44:45 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:44:45.828 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:44:46 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:44:46 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:44:46 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:44:46.989 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:44:47 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:44:47 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:44:47 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:44:47.831 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:44:48 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:44:48 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:44:48 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:44:48.992 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:44:49 compute-1 podman[77680]: 2025-10-09 09:44:49.389237997 +0000 UTC m=+0.060234368 container health_status 36bf385830b85e085dc3ff9729118ef141f0f1a0fdc2513f7947ab6ab795421e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_id=ovn_controller, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Oct  9 09:44:49 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:44:49 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:44:49 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:44:49.833 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:44:50 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:44:50 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:44:50 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:44:50 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:44:50.994 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:44:51 compute-1 ceph-osd[7514]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct  9 09:44:51 compute-1 ceph-osd[7514]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 600.0 total, 600.0 interval#012Cumulative writes: 8413 writes, 33K keys, 8413 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.04 MB/s#012Cumulative WAL: 8413 writes, 1875 syncs, 4.49 writes per sync, written: 0.02 GB, 0.04 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 8413 writes, 33K keys, 8413 commit groups, 1.0 writes per commit group, ingest: 21.18 MB, 0.04 MB/s#012Interval WAL: 8413 writes, 1875 syncs, 4.49 writes per sync, written: 0.02 GB, 0.04 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0#012 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 600.0 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x560c99273350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 2.3e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **#012#012** Compaction Stats [m-0] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-0] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 600.0 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x560c99273350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 2.3e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [m-0] **#012#012** Compaction Stats [m-1] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-1] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 600.0 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0
seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable
Oct  9 09:44:51 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:44:51 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000013s ======
Oct  9 09:44:51 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:44:51.835 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000013s
Oct  9 09:44:52 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:44:52 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:44:52 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:44:52.996 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:44:53 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:44:53 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000013s ======
Oct  9 09:44:53 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:44:53.838 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000013s
Oct  9 09:44:55 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:44:55 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:44:55 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:44:54.998 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:44:55 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:44:55 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:44:55 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000014s ======
Oct  9 09:44:55 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:44:55.841 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000014s
Oct  9 09:44:57 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:44:57 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:44:57 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:44:57.001 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:44:57 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:44:57 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:44:57 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:44:57.844 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:44:59 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:44:59 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:44:59 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:44:59.004 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:44:59 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:44:59 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:44:59 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:44:59.847 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:45:00 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:45:01 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:45:01 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000013s ======
Oct  9 09:45:01 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:45:01.007 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000013s
Oct  9 09:45:01 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:45:01 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:45:01 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:45:01.849 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:45:03 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:45:03 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:45:03 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:45:03.010 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:45:03 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:45:03 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:45:03 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:45:03.853 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:45:05 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:45:05 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000013s ======
Oct  9 09:45:05 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:45:05.012 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000013s
Oct  9 09:45:05 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:45:05 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:45:05 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000013s ======
Oct  9 09:45:05 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:45:05.856 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000013s
Oct  9 09:45:07 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:45:07 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:45:07 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:45:07.015 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:45:07 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:45:07 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000013s ======
Oct  9 09:45:07 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:45:07.859 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000013s
Oct  9 09:45:09 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:45:09 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:45:09 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:45:09.018 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:45:09 compute-1 podman[77740]: 2025-10-09 09:45:09.423757962 +0000 UTC m=+0.033515999 container health_status 5e1150c93314d5b38815fc8130e03b03cc91771cd442ce724d63cd33a7373f75 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  9 09:45:09 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:45:09 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:45:09 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:45:09.862 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:45:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:45:10.024 71059 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  9 09:45:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:45:10.025 71059 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  9 09:45:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:45:10.025 71059 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  9 09:45:10 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:45:11 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:45:11 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:45:11 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:45:11.020 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:45:11 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:45:11 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000016s ======
Oct  9 09:45:11 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:45:11.864 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000016s
Oct  9 09:45:13 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:45:13 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:45:13 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:45:13.023 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:45:13 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:45:13 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000015s ======
Oct  9 09:45:13 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:45:13.867 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000015s
Oct  9 09:45:15 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:45:15 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:45:15 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:45:15.026 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:45:15 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:45:15 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:45:15 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:45:15 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:45:15 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  9 09:45:15 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:45:15 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:45:15 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  9 09:45:15 compute-1 ceph-mon[9795]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct  9 09:45:15 compute-1 ceph-mon[9795]: rocksdb: [db/db_impl/db_impl.cc:1111]
** DB Stats **
Uptime(secs): 600.0 total, 600.0 interval
Cumulative writes: 2433 writes, 14K keys, 2433 commit groups, 1.0 writes per commit group, ingest: 0.04 GB, 0.06 MB/s
Cumulative WAL: 2433 writes, 2433 syncs, 1.00 writes per sync, written: 0.04 GB, 0.06 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 2433 writes, 14K keys, 2433 commit groups, 1.0 writes per commit group, ingest: 38.79 MB, 0.06 MB/s
Interval WAL: 2433 writes, 2433 syncs, 1.00 writes per sync, written: 0.04 GB, 0.06 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent

** Compaction Stats [default] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
  L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0    456.2      0.05              0.03         6    0.008       0      0       0.0       0.0
  L6      1/0   11.02 MB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   3.0    472.3    408.0      0.16              0.09         5    0.031     19K   2240       0.0       0.0
 Sum      1/0   11.02 MB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   4.0    364.3    419.0      0.20              0.12        11    0.018     19K   2240       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   4.0    365.8    420.6      0.20              0.12        10    0.020     19K   2240       0.0       0.0

** Compaction Stats [default] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Low      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   0.0    472.3    408.0      0.16              0.09         5    0.031     19K   2240       0.0       0.0
High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0    464.0      0.05              0.03         5    0.009       0      0       0.0       0.0
User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      2.0      0.00              0.00         1    0.001       0      0       0.0       0.0

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 600.0 total, 600.0 interval
Flush(GB): cumulative 0.021, interval 0.021
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.08 GB write, 0.14 MB/s write, 0.07 GB read, 0.12 MB/s read, 0.2 seconds
Interval compaction: 0.08 GB write, 0.14 MB/s write, 0.07 GB read, 0.12 MB/s read, 0.2 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x55e4b55c29b0#2 capacity: 304.00 MB usage: 2.29 MB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 0 last_secs: 4.5e-05 secs_since: 0
Block cache entry stats(count,size,portion): DataBlock(169,2.09 MB,0.688272%) FilterBlock(11,66.42 KB,0.0213372%) IndexBlock(11,134.28 KB,0.0431362%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [default] **
Oct  9 09:45:15 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:45:15 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:45:15 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:45:15 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:45:15.870 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:45:17 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:45:17 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:45:17 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:45:17.029 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:45:17 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:45:17 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:45:17 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:45:17.872 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:45:19 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:45:19 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:45:19 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:45:19 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:45:19 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:45:19.032 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:45:19 compute-1 podman[78107]: 2025-10-09 09:45:19.571620708 +0000 UTC m=+0.083185523 container health_status 36bf385830b85e085dc3ff9729118ef141f0f1a0fdc2513f7947ab6ab795421e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=ovn_controller, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251001, io.buildah.version=1.41.3)
Oct  9 09:45:19 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:45:19 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:45:19 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:45:19.875 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:45:20 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:45:21 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:45:21 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:45:21 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:45:21.034 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:45:21 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:45:21 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:45:21 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:45:21.877 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:45:23 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:45:23 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:45:23 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:45:23.036 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:45:23 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:45:23 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:45:23 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:45:23.879 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:45:25 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:45:25 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:45:25 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:45:25.039 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:45:25 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:45:25 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:45:25 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:45:25 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:45:25.881 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:45:27 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:45:27 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:45:27 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:45:27.041 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:45:27 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:45:27 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:45:27 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:45:27.922 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:45:29 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:45:29 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:45:29 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:45:29.043 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:45:29 compute-1 kernel: SELinux:  Converting 469 SID table entries...
Oct  9 09:45:29 compute-1 kernel: SELinux:  policy capability network_peer_controls=1
Oct  9 09:45:29 compute-1 kernel: SELinux:  policy capability open_perms=1
Oct  9 09:45:29 compute-1 kernel: SELinux:  policy capability extended_socket_class=1
Oct  9 09:45:29 compute-1 kernel: SELinux:  policy capability always_check_network=0
Oct  9 09:45:29 compute-1 kernel: SELinux:  policy capability cgroup_seclabel=1
Oct  9 09:45:29 compute-1 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Oct  9 09:45:29 compute-1 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Oct  9 09:45:29 compute-1 dbus-broker-launch[790]: avc:  op=load_policy lsm=selinux seqno=3 res=1
Oct  9 09:45:29 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:45:29 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000016s ======
Oct  9 09:45:29 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:45:29.924 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000016s
Oct  9 09:45:30 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:45:31 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:45:31 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:45:31 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:45:31.045 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:45:31 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:45:31 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000016s ======
Oct  9 09:45:31 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:45:31.927 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000016s
Oct  9 09:45:33 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:45:33 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:45:33 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:45:33.048 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:45:33 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:45:33 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000016s ======
Oct  9 09:45:33 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:45:33.930 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000016s
Oct  9 09:45:35 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:45:35 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:45:35 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:45:35.050 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:45:35 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:45:35 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:45:35 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:45:35 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:45:35.933 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:45:36 compute-1 kernel: SELinux:  Converting 469 SID table entries...
Oct  9 09:45:36 compute-1 kernel: SELinux:  policy capability network_peer_controls=1
Oct  9 09:45:36 compute-1 kernel: SELinux:  policy capability open_perms=1
Oct  9 09:45:36 compute-1 kernel: SELinux:  policy capability extended_socket_class=1
Oct  9 09:45:36 compute-1 kernel: SELinux:  policy capability always_check_network=0
Oct  9 09:45:36 compute-1 kernel: SELinux:  policy capability cgroup_seclabel=1
Oct  9 09:45:36 compute-1 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Oct  9 09:45:36 compute-1 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Oct  9 09:45:37 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:45:37 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000015s ======
Oct  9 09:45:37 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:45:37.052 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000015s
Oct  9 09:45:37 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:45:37 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000015s ======
Oct  9 09:45:37 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:45:37.935 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000015s
Oct  9 09:45:39 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:45:39 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:45:39 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:45:39.055 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:45:39 compute-1 dbus-broker-launch[790]: avc:  op=load_policy lsm=selinux seqno=4 res=1
Oct  9 09:45:39 compute-1 podman[78194]: 2025-10-09 09:45:39.531220575 +0000 UTC m=+0.038324409 container health_status 5e1150c93314d5b38815fc8130e03b03cc91771cd442ce724d63cd33a7373f75 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct  9 09:45:39 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:45:39 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000015s ======
Oct  9 09:45:39 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:45:39.937 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000015s
Oct  9 09:45:40 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:45:41 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:45:41 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:45:41 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:45:41.057 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:45:41 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:45:41 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:45:41 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:45:41.940 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:45:43 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:45:43 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:45:43 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:45:43.059 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:45:43 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:45:43 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:45:43 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:45:43.943 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:45:45 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:45:45 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:45:45 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:45:45.062 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:45:45 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:45:45 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:45:45 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:45:45 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:45:45.944 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:45:47 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:45:47 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:45:47 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:45:47.065 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:45:47 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:45:47 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000015s ======
Oct  9 09:45:47 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:45:47.946 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000015s
Oct  9 09:45:49 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:45:49 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:45:49 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:45:49.065 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:45:49 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:45:49 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000016s ======
Oct  9 09:45:49 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:45:49.949 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000016s
Oct  9 09:45:50 compute-1 podman[83058]: 2025-10-09 09:45:50.546987997 +0000 UTC m=+0.057923089 container health_status 36bf385830b85e085dc3ff9729118ef141f0f1a0fdc2513f7947ab6ab795421e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller)
Oct  9 09:45:50 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:45:51 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:45:51 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:45:51 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:45:51.068 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:45:51 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:45:51 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:45:51 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:45:51.952 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:45:53 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:45:53 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:45:53 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:45:53.070 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:45:53 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:45:53 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000015s ======
Oct  9 09:45:53 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:45:53.954 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000015s
Oct  9 09:45:55 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:45:55 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:45:55 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:45:55.071 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:45:55 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:45:55 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:45:55 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:45:55 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:45:55.956 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:45:57 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:45:57 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:45:57 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:45:57.074 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:45:57 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:45:57 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:45:57 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:45:57.958 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:45:59 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:45:59 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:45:59 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:45:59.076 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:45:59 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:45:59 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000015s ======
Oct  9 09:45:59 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:45:59.960 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000015s
Oct  9 09:46:00 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:46:01 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:46:01 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:46:01 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:46:01.078 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:46:01 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:46:01 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000015s ======
Oct  9 09:46:01 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:46:01.963 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000015s
Oct  9 09:46:03 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:46:03 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:46:03 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:46:03.081 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:46:03 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:46:03 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:46:03 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:46:03.967 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:46:05 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:46:05 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:46:05 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:46:05.083 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:46:05 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:46:05 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:46:05 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:46:05 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:46:05.970 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:46:07 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:46:07 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:46:07 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:46:07.085 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:46:07 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:46:07 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:46:07 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:46:07.972 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:46:09 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:46:09 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:46:09 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:46:09.088 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:46:09 compute-1 podman[95047]: 2025-10-09 09:46:09.594600806 +0000 UTC m=+0.037092129 container health_status 5e1150c93314d5b38815fc8130e03b03cc91771cd442ce724d63cd33a7373f75 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Oct  9 09:46:09 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:46:09 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:46:09 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:46:09.975 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:46:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:46:10.025 71059 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  9 09:46:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:46:10.025 71059 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  9 09:46:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:46:10.025 71059 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  9 09:46:10 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:46:11 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:46:11 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:46:11 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:46:11.090 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:46:11 compute-1 kernel: SELinux:  Converting 470 SID table entries...
Oct  9 09:46:11 compute-1 kernel: SELinux:  policy capability network_peer_controls=1
Oct  9 09:46:11 compute-1 kernel: SELinux:  policy capability open_perms=1
Oct  9 09:46:11 compute-1 kernel: SELinux:  policy capability extended_socket_class=1
Oct  9 09:46:11 compute-1 kernel: SELinux:  policy capability always_check_network=0
Oct  9 09:46:11 compute-1 kernel: SELinux:  policy capability cgroup_seclabel=1
Oct  9 09:46:11 compute-1 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Oct  9 09:46:11 compute-1 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Oct  9 09:46:11 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:46:11 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:46:11 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:46:11.979 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:46:12 compute-1 dbus-broker-launch[789]: Noticed file-system modification, trigger reload.
Oct  9 09:46:12 compute-1 dbus-broker-launch[790]: avc:  op=load_policy lsm=selinux seqno=5 res=1
Oct  9 09:46:12 compute-1 dbus-broker-launch[789]: Noticed file-system modification, trigger reload.
Oct  9 09:46:13 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:46:13 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:46:13 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:46:13.093 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:46:13 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:46:13 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000011s ======
Oct  9 09:46:13 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:46:13.981 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct  9 09:46:15 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:46:15 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000011s ======
Oct  9 09:46:15 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:46:15.095 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct  9 09:46:15 compute-1 radosgw[13231]: INFO: RGWReshardLock::lock found lock on reshard.0000000001 to be held by another RGW process; skipping for now
Oct  9 09:46:15 compute-1 radosgw[13231]: INFO: RGWReshardLock::lock found lock on reshard.0000000003 to be held by another RGW process; skipping for now
Oct  9 09:46:15 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:46:15 compute-1 radosgw[13231]: INFO: RGWReshardLock::lock found lock on reshard.0000000005 to be held by another RGW process; skipping for now
Oct  9 09:46:15 compute-1 radosgw[13231]: INFO: RGWReshardLock::lock found lock on reshard.0000000007 to be held by another RGW process; skipping for now
Oct  9 09:46:15 compute-1 radosgw[13231]: INFO: RGWReshardLock::lock found lock on reshard.0000000009 to be held by another RGW process; skipping for now
Oct  9 09:46:15 compute-1 radosgw[13231]: INFO: RGWReshardLock::lock found lock on reshard.0000000011 to be held by another RGW process; skipping for now
Oct  9 09:46:15 compute-1 radosgw[13231]: INFO: RGWReshardLock::lock found lock on reshard.0000000013 to be held by another RGW process; skipping for now
Oct  9 09:46:15 compute-1 radosgw[13231]: INFO: RGWReshardLock::lock found lock on reshard.0000000015 to be held by another RGW process; skipping for now
Oct  9 09:46:15 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:46:15 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:46:15 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:46:15.984 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:46:17 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:46:17 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:46:17 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:46:17.098 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:46:17 compute-1 systemd[1]: Stopping OpenSSH server daemon...
Oct  9 09:46:17 compute-1 systemd[1]: sshd.service: Deactivated successfully.
Oct  9 09:46:17 compute-1 systemd[1]: Stopped OpenSSH server daemon.
Oct  9 09:46:17 compute-1 systemd[1]: sshd.service: Consumed 840ms CPU time, read 2.7M from disk, written 0B to disk.
Oct  9 09:46:17 compute-1 systemd[1]: Stopped target sshd-keygen.target.
Oct  9 09:46:17 compute-1 systemd[1]: Stopping sshd-keygen.target...
Oct  9 09:46:17 compute-1 systemd[1]: OpenSSH ecdsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Oct  9 09:46:17 compute-1 systemd[1]: OpenSSH ed25519 Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Oct  9 09:46:17 compute-1 systemd[1]: OpenSSH rsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Oct  9 09:46:17 compute-1 systemd[1]: Reached target sshd-keygen.target.
Oct  9 09:46:17 compute-1 systemd[1]: Starting OpenSSH server daemon...
Oct  9 09:46:17 compute-1 systemd[1]: Started OpenSSH server daemon.
Oct  9 09:46:17 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:46:17 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:46:17 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:46:17.986 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:46:18 compute-1 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Oct  9 09:46:18 compute-1 systemd[1]: Starting man-db-cache-update.service...
Oct  9 09:46:18 compute-1 systemd[1]: Reloading.
Oct  9 09:46:19 compute-1 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  9 09:46:19 compute-1 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  9 09:46:19 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:46:19 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:46:19 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:46:19.100 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:46:19 compute-1 systemd[1]: Queuing reload/restart jobs for marked units…
Oct  9 09:46:19 compute-1 systemd[1]: Starting PackageKit Daemon...
Oct  9 09:46:19 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  9 09:46:19 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:46:19 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:46:19 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  9 09:46:19 compute-1 systemd[1]: Started PackageKit Daemon.
Oct  9 09:46:19 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:46:19 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:46:19 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:46:19.988 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:46:20 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:46:20 compute-1 podman[98711]: 2025-10-09 09:46:20.666252451 +0000 UTC m=+0.094822214 container health_status 36bf385830b85e085dc3ff9729118ef141f0f1a0fdc2513f7947ab6ab795421e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, org.label-schema.schema-version=1.0, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Oct  9 09:46:20 compute-1 python3.9[98723]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Oct  9 09:46:20 compute-1 systemd[1]: Reloading.
Oct  9 09:46:20 compute-1 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  9 09:46:20 compute-1 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  9 09:46:21 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:46:21 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:46:21 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:46:21.103 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:46:21 compute-1 python3.9[100268]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd-tcp.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Oct  9 09:46:21 compute-1 systemd[1]: Reloading.
Oct  9 09:46:21 compute-1 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  9 09:46:21 compute-1 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  9 09:46:21 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:46:21 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:46:21 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:46:21.991 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:46:22 compute-1 python3.9[101546]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd-tls.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Oct  9 09:46:22 compute-1 systemd[1]: Reloading.
Oct  9 09:46:22 compute-1 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  9 09:46:22 compute-1 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  9 09:46:23 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:46:23 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:46:23 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:46:23.105 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:46:23 compute-1 python3.9[102866]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=virtproxyd-tcp.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Oct  9 09:46:23 compute-1 systemd[1]: Reloading.
Oct  9 09:46:23 compute-1 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  9 09:46:23 compute-1 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  9 09:46:23 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:46:23 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:46:23 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:46:23 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000011s ======
Oct  9 09:46:23 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:46:23.994 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct  9 09:46:24 compute-1 python3.9[105159]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct  9 09:46:24 compute-1 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Oct  9 09:46:24 compute-1 systemd[1]: Finished man-db-cache-update.service.
Oct  9 09:46:24 compute-1 systemd[1]: man-db-cache-update.service: Consumed 7.108s CPU time.
Oct  9 09:46:24 compute-1 systemd[1]: run-r4dcd61a2936c4a27a00885585635cc71.service: Deactivated successfully.
Oct  9 09:46:24 compute-1 systemd[1]: Reloading.
Oct  9 09:46:24 compute-1 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  9 09:46:24 compute-1 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  9 09:46:25 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:46:25 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:46:25 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:46:25.107 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:46:25 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:46:25 compute-1 python3.9[105679]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct  9 09:46:25 compute-1 systemd[1]: Reloading.
Oct  9 09:46:25 compute-1 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  9 09:46:25 compute-1 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  9 09:46:25 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:46:25 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:46:25 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:46:25.998 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:46:26 compute-1 python3.9[105869]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct  9 09:46:26 compute-1 systemd[1]: Reloading.
Oct  9 09:46:26 compute-1 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  9 09:46:26 compute-1 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  9 09:46:27 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:46:27 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:46:27 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:46:27.109 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:46:27 compute-1 python3.9[106059]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct  9 09:46:28 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:46:28 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:46:28 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:46:28.001 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:46:28 compute-1 python3.9[106215]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct  9 09:46:28 compute-1 systemd[1]: Reloading.
Oct  9 09:46:28 compute-1 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  9 09:46:28 compute-1 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  9 09:46:29 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:46:29 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:46:29 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:46:29.111 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:46:29 compute-1 python3.9[106405]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-tls.socket state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Oct  9 09:46:29 compute-1 systemd[1]: Reloading.
Oct  9 09:46:29 compute-1 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  9 09:46:29 compute-1 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  9 09:46:29 compute-1 systemd[1]: Listening on libvirt proxy daemon socket.
Oct  9 09:46:29 compute-1 systemd[1]: Listening on libvirt proxy daemon TLS IP socket.
Oct  9 09:46:30 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:46:30 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:46:30 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:46:30.004 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:46:30 compute-1 python3.9[106624]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct  9 09:46:30 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:46:31 compute-1 python3.9[106779]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct  9 09:46:31 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:46:31 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:46:31 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:46:31.113 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:46:31 compute-1 python3.9[106935]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct  9 09:46:32 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:46:32 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:46:32 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:46:32.006 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:46:32 compute-1 python3.9[107090]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct  9 09:46:32 compute-1 python3.9[107245]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct  9 09:46:33 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:46:33 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:46:33 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:46:33.115 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:46:33 compute-1 python3.9[107400]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct  9 09:46:33 compute-1 python3.9[107556]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct  9 09:46:34 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:46:34 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:46:34 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:46:34.009 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:46:34 compute-1 python3.9[107711]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct  9 09:46:35 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:46:35 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:46:35 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:46:35.117 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:46:35 compute-1 python3.9[107866]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct  9 09:46:35 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:46:35 compute-1 python3.9[108022]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct  9 09:46:36 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:46:36 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000011s ======
Oct  9 09:46:36 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:46:36.010 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct  9 09:46:36 compute-1 python3.9[108177]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct  9 09:46:36 compute-1 python3.9[108332]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct  9 09:46:37 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:46:37 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:46:37 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:46:37.119 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:46:37 compute-1 python3.9[108487]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct  9 09:46:38 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:46:38 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:46:38 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:46:38.013 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:46:38 compute-1 python3.9[108643]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct  9 09:46:39 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:46:39 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000011s ======
Oct  9 09:46:39 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:46:39.121 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct  9 09:46:40 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:46:40 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000012s ======
Oct  9 09:46:40 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:46:40.013 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000012s
Oct  9 09:46:40 compute-1 podman[108771]: 2025-10-09 09:46:40.486270058 +0000 UTC m=+0.041619735 container health_status 5e1150c93314d5b38815fc8130e03b03cc91771cd442ce724d63cd33a7373f75 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Oct  9 09:46:40 compute-1 python3.9[108813]: ansible-ansible.builtin.file Invoked with group=root owner=root path=/etc/tmpfiles.d/ setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Oct  9 09:46:40 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:46:41 compute-1 python3.9[108967]: ansible-ansible.builtin.file Invoked with group=root owner=root path=/var/lib/edpm-config/firewall setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Oct  9 09:46:41 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:46:41 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:46:41 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:46:41.124 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:46:41 compute-1 python3.9[109120]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/libvirt setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  9 09:46:41 compute-1 python3.9[109272]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/libvirt/private setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  9 09:46:42 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:46:42 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:46:42 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:46:42.014 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:46:42 compute-1 python3.9[109424]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/CA setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  9 09:46:42 compute-1 python3.9[109576]: ansible-ansible.builtin.file Invoked with group=qemu owner=root path=/etc/pki/qemu setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Oct  9 09:46:43 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:46:43 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:46:43 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:46:43.126 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:46:43 compute-1 python3.9[109729]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtlogd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  9 09:46:44 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:46:44 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:46:44 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:46:44.016 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:46:44 compute-1 python3.9[109854]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtlogd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1760003203.2617877-1623-94484127265197/.source.conf follow=False _original_basename=virtlogd.conf checksum=d7a72ae92c2c205983b029473e05a6aa4c58ec24 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:46:44 compute-1 python3.9[110006]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtnodedevd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  9 09:46:45 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:46:45 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000011s ======
Oct  9 09:46:45 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:46:45.129 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct  9 09:46:45 compute-1 python3.9[110131]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtnodedevd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1760003204.4043381-1623-247449231915489/.source.conf follow=False _original_basename=virtnodedevd.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:46:45 compute-1 python3.9[110284]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtproxyd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  9 09:46:45 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:46:45 compute-1 python3.9[110409]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtproxyd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1760003205.254716-1623-102473465103626/.source.conf follow=False _original_basename=virtproxyd.conf checksum=28bc484b7c9988e03de49d4fcc0a088ea975f716 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:46:46 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:46:46 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:46:46 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:46:46.018 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:46:46 compute-1 python3.9[110561]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtqemud.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  9 09:46:46 compute-1 python3.9[110686]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtqemud.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1760003206.1206498-1623-176114099442017/.source.conf follow=False _original_basename=virtqemud.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:46:47 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:46:47 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:46:47 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:46:47.132 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:46:47 compute-1 python3.9[110838]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/qemu.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  9 09:46:47 compute-1 python3.9[110964]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/qemu.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1760003206.967985-1623-40843198265761/.source.conf follow=False _original_basename=qemu.conf.j2 checksum=c44de21af13c90603565570f09ff60c6a41ed8df backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:46:48 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:46:48 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:46:48 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:46:48.020 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:46:48 compute-1 python3.9[111116]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtsecretd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  9 09:46:48 compute-1 python3.9[111241]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtsecretd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1760003207.8430717-1623-272439944509838/.source.conf follow=False _original_basename=virtsecretd.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:46:49 compute-1 python3.9[111393]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/auth.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  9 09:46:49 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:46:49 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:46:49 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:46:49.134 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:46:49 compute-1 python3.9[111516]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/auth.conf group=libvirt mode=0600 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1760003208.6829345-1623-166609101794916/.source.conf follow=False _original_basename=auth.conf checksum=a94cd818c374cec2c8425b70d2e0e2f41b743ae4 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:46:49 compute-1 python3.9[111669]: ansible-ansible.legacy.stat Invoked with path=/etc/sasl2/libvirt.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  9 09:46:50 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:46:50 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:46:50 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:46:50.022 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:46:50 compute-1 python3.9[111819]: ansible-ansible.legacy.copy Invoked with dest=/etc/sasl2/libvirt.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1760003209.4848008-1623-114454026950569/.source.conf follow=False _original_basename=sasl_libvirt.conf checksum=652e4d404bf79253d06956b8e9847c9364979d4a backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:46:50 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:46:50 compute-1 podman[111943]: 2025-10-09 09:46:50.879439718 +0000 UTC m=+0.052134596 container health_status 36bf385830b85e085dc3ff9729118ef141f0f1a0fdc2513f7947ab6ab795421e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ovn_controller, org.label-schema.build-date=20251001, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Oct  9 09:46:51 compute-1 python3.9[111989]: ansible-ansible.legacy.command Invoked with cmd=saslpasswd2 -f /etc/libvirt/passwd.db -p -a libvirt -u openstack migration stdin=12345678 _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None
Oct  9 09:46:51 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:46:51 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:46:51 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:46:51.136 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:46:51 compute-1 python3.9[112150]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtlogd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:46:51 compute-1 python3.9[112302]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtlogd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:46:52 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:46:52 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:46:52 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:46:52.024 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:46:52 compute-1 python3.9[112454]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:46:52 compute-1 python3.9[112606]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:46:53 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:46:53 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:46:53 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:46:53.138 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:46:53 compute-1 python3.9[112758]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:46:53 compute-1 python3.9[112911]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:46:54 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:46:54 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:46:54 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:46:54.025 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:46:54 compute-1 python3.9[113063]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:46:54 compute-1 python3.9[113215]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:46:54 compute-1 python3.9[113367]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:46:55 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:46:55 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:46:55 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:46:55.140 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:46:55 compute-1 python3.9[113519]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:46:55 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:46:55 compute-1 python3.9[113672]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:46:56 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:46:56 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:46:56 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:46:56.027 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:46:56 compute-1 python3.9[113824]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:46:56 compute-1 systemd[1]: Starting Cleanup of Temporary Directories...
Oct  9 09:46:56 compute-1 systemd[1]: systemd-tmpfiles-clean.service: Deactivated successfully.
Oct  9 09:46:56 compute-1 systemd[1]: Finished Cleanup of Temporary Directories.
Oct  9 09:46:56 compute-1 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dclean.service.mount: Deactivated successfully.
Oct  9 09:46:56 compute-1 python3.9[113978]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:46:57 compute-1 python3.9[114131]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:46:57 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:46:57 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:46:57 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:46:57.142 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:46:57 compute-1 python3.9[114284]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtlogd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  9 09:46:58 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:46:58 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:46:58 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:46:58.030 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:46:58 compute-1 python3.9[114407]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtlogd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760003217.602828-2286-211289499365653/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:46:58 compute-1 python3.9[114559]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtlogd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  9 09:46:59 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:46:59 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:46:59 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:46:59.144 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:46:59 compute-1 python3.9[114682]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtlogd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760003218.4361467-2286-80334658557767/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:46:59 compute-1 python3.9[114835]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  9 09:47:00 compute-1 python3.9[114958]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760003219.2670748-2286-149914489264776/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:47:00 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:47:00 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:47:00 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:47:00.032 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:47:00 compute-1 python3.9[115110]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  9 09:47:00 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:47:00 compute-1 python3.9[115233]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760003220.1324515-2286-223116848468069/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:47:01 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:47:01 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:47:01 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:47:01.146 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:47:01 compute-1 python3.9[115385]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  9 09:47:01 compute-1 python3.9[115509]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760003220.9936302-2286-136501184197750/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:47:02 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:47:02 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:47:02 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:47:02.034 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:47:02 compute-1 python3.9[115661]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  9 09:47:02 compute-1 python3.9[115784]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760003221.878107-2286-277151286130713/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:47:03 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:47:03 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:47:03 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:47:03.148 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:47:03 compute-1 python3.9[115936]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  9 09:47:03 compute-1 python3.9[116060]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760003222.8004577-2286-59029059534338/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:47:04 compute-1 python3.9[116212]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  9 09:47:04 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:47:04 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:47:04 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:47:04.036 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:47:04 compute-1 python3.9[116335]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760003223.6758366-2286-147574103004957/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:47:04 compute-1 python3.9[116487]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  9 09:47:05 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:47:05 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:47:05 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:47:05.150 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:47:05 compute-1 python3.9[116610]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760003224.557896-2286-44985817347735/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:47:05 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:47:05 compute-1 python3.9[116763]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  9 09:47:06 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:47:06 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:47:06 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:47:06.038 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:47:06 compute-1 python3.9[116886]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760003225.431307-2286-192413514458065/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:47:06 compute-1 python3.9[117038]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  9 09:47:07 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:47:07 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:47:07 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:47:07.152 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:47:07 compute-1 python3.9[117161]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760003226.3589654-2286-11005073057086/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:47:07 compute-1 python3.9[117314]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  9 09:47:08 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:47:08 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:47:08 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:47:08.039 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:47:08 compute-1 python3.9[117437]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760003227.327268-2286-8652316216764/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:47:08 compute-1 python3.9[117589]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  9 09:47:08 compute-1 python3.9[117712]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760003228.193715-2286-89789867298836/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:47:09 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:47:09 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:47:09 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:47:09.153 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:47:09 compute-1 python3.9[117864]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  9 09:47:09 compute-1 python3.9[117988]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760003229.0156772-2286-272140018624589/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:47:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:47:10.026 71059 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  9 09:47:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:47:10.026 71059 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  9 09:47:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:47:10.026 71059 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  9 09:47:10 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:47:10 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:47:10 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:47:10.039 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:47:10 compute-1 python3.9[118163]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail#012ls -lRZ /run/libvirt | grep -E ':container_\S+_t'#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  9 09:47:10 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:47:10 compute-1 podman[118290]: 2025-10-09 09:47:10.887276528 +0000 UTC m=+0.064951752 container health_status 5e1150c93314d5b38815fc8130e03b03cc91771cd442ce724d63cd33a7373f75 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent)
Oct  9 09:47:11 compute-1 python3.9[118334]: ansible-ansible.posix.seboolean Invoked with name=os_enable_vtpm persistent=True state=True ignore_selinux_state=False
Oct  9 09:47:11 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:47:11 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:47:11 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:47:11.155 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:47:12 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:47:12 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000011s ======
Oct  9 09:47:12 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:47:12.041 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct  9 09:47:12 compute-1 dbus-broker-launch[790]: avc:  op=load_policy lsm=selinux seqno=6 res=1
Oct  9 09:47:12 compute-1 python3.9[118491]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/servercert.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:47:12 compute-1 python3.9[118643]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/private/serverkey.pem group=root mode=0600 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:47:13 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:47:13 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000011s ======
Oct  9 09:47:13 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:47:13.156 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct  9 09:47:13 compute-1 python3.9[118795]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/clientcert.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:47:13 compute-1 python3.9[118948]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/private/clientkey.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:47:14 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:47:14 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:47:14 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:47:14.044 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:47:14 compute-1 python3.9[119100]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/CA/cacert.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/ca.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:47:14 compute-1 python3.9[119252]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/server-cert.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:47:15 compute-1 python3.9[119404]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/server-key.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:47:15 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:47:15 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:47:15 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:47:15.159 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:47:15 compute-1 python3.9[119557]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/client-cert.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:47:15 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:47:16 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:47:16 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:47:16 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:47:16.045 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:47:16 compute-1 python3.9[119709]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/client-key.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:47:16 compute-1 python3.9[119861]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/ca-cert.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/ca.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:47:17 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:47:17 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:47:17 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:47:17.161 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:47:17 compute-1 python3.9[120013]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtlogd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct  9 09:47:17 compute-1 systemd[1]: Reloading.
Oct  9 09:47:17 compute-1 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  9 09:47:17 compute-1 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  9 09:47:17 compute-1 systemd[1]: Starting dnf makecache...
Oct  9 09:47:17 compute-1 systemd[1]: Starting libvirt logging daemon socket...
Oct  9 09:47:17 compute-1 systemd[1]: Listening on libvirt logging daemon socket.
Oct  9 09:47:17 compute-1 systemd[1]: Starting libvirt logging daemon admin socket...
Oct  9 09:47:17 compute-1 systemd[1]: Listening on libvirt logging daemon admin socket.
Oct  9 09:47:17 compute-1 systemd[1]: Starting libvirt logging daemon...
Oct  9 09:47:17 compute-1 systemd[1]: Started libvirt logging daemon.
Oct  9 09:47:17 compute-1 dnf[120050]: Metadata cache refreshed recently.
Oct  9 09:47:17 compute-1 systemd[1]: dnf-makecache.service: Deactivated successfully.
Oct  9 09:47:17 compute-1 systemd[1]: Finished dnf makecache.
Oct  9 09:47:18 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:47:18 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:47:18 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:47:18.047 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:47:18 compute-1 python3.9[120207]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtnodedevd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct  9 09:47:18 compute-1 systemd[1]: Reloading.
Oct  9 09:47:18 compute-1 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  9 09:47:18 compute-1 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  9 09:47:18 compute-1 systemd[1]: Starting libvirt nodedev daemon socket...
Oct  9 09:47:18 compute-1 systemd[1]: Listening on libvirt nodedev daemon socket.
Oct  9 09:47:18 compute-1 systemd[1]: Starting libvirt nodedev daemon admin socket...
Oct  9 09:47:18 compute-1 systemd[1]: Starting libvirt nodedev daemon read-only socket...
Oct  9 09:47:18 compute-1 systemd[1]: Listening on libvirt nodedev daemon admin socket.
Oct  9 09:47:18 compute-1 systemd[1]: Listening on libvirt nodedev daemon read-only socket.
Oct  9 09:47:18 compute-1 systemd[1]: Starting libvirt nodedev daemon...
Oct  9 09:47:18 compute-1 systemd[1]: Started libvirt nodedev daemon.
Oct  9 09:47:19 compute-1 systemd[1]: Starting SETroubleshoot daemon for processing new SELinux denial logs...
Oct  9 09:47:19 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:47:19 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:47:19 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:47:19.163 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:47:19 compute-1 systemd[1]: Started SETroubleshoot daemon for processing new SELinux denial logs.
Oct  9 09:47:19 compute-1 python3.9[120423]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtproxyd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct  9 09:47:19 compute-1 systemd[1]: Reloading.
Oct  9 09:47:19 compute-1 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  9 09:47:19 compute-1 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  9 09:47:19 compute-1 systemd[1]: Starting libvirt proxy daemon admin socket...
Oct  9 09:47:19 compute-1 systemd[1]: Starting libvirt proxy daemon read-only socket...
Oct  9 09:47:19 compute-1 systemd[1]: Listening on libvirt proxy daemon admin socket.
Oct  9 09:47:19 compute-1 systemd[1]: Listening on libvirt proxy daemon read-only socket.
Oct  9 09:47:19 compute-1 systemd[1]: Starting libvirt proxy daemon...
Oct  9 09:47:19 compute-1 systemd[1]: Started libvirt proxy daemon.
Oct  9 09:47:19 compute-1 systemd[1]: Created slice Slice /system/dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged.
Oct  9 09:47:19 compute-1 systemd[1]: Started dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged@0.service.
Oct  9 09:47:20 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:47:20 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:47:20 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:47:20.049 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:47:20 compute-1 python3.9[120642]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtqemud.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct  9 09:47:20 compute-1 systemd[1]: Reloading.
Oct  9 09:47:20 compute-1 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  9 09:47:20 compute-1 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  9 09:47:20 compute-1 setroubleshoot[120422]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability. For complete SELinux messages run: sealert -l efc5bd63-4429-4b01-9c17-474f112f439f
Oct  9 09:47:20 compute-1 setroubleshoot[120422]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability.#012#012*****  Plugin dac_override (91.4 confidence) suggests   **********************#012#012If you want to help identify if domain needs this access or you have a file with the wrong permissions on your system#012Then turn on full auditing to get path information about the offending file and generate the error again.#012Do#012#012Turn on full auditing#012# auditctl -w /etc/shadow -p w#012Try to recreate AVC. Then execute#012# ausearch -m avc -ts recent#012If you see PATH record check ownership/permissions on file, and fix it,#012otherwise report as a bugzilla.#012#012*****  Plugin catchall (9.59 confidence) suggests   **************************#012#012If you believe that virtlogd should have the dac_read_search capability by default.#012Then you should report this as a bug.#012You can generate a local policy module to allow this access.#012Do#012allow this access for now by executing:#012# ausearch -c 'virtlogd' --raw | audit2allow -M my-virtlogd#012# semodule -X 300 -i my-virtlogd.pp#012
Oct  9 09:47:20 compute-1 setroubleshoot[120422]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability. For complete SELinux messages run: sealert -l efc5bd63-4429-4b01-9c17-474f112f439f
Oct  9 09:47:20 compute-1 setroubleshoot[120422]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability.#012#012*****  Plugin dac_override (91.4 confidence) suggests   **********************#012#012If you want to help identify if domain needs this access or you have a file with the wrong permissions on your system#012Then turn on full auditing to get path information about the offending file and generate the error again.#012Do#012#012Turn on full auditing#012# auditctl -w /etc/shadow -p w#012Try to recreate AVC. Then execute#012# ausearch -m avc -ts recent#012If you see PATH record check ownership/permissions on file, and fix it,#012otherwise report as a bugzilla.#012#012*****  Plugin catchall (9.59 confidence) suggests   **************************#012#012If you believe that virtlogd should have the dac_read_search capability by default.#012Then you should report this as a bug.#012You can generate a local policy module to allow this access.#012Do#012allow this access for now by executing:#012# ausearch -c 'virtlogd' --raw | audit2allow -M my-virtlogd#012# semodule -X 300 -i my-virtlogd.pp#012
Oct  9 09:47:20 compute-1 systemd[1]: Listening on libvirt locking daemon socket.
Oct  9 09:47:20 compute-1 systemd[1]: Starting libvirt QEMU daemon socket...
Oct  9 09:47:20 compute-1 systemd[1]: Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Oct  9 09:47:20 compute-1 systemd[1]: Starting Virtual Machine and Container Registration Service...
Oct  9 09:47:20 compute-1 systemd[1]: Listening on libvirt QEMU daemon socket.
Oct  9 09:47:20 compute-1 systemd[1]: Starting libvirt QEMU daemon admin socket...
Oct  9 09:47:20 compute-1 systemd[1]: Starting libvirt QEMU daemon read-only socket...
Oct  9 09:47:20 compute-1 systemd[1]: Listening on libvirt QEMU daemon admin socket.
Oct  9 09:47:20 compute-1 systemd[1]: Listening on libvirt QEMU daemon read-only socket.
Oct  9 09:47:20 compute-1 systemd[1]: Started Virtual Machine and Container Registration Service.
Oct  9 09:47:20 compute-1 systemd[1]: Starting libvirt QEMU daemon...
Oct  9 09:47:20 compute-1 systemd[1]: Started libvirt QEMU daemon.
Oct  9 09:47:20 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:47:21 compute-1 python3.9[120856]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtsecretd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct  9 09:47:21 compute-1 systemd[1]: Reloading.
Oct  9 09:47:21 compute-1 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  9 09:47:21 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:47:21 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:47:21 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:47:21.165 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:47:21 compute-1 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  9 09:47:21 compute-1 podman[120858]: 2025-10-09 09:47:21.168925831 +0000 UTC m=+0.072513290 container health_status 36bf385830b85e085dc3ff9729118ef141f0f1a0fdc2513f7947ab6ab795421e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_controller, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct  9 09:47:21 compute-1 systemd[1]: Starting libvirt secret daemon socket...
Oct  9 09:47:21 compute-1 systemd[1]: Listening on libvirt secret daemon socket.
Oct  9 09:47:21 compute-1 systemd[1]: Starting libvirt secret daemon admin socket...
Oct  9 09:47:21 compute-1 systemd[1]: Starting libvirt secret daemon read-only socket...
Oct  9 09:47:21 compute-1 systemd[1]: Listening on libvirt secret daemon admin socket.
Oct  9 09:47:21 compute-1 systemd[1]: Listening on libvirt secret daemon read-only socket.
Oct  9 09:47:21 compute-1 systemd[1]: Starting libvirt secret daemon...
Oct  9 09:47:21 compute-1 systemd[1]: Started libvirt secret daemon.
Oct  9 09:47:21 compute-1 python3.9[121090]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/openstack/config/ceph state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:47:22 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:47:22 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:47:22 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:47:22.051 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:47:22 compute-1 python3.9[121242]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/config/ceph'] patterns=['*.conf'] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Oct  9 09:47:22 compute-1 auditd[730]: Audit daemon rotating log files
Oct  9 09:47:22 compute-1 python3.9[121394]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail;#012echo ceph#012awk -F '=' '/fsid/ {print $2}' /var/lib/openstack/config/ceph/ceph.conf | xargs#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  9 09:47:23 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:47:23 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:47:23 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:47:23.167 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:47:23 compute-1 python3.9[121613]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/config/ceph'] patterns=['*.keyring'] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Oct  9 09:47:24 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:47:24 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000011s ======
Oct  9 09:47:24 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:47:24.053 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct  9 09:47:24 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Oct  9 09:47:24 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Oct  9 09:47:24 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Oct  9 09:47:24 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  9 09:47:24 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:47:24 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:47:24 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  9 09:47:24 compute-1 python3.9[121777]: ansible-ansible.legacy.stat Invoked with path=/tmp/secret.xml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  9 09:47:24 compute-1 python3.9[121898]: ansible-ansible.legacy.copy Invoked with dest=/tmp/secret.xml mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1760003243.9599235-3360-786994992123/.source.xml follow=False _original_basename=secret.xml.j2 checksum=c150843fcb80d0d0a9968a12abeb036b918e43ed backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:47:25 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:47:25 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:47:25 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:47:25.169 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:47:25 compute-1 python3.9[122050]: ansible-ansible.legacy.command Invoked with _raw_params=virsh secret-undefine 286f8bf0-da72-5823-9a4e-ac4457d9e609#012virsh secret-define --file /tmp/secret.xml#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  9 09:47:25 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:47:25 compute-1 python3.9[122213]: ansible-ansible.builtin.file Invoked with path=/tmp/secret.xml state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:47:26 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:47:26 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:47:26 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:47:26.055 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:47:27 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:47:27 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:47:27 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:47:27.172 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:47:27 compute-1 python3.9[122702]: ansible-ansible.legacy.copy Invoked with dest=/etc/ceph/ceph.conf group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/config/ceph/ceph.conf backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:47:27 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:47:27 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:47:28 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:47:28 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000011s ======
Oct  9 09:47:28 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:47:28.057 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct  9 09:47:28 compute-1 python3.9[122854]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/libvirt.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  9 09:47:28 compute-1 python3.9[122977]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/libvirt.yaml mode=0640 src=/home/zuul/.ansible/tmp/ansible-tmp-1760003247.7633202-3525-72180035016196/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=5ca83b1310a74c5e48c4c3d4640e1cb8fdac1061 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:47:29 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:47:29 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:47:29 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:47:29.174 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:47:29 compute-1 python3.9[123129]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:47:30 compute-1 python3.9[123282]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  9 09:47:30 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:47:30 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:47:30 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:47:30.059 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:47:30 compute-1 python3.9[123385]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:47:30 compute-1 systemd[1]: dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged@0.service: Deactivated successfully.
Oct  9 09:47:30 compute-1 systemd[1]: setroubleshootd.service: Deactivated successfully.
Oct  9 09:47:30 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:47:30 compute-1 python3.9[123537]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  9 09:47:31 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:47:31 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:47:31 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:47:31.176 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:47:31 compute-1 python3.9[123615]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.z10upoo7 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:47:31 compute-1 python3.9[123768]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  9 09:47:32 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:47:32 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:47:32 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:47:32.061 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:47:32 compute-1 python3.9[123846]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:47:32 compute-1 python3.9[123998]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  9 09:47:33 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:47:33 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:47:33 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:47:33.178 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:47:33 compute-1 python3[124151]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Oct  9 09:47:33 compute-1 python3.9[124304]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  9 09:47:34 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:47:34 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:47:34 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:47:34.063 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:47:34 compute-1 python3.9[124382]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:47:34 compute-1 python3.9[124534]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  9 09:47:35 compute-1 python3.9[124612]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-update-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-update-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:47:35 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:47:35 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:47:35 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:47:35.180 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:47:35 compute-1 python3.9[124765]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  9 09:47:35 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:47:35 compute-1 python3.9[124843]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-flushes.nft _original_basename=flush-chain.j2 recurse=False state=file path=/etc/nftables/edpm-flushes.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:47:36 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:47:36 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:47:36 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:47:36.065 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:47:36 compute-1 python3.9[124995]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  9 09:47:36 compute-1 python3.9[125073]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-chains.nft _original_basename=chains.j2 recurse=False state=file path=/etc/nftables/edpm-chains.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:47:37 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:47:37 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:47:37 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:47:37.183 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:47:37 compute-1 python3.9[125225]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  9 09:47:37 compute-1 python3.9[125351]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760003256.994507-3900-42259446352375/.source.nft follow=False _original_basename=ruleset.j2 checksum=ac3ce8ce2d33fa5fe0a79b0c811c97734ce43fa5 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:47:38 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:47:38 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000011s ======
Oct  9 09:47:38 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:47:38.067 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct  9 09:47:38 compute-1 python3.9[125503]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:47:38 compute-1 python3.9[125655]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  9 09:47:39 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:47:39 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:47:39 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:47:39.184 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:47:39 compute-1 python3.9[125810]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"#012include "/etc/nftables/edpm-chains.nft"#012include "/etc/nftables/edpm-rules.nft"#012include "/etc/nftables/edpm-jumps.nft"#012 path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:47:40 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:47:40 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000011s ======
Oct  9 09:47:40 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:47:40.069 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct  9 09:47:40 compute-1 python3.9[125963]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  9 09:47:40 compute-1 python3.9[126116]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  9 09:47:40 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:47:41 compute-1 podman[126242]: 2025-10-09 09:47:41.030226759 +0000 UTC m=+0.044737486 container health_status 5e1150c93314d5b38815fc8130e03b03cc91771cd442ce724d63cd33a7373f75 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent)
Oct  9 09:47:41 compute-1 python3.9[126284]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  9 09:47:41 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:47:41 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:47:41 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:47:41.186 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:47:41 compute-1 python3.9[126442]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:47:42 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:47:42 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:47:42 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:47:42.071 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:47:42 compute-1 python3.9[126594]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm_libvirt.target follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  9 09:47:42 compute-1 python3.9[126717]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm_libvirt.target mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1760003261.8802435-4116-60463342325944/.source.target follow=False _original_basename=edpm_libvirt.target checksum=13035a1aa0f414c677b14be9a5a363b6623d393c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:47:43 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:47:43 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:47:43 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:47:43.188 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:47:43 compute-1 python3.9[126869]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm_libvirt_guests.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  9 09:47:43 compute-1 python3.9[126993]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm_libvirt_guests.service mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1760003262.874545-4161-32603034343188/.source.service follow=False _original_basename=edpm_libvirt_guests.service checksum=db83430a42fc2ccfd6ed8b56ebf04f3dff9cd0cf backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:47:44 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:47:44 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:47:44 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:47:44.073 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:47:44 compute-1 python3.9[127145]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virt-guest-shutdown.target follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  9 09:47:44 compute-1 python3.9[127268]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virt-guest-shutdown.target mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1760003263.8493674-4206-1931520554495/.source.target follow=False _original_basename=virt-guest-shutdown.target checksum=49ca149619c596cbba877418629d2cf8f7b0f5cf backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:47:45 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:47:45 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000011s ======
Oct  9 09:47:45 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:47:45.190 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct  9 09:47:45 compute-1 python3.9[127420]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm_libvirt.target state=restarted daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  9 09:47:45 compute-1 systemd[1]: Reloading.
Oct  9 09:47:45 compute-1 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  9 09:47:45 compute-1 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  9 09:47:45 compute-1 systemd[1]: Reached target edpm_libvirt.target.
Oct  9 09:47:45 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:47:46 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:47:46 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000011s ======
Oct  9 09:47:46 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:47:46.075 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct  9 09:47:46 compute-1 python3.9[127612]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm_libvirt_guests daemon_reexec=False scope=system no_block=False state=None force=None masked=None
Oct  9 09:47:46 compute-1 systemd[1]: Reloading.
Oct  9 09:47:46 compute-1 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  9 09:47:46 compute-1 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  9 09:47:46 compute-1 systemd[1]: Reloading.
Oct  9 09:47:46 compute-1 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  9 09:47:46 compute-1 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  9 09:47:47 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:47:47 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 09:47:47 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:47:47.193 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 09:47:47 compute-1 systemd[1]: session-36.scope: Deactivated successfully.
Oct  9 09:47:47 compute-1 systemd[1]: session-36.scope: Consumed 2min 24.446s CPU time.
Oct  9 09:47:47 compute-1 systemd-logind[798]: Session 36 logged out. Waiting for processes to exit.
Oct  9 09:47:47 compute-1 systemd-logind[798]: Removed session 36.
Oct  9 09:47:48 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:47:48 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:47:48 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:47:48.078 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:47:49 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:47:49 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 09:47:49 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:47:49.197 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 09:47:50 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:47:50 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:47:50 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:47:50.080 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:47:50 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:47:51 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:47:51 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:47:51 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:47:51.201 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:47:51 compute-1 podman[127736]: 2025-10-09 09:47:51.549201522 +0000 UTC m=+0.060469136 container health_status 36bf385830b85e085dc3ff9729118ef141f0f1a0fdc2513f7947ab6ab795421e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Oct  9 09:47:52 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:47:52 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:47:52 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:47:52.082 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:47:52 compute-1 systemd-logind[798]: New session 37 of user zuul.
Oct  9 09:47:52 compute-1 systemd[1]: Started Session 37 of User zuul.
Oct  9 09:47:53 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:47:53 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:47:53 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:47:53.203 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:47:53 compute-1 python3.9[127912]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  9 09:47:54 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:47:54 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 09:47:54 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:47:54.084 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 09:47:54 compute-1 python3.9[128069]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/iscsi setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct  9 09:47:55 compute-1 python3.9[128221]: ansible-ansible.builtin.file Invoked with path=/etc/target setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct  9 09:47:55 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:47:55 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.002000021s ======
Oct  9 09:47:55 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:47:55.204 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000021s
Oct  9 09:47:55 compute-1 python3.9[128374]: ansible-ansible.builtin.file Invoked with path=/var/lib/iscsi setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct  9 09:47:55 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:47:56 compute-1 python3.9[128526]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/config-data selevel=s0 setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Oct  9 09:47:56 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:47:56 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000011s ======
Oct  9 09:47:56 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:47:56.086 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct  9 09:47:56 compute-1 python3.9[128678]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/config-data/ansible-generated/iscsid setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct  9 09:47:57 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:47:57 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000011s ======
Oct  9 09:47:57 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:47:57.208 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct  9 09:47:57 compute-1 python3.9[128831]: ansible-ansible.builtin.stat Invoked with path=/lib/systemd/system/iscsid.socket follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  9 09:47:58 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:47:58 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:47:58 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:47:58.088 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:47:58 compute-1 python3.9[128985]: ansible-ansible.builtin.systemd Invoked with enabled=False name=iscsid.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  9 09:47:58 compute-1 systemd[1]: Reloading.
Oct  9 09:47:58 compute-1 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  9 09:47:58 compute-1 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  9 09:47:59 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:47:59 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:47:59 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:47:59.211 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:47:59 compute-1 python3.9[129174]: ansible-ansible.builtin.service_facts Invoked
Oct  9 09:47:59 compute-1 network[129192]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Oct  9 09:47:59 compute-1 network[129193]: 'network-scripts' will be removed from distribution in near future.
Oct  9 09:47:59 compute-1 network[129194]: It is advised to switch to 'NetworkManager' instead for network management.
Oct  9 09:48:00 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:48:00 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:48:00 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:48:00.090 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:48:00 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:48:01 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:48:01 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:48:01 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:48:01.213 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:48:02 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:48:02 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000011s ======
Oct  9 09:48:02 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:48:02.092 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct  9 09:48:02 compute-1 python3.9[129469]: ansible-ansible.builtin.systemd Invoked with enabled=False name=iscsi-starter.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  9 09:48:02 compute-1 systemd[1]: Reloading.
Oct  9 09:48:02 compute-1 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  9 09:48:02 compute-1 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  9 09:48:03 compute-1 python3.9[129656]: ansible-ansible.builtin.stat Invoked with path=/etc/iscsi/.initiator_reset follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  9 09:48:03 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:48:03 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:48:03 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:48:03.216 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:48:03 compute-1 python3.9[129809]: ansible-containers.podman.podman_container Invoked with command=/usr/sbin/iscsi-iname detach=False image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f name=iscsid_config rm=True tty=True executable=podman state=started debug=False force_restart=False force_delete=True generate_systemd={} image_strict=False recreate=False annotation=None arch=None attach=None authfile=None blkio_weight=None blkio_weight_device=None cap_add=None cap_drop=None cgroup_conf=None cgroup_parent=None cgroupns=None cgroups=None chrootdirs=None cidfile=None cmd_args=None conmon_pidfile=None cpu_period=None cpu_quota=None cpu_rt_period=None cpu_rt_runtime=None cpu_shares=None cpus=None cpuset_cpus=None cpuset_mems=None decryption_key=None delete_depend=None delete_time=None delete_volumes=None detach_keys=None device=None device_cgroup_rule=None device_read_bps=None device_read_iops=None device_write_bps=None device_write_iops=None dns=None dns_option=None dns_search=None entrypoint=None env=None env_file=None env_host=None env_merge=None etc_hosts=None expose=None gidmap=None gpus=None group_add=None group_entry=None healthcheck=None healthcheck_interval=None healthcheck_retries=None healthcheck_start_period=None health_startup_cmd=None health_startup_interval=None health_startup_retries=None health_startup_success=None health_startup_timeout=None healthcheck_timeout=None healthcheck_failure_action=None hooks_dir=None hostname=None hostuser=None http_proxy=None image_volume=None init=None init_ctr=None init_path=None interactive=None ip=None ip6=None ipc=None kernel_memory=None label=None label_file=None log_driver=None log_level=None log_opt=None mac_address=None memory=None memory_reservation=None memory_swap=None memory_swappiness=None mount=None network=None network_aliases=None no_healthcheck=None no_hosts=None oom_kill_disable=None oom_score_adj=None os=None passwd=None passwd_entry=None personality=None pid=None pid_file=None pids_limit=None platform=None pod=None pod_id_file=None preserve_fd=None preserve_fds=None privileged=None publish=None publish_all=None pull=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None rdt_class=None read_only=None read_only_tmpfs=None requires=None restart_policy=None restart_time=None retry=None retry_delay=None rmi=None rootfs=None seccomp_policy=None secrets=NOT_LOGGING_PARAMETER sdnotify=None security_opt=None shm_size=None shm_size_systemd=None sig_proxy=None stop_signal=None stop_timeout=None stop_time=None subgidname=None subuidname=None sysctl=None systemd=None timeout=None timezone=None tls_verify=None tmpfs=None uidmap=None ulimit=None umask=None unsetenv=None unsetenv_all=None user=None userns=None uts=None variant=None volume=None volumes_from=None workdir=None
Oct  9 09:48:04 compute-1 rsyslogd[1241]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct  9 09:48:04 compute-1 rsyslogd[1241]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct  9 09:48:04 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:48:04 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:48:04 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:48:04.094 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:48:05 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:48:05 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:48:05 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:48:05.218 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:48:05 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:48:06 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:48:06 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:48:06 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:48:06.096 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:48:06 compute-1 podman[129819]: 2025-10-09 09:48:06.48051894 +0000 UTC m=+2.523842031 image pull 74877095db294c27659f24e7f86074178a6f28eee68561c30e3ce4d18519e09c quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f
Oct  9 09:48:06 compute-1 podman[129867]: 2025-10-09 09:48:06.574704994 +0000 UTC m=+0.028165151 container create 314ccc89f51d0767d16715cb1d956d66b0eee839c3ef8a4f10c5d2bcf5dd2159 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid_config, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct  9 09:48:06 compute-1 NetworkManager[982]: <info>  [1760003286.5997] manager: (podman0): new Bridge device (/org/freedesktop/NetworkManager/Devices/23)
Oct  9 09:48:06 compute-1 kernel: podman0: port 1(veth0) entered blocking state
Oct  9 09:48:06 compute-1 kernel: podman0: port 1(veth0) entered disabled state
Oct  9 09:48:06 compute-1 kernel: veth0: entered allmulticast mode
Oct  9 09:48:06 compute-1 kernel: veth0: entered promiscuous mode
Oct  9 09:48:06 compute-1 kernel: podman0: port 1(veth0) entered blocking state
Oct  9 09:48:06 compute-1 kernel: podman0: port 1(veth0) entered forwarding state
Oct  9 09:48:06 compute-1 NetworkManager[982]: <info>  [1760003286.6108] manager: (veth0): new Veth device (/org/freedesktop/NetworkManager/Devices/24)
Oct  9 09:48:06 compute-1 NetworkManager[982]: <info>  [1760003286.6124] device (veth0): carrier: link connected
Oct  9 09:48:06 compute-1 NetworkManager[982]: <info>  [1760003286.6125] device (podman0): carrier: link connected
Oct  9 09:48:06 compute-1 systemd-udevd[129897]: Network interface NamePolicy= disabled on kernel command line.
Oct  9 09:48:06 compute-1 systemd-udevd[129894]: Network interface NamePolicy= disabled on kernel command line.
Oct  9 09:48:06 compute-1 kernel: Warning: Deprecated Driver is detected: nft_compat will not be maintained in a future major release and may be disabled
Oct  9 09:48:06 compute-1 kernel: Warning: Deprecated Driver is detected: nft_compat_module_init will not be maintained in a future major release and may be disabled
Oct  9 09:48:06 compute-1 NetworkManager[982]: <info>  [1760003286.6420] device (podman0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  9 09:48:06 compute-1 NetworkManager[982]: <info>  [1760003286.6425] device (podman0): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Oct  9 09:48:06 compute-1 NetworkManager[982]: <info>  [1760003286.6429] device (podman0): Activation: starting connection 'podman0' (067949ec-2c39-4a84-9a73-11234d5a389d)
Oct  9 09:48:06 compute-1 NetworkManager[982]: <info>  [1760003286.6430] device (podman0): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Oct  9 09:48:06 compute-1 NetworkManager[982]: <info>  [1760003286.6431] device (podman0): state change: prepare -> config (reason 'none', managed-type: 'external')
Oct  9 09:48:06 compute-1 NetworkManager[982]: <info>  [1760003286.6432] device (podman0): state change: config -> ip-config (reason 'none', managed-type: 'external')
Oct  9 09:48:06 compute-1 NetworkManager[982]: <info>  [1760003286.6434] device (podman0): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Oct  9 09:48:06 compute-1 systemd[1]: Starting Network Manager Script Dispatcher Service...
Oct  9 09:48:06 compute-1 podman[129867]: 2025-10-09 09:48:06.562215549 +0000 UTC m=+0.015675706 image pull 74877095db294c27659f24e7f86074178a6f28eee68561c30e3ce4d18519e09c quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f
Oct  9 09:48:06 compute-1 systemd[1]: Started Network Manager Script Dispatcher Service.
Oct  9 09:48:06 compute-1 NetworkManager[982]: <info>  [1760003286.6635] device (podman0): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Oct  9 09:48:06 compute-1 NetworkManager[982]: <info>  [1760003286.6638] device (podman0): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Oct  9 09:48:06 compute-1 NetworkManager[982]: <info>  [1760003286.6643] device (podman0): Activation: successful, device activated.
Oct  9 09:48:06 compute-1 systemd[1]: iscsi.service: Unit cannot be reloaded because it is inactive.
Oct  9 09:48:06 compute-1 systemd[1]: Started libpod-conmon-314ccc89f51d0767d16715cb1d956d66b0eee839c3ef8a4f10c5d2bcf5dd2159.scope.
Oct  9 09:48:06 compute-1 systemd[1]: Started libcrun container.
Oct  9 09:48:06 compute-1 podman[129867]: 2025-10-09 09:48:06.833009994 +0000 UTC m=+0.286470172 container init 314ccc89f51d0767d16715cb1d956d66b0eee839c3ef8a4f10c5d2bcf5dd2159 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid_config, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct  9 09:48:06 compute-1 podman[129867]: 2025-10-09 09:48:06.838148786 +0000 UTC m=+0.291608933 container start 314ccc89f51d0767d16715cb1d956d66b0eee839c3ef8a4f10c5d2bcf5dd2159 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid_config, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct  9 09:48:06 compute-1 podman[129867]: 2025-10-09 09:48:06.839504122 +0000 UTC m=+0.292964279 container attach 314ccc89f51d0767d16715cb1d956d66b0eee839c3ef8a4f10c5d2bcf5dd2159 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid_config, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct  9 09:48:06 compute-1 iscsid_config[130018]: iqn.1994-05.com.redhat:ef5dd0d75ccc#015
Oct  9 09:48:06 compute-1 systemd[1]: libpod-314ccc89f51d0767d16715cb1d956d66b0eee839c3ef8a4f10c5d2bcf5dd2159.scope: Deactivated successfully.
Oct  9 09:48:06 compute-1 podman[129867]: 2025-10-09 09:48:06.841276285 +0000 UTC m=+0.294736443 container died 314ccc89f51d0767d16715cb1d956d66b0eee839c3ef8a4f10c5d2bcf5dd2159 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid_config, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  9 09:48:06 compute-1 kernel: podman0: port 1(veth0) entered disabled state
Oct  9 09:48:06 compute-1 kernel: veth0 (unregistering): left allmulticast mode
Oct  9 09:48:06 compute-1 kernel: veth0 (unregistering): left promiscuous mode
Oct  9 09:48:06 compute-1 kernel: podman0: port 1(veth0) entered disabled state
Oct  9 09:48:06 compute-1 NetworkManager[982]: <info>  [1760003286.8778] device (podman0): state change: activated -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  9 09:48:07 compute-1 systemd[1]: run-netns-netns\x2dfb309e41\x2dcd9d\x2de926\x2d2704\x2d519c9dc048d5.mount: Deactivated successfully.
Oct  9 09:48:07 compute-1 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-314ccc89f51d0767d16715cb1d956d66b0eee839c3ef8a4f10c5d2bcf5dd2159-userdata-shm.mount: Deactivated successfully.
Oct  9 09:48:07 compute-1 podman[129867]: 2025-10-09 09:48:07.1361214 +0000 UTC m=+0.589581557 container remove 314ccc89f51d0767d16715cb1d956d66b0eee839c3ef8a4f10c5d2bcf5dd2159 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid_config, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Oct  9 09:48:07 compute-1 python3.9[129809]: ansible-containers.podman.podman_container PODMAN-CONTAINER-DEBUG: podman run --name iscsid_config --detach=False --rm --tty=True quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f /usr/sbin/iscsi-iname
Oct  9 09:48:07 compute-1 systemd[1]: libpod-conmon-314ccc89f51d0767d16715cb1d956d66b0eee839c3ef8a4f10c5d2bcf5dd2159.scope: Deactivated successfully.
Oct  9 09:48:07 compute-1 python3.9[129809]: ansible-containers.podman.podman_container PODMAN-CONTAINER-DEBUG: Error generating systemd: #012DEPRECATED command:#012It is recommended to use Quadlets for running containers and pods under systemd.#012#012Please refer to podman-systemd.unit(5) for details.#012Error: iscsid_config does not refer to a container or pod: no pod with name or ID iscsid_config found: no such pod: no container with name or ID "iscsid_config" found: no such container
Oct  9 09:48:07 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:48:07 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:48:07 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:48:07.220 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:48:07 compute-1 systemd[1]: var-lib-containers-storage-overlay-d326a3a5531b39b0223e7ba13637b2c394d3ee4c081ebe0095898470adf76f4d-merged.mount: Deactivated successfully.
Oct  9 09:48:07 compute-1 python3.9[130254]: ansible-ansible.legacy.stat Invoked with path=/etc/iscsi/initiatorname.iscsi follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  9 09:48:08 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:48:08 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:48:08 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:48:08.098 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:48:08 compute-1 python3.9[130377]: ansible-ansible.legacy.copy Invoked with dest=/etc/iscsi/initiatorname.iscsi mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1760003287.3659604-318-280030819220850/.source.iscsi _original_basename=.ab755stf follow=False checksum=e75d4b19d897bf62fe4bce81ee6c77032a8ac0d0 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:48:08 compute-1 python3.9[130529]: ansible-ansible.builtin.file Invoked with mode=0600 path=/etc/iscsi/.initiator_reset state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:48:09 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:48:09 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:48:09 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:48:09.222 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:48:09 compute-1 python3.9[130680]: ansible-ansible.builtin.stat Invoked with path=/etc/iscsi/iscsid.conf follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  9 09:48:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:48:10.027 71059 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  9 09:48:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:48:10.027 71059 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  9 09:48:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:48:10.027 71059 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  9 09:48:10 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:48:10 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:48:10 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:48:10.100 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:48:10 compute-1 python3.9[130859]: ansible-ansible.builtin.lineinfile Invoked with insertafter=^#node.session.auth.chap.algs line=node.session.auth.chap_algs = SHA3-256,SHA256,SHA1,MD5 path=/etc/iscsi/iscsid.conf regexp=^node.session.auth.chap_algs state=present backrefs=False create=False backup=False firstmatch=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:48:10 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:48:10 compute-1 python3.9[131011]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct  9 09:48:11 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:48:11 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:48:11 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:48:11.225 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:48:11 compute-1 podman[131135]: 2025-10-09 09:48:11.232299562 +0000 UTC m=+0.039091909 container health_status 5e1150c93314d5b38815fc8130e03b03cc91771cd442ce724d63cd33a7373f75 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible)
Oct  9 09:48:11 compute-1 python3.9[131179]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  9 09:48:11 compute-1 python3.9[131258]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  9 09:48:12 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:48:12 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:48:12 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:48:12.102 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:48:12 compute-1 python3.9[131410]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  9 09:48:12 compute-1 python3.9[131488]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  9 09:48:13 compute-1 python3.9[131640]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:48:13 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:48:13 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000011s ======
Oct  9 09:48:13 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:48:13.227 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct  9 09:48:13 compute-1 python3.9[131793]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  9 09:48:14 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:48:14 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:48:14 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:48:14.104 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:48:14 compute-1 python3.9[131871]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:48:14 compute-1 python3.9[132023]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  9 09:48:14 compute-1 python3.9[132101]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:48:15 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:48:15 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000011s ======
Oct  9 09:48:15 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:48:15.230 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct  9 09:48:15 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:48:15 compute-1 python3.9[132254]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  9 09:48:15 compute-1 systemd[1]: Reloading.
Oct  9 09:48:15 compute-1 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  9 09:48:15 compute-1 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  9 09:48:16 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:48:16 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:48:16 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:48:16.106 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:48:16 compute-1 python3.9[132443]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  9 09:48:16 compute-1 python3.9[132521]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:48:16 compute-1 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Oct  9 09:48:17 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:48:17 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:48:17 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:48:17.233 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:48:17 compute-1 python3.9[132673]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  9 09:48:17 compute-1 python3.9[132752]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:48:18 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:48:18 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:48:18 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:48:18.108 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:48:18 compute-1 python3.9[132904]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  9 09:48:18 compute-1 systemd[1]: Reloading.
Oct  9 09:48:18 compute-1 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  9 09:48:18 compute-1 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  9 09:48:18 compute-1 systemd[1]: Starting Create netns directory...
Oct  9 09:48:18 compute-1 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Oct  9 09:48:18 compute-1 systemd[1]: netns-placeholder.service: Deactivated successfully.
Oct  9 09:48:18 compute-1 systemd[1]: Finished Create netns directory.
Oct  9 09:48:19 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:48:19 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000011s ======
Oct  9 09:48:19 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:48:19.236 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct  9 09:48:19 compute-1 python3.9[133097]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  9 09:48:19 compute-1 python3.9[133250]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/iscsid/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  9 09:48:20 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:48:20 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:48:20 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:48:20.110 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:48:20 compute-1 python3.9[133373]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/iscsid/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1760003299.5134838-780-276881531271549/.source _original_basename=healthcheck follow=False checksum=2e1237e7fe015c809b173c52e24cfb87132f4344 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Oct  9 09:48:20 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:48:21 compute-1 python3.9[133525]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct  9 09:48:21 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:48:21 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:48:21 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:48:21.239 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:48:21 compute-1 python3.9[133678]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/iscsid.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  9 09:48:21 compute-1 podman[133773]: 2025-10-09 09:48:21.821129742 +0000 UTC m=+0.052841299 container health_status 36bf385830b85e085dc3ff9729118ef141f0f1a0fdc2513f7947ab6ab795421e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, config_id=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller)
Oct  9 09:48:21 compute-1 python3.9[133821]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/iscsid.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1760003301.226468-855-141665857840063/.source.json _original_basename=.p6769hyl follow=False checksum=80e4f97460718c7e5c66b21ef8b846eba0e0dbc8 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:48:22 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:48:22 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:48:22 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:48:22.113 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:48:22 compute-1 python3.9[133977]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/iscsid state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:48:23 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:48:23 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:48:23 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:48:23.241 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:48:24 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:48:24 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000012s ======
Oct  9 09:48:24 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:48:24.115 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000012s
Oct  9 09:48:24 compute-1 python3.9[134405]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/iscsid config_pattern=*.json debug=False
Oct  9 09:48:24 compute-1 python3.9[134557]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Oct  9 09:48:25 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:48:25 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000012s ======
Oct  9 09:48:25 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:48:25.243 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000012s
Oct  9 09:48:25 compute-1 python3.9[134710]: ansible-containers.podman.podman_container_info Invoked with executable=podman name=None
Oct  9 09:48:25 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:48:26 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:48:26 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:48:26 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:48:26.117 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:48:27 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:48:27 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:48:27 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:48:27.247 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:48:27 compute-1 python3[134932]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/iscsid config_id=iscsid config_overrides={} config_patterns=*.json log_base_path=/var/log/containers/stdouts debug=False
Oct  9 09:48:27 compute-1 podman[135011]: 2025-10-09 09:48:27.528960513 +0000 UTC m=+0.042501624 container exec cafaadfcff4fbda41fcfa94e93876b61b17048007232ab1b066948d6a6dac74a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-crash-compute-1, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  9 09:48:27 compute-1 podman[135028]: 2025-10-09 09:48:27.536960701 +0000 UTC m=+0.029074082 container create dd7a5da700b8ba72ceba21d69aecf00384a339e317089fce3f8212a1183599aa (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, container_name=iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, org.label-schema.build-date=20251001, config_id=iscsid, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Oct  9 09:48:27 compute-1 podman[135028]: 2025-10-09 09:48:27.523699754 +0000 UTC m=+0.015813155 image pull 74877095db294c27659f24e7f86074178a6f28eee68561c30e3ce4d18519e09c quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f
Oct  9 09:48:27 compute-1 python3[134932]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name iscsid --conmon-pidfile /run/iscsid.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --healthcheck-command /openstack/healthcheck --label config_id=iscsid --label container_name=iscsid --label managed_by=edpm_ansible --label config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro --volume /dev:/dev --volume /run:/run --volume /sys:/sys --volume /lib/modules:/lib/modules:ro --volume /etc/iscsi:/etc/iscsi:z --volume /etc/target:/etc/target:z --volume /var/lib/iscsi:/var/lib/iscsi:z --volume /var/lib/openstack/healthchecks/iscsid:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f
Oct  9 09:48:27 compute-1 podman[135066]: 2025-10-09 09:48:27.661763227 +0000 UTC m=+0.047271415 container exec_died cafaadfcff4fbda41fcfa94e93876b61b17048007232ab1b066948d6a6dac74a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-crash-compute-1, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct  9 09:48:27 compute-1 podman[135011]: 2025-10-09 09:48:27.665806558 +0000 UTC m=+0.179347670 container exec_died cafaadfcff4fbda41fcfa94e93876b61b17048007232ab1b066948d6a6dac74a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-crash-compute-1, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  9 09:48:27 compute-1 podman[135241]: 2025-10-09 09:48:27.96732198 +0000 UTC m=+0.036361465 container exec 3deba343924d32000597864cdf8400e1a37662cac2bb278add92b15d21a42d33 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-node-exporter-compute-1, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct  9 09:48:28 compute-1 podman[135286]: 2025-10-09 09:48:28.028825334 +0000 UTC m=+0.049003264 container exec_died 3deba343924d32000597864cdf8400e1a37662cac2bb278add92b15d21a42d33 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-node-exporter-compute-1, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct  9 09:48:28 compute-1 podman[135241]: 2025-10-09 09:48:28.031179727 +0000 UTC m=+0.100219202 container exec_died 3deba343924d32000597864cdf8400e1a37662cac2bb278add92b15d21a42d33 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-node-exporter-compute-1, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct  9 09:48:28 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:48:28 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000012s ======
Oct  9 09:48:28 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:48:28.119 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000012s
Oct  9 09:48:28 compute-1 python3.9[135323]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  9 09:48:28 compute-1 podman[135432]: 2025-10-09 09:48:28.365980288 +0000 UTC m=+0.034568462 container exec 67720561c1a99e6e32e330a552a62a5ac4d38a51a1e32f841b990e11458d61b3 (image=quay.io/ceph/haproxy:2.3, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-haproxy-nfs-cephfs-compute-1-oqhtjo)
Oct  9 09:48:28 compute-1 podman[135432]: 2025-10-09 09:48:28.376875349 +0000 UTC m=+0.045463524 container exec_died 67720561c1a99e6e32e330a552a62a5ac4d38a51a1e32f841b990e11458d61b3 (image=quay.io/ceph/haproxy:2.3, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-haproxy-nfs-cephfs-compute-1-oqhtjo)
Oct  9 09:48:28 compute-1 podman[135483]: 2025-10-09 09:48:28.511039084 +0000 UTC m=+0.034715270 container exec 85d12475a64c9e2afed72ac4ba3c0ca1148840282b925163459f0ea258aea5a3 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-1-zabdum, summary=Provides keepalived on RHEL 9 for Ceph., vcs-type=git, vendor=Red Hat, Inc., io.buildah.version=1.28.2, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, io.k8s.display-name=Keepalived on RHEL 9, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=keepalived-container, name=keepalived, description=keepalived for Ceph, distribution-scope=public, architecture=x86_64, build-date=2023-02-22T09:23:20, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.openshift.tags=Ceph keepalived, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1793, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, version=2.2.4)
Oct  9 09:48:28 compute-1 podman[135483]: 2025-10-09 09:48:28.520891959 +0000 UTC m=+0.044568124 container exec_died 85d12475a64c9e2afed72ac4ba3c0ca1148840282b925163459f0ea258aea5a3 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-1-zabdum, io.openshift.tags=Ceph keepalived, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=keepalived-container, name=keepalived, summary=Provides keepalived on RHEL 9 for Ceph., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, vcs-type=git, description=keepalived for Ceph, io.buildah.version=1.28.2, architecture=x86_64, vendor=Red Hat, Inc., io.k8s.display-name=Keepalived on RHEL 9, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, version=2.2.4, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2023-02-22T09:23:20, release=1793)
Oct  9 09:48:28 compute-1 python3.9[135688]: ansible-file Invoked with path=/etc/systemd/system/edpm_iscsid.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:48:29 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:48:29 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:48:29 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:48:29.249 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:48:29 compute-1 python3.9[135793]: ansible-stat Invoked with path=/etc/systemd/system/edpm_iscsid_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  9 09:48:29 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:48:29 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:48:29 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:48:29 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:48:29 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  9 09:48:29 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:48:29 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:48:29 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  9 09:48:29 compute-1 python3.9[135945]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1760003309.3536878-1119-277808438221150/source dest=/etc/systemd/system/edpm_iscsid.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:48:30 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:48:30 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:48:30 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:48:30.121 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:48:30 compute-1 python3.9[136021]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Oct  9 09:48:30 compute-1 systemd[1]: Reloading.
Oct  9 09:48:30 compute-1 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  9 09:48:30 compute-1 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  9 09:48:30 compute-1 ceph-mon[9795]: Health check failed: 1 failed cephadm daemon(s) (CEPHADM_FAILED_DAEMON)
Oct  9 09:48:30 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:48:30 compute-1 python3.9[136157]: ansible-systemd Invoked with state=restarted name=edpm_iscsid.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  9 09:48:30 compute-1 systemd[1]: Reloading.
Oct  9 09:48:31 compute-1 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  9 09:48:31 compute-1 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  9 09:48:31 compute-1 systemd[1]: Starting iscsid container...
Oct  9 09:48:31 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:48:31 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:48:31 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:48:31.252 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:48:31 compute-1 systemd[1]: Started libcrun container.
Oct  9 09:48:31 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/171c25a90a02b6960144863e473dc8c4aba64cc99d2a8c52edc8a42b57737968/merged/etc/iscsi supports timestamps until 2038 (0x7fffffff)
Oct  9 09:48:31 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/171c25a90a02b6960144863e473dc8c4aba64cc99d2a8c52edc8a42b57737968/merged/etc/target supports timestamps until 2038 (0x7fffffff)
Oct  9 09:48:31 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/171c25a90a02b6960144863e473dc8c4aba64cc99d2a8c52edc8a42b57737968/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Oct  9 09:48:31 compute-1 systemd[1]: Started /usr/bin/podman healthcheck run dd7a5da700b8ba72ceba21d69aecf00384a339e317089fce3f8212a1183599aa.
Oct  9 09:48:31 compute-1 podman[136197]: 2025-10-09 09:48:31.336118059 +0000 UTC m=+0.078977003 container init dd7a5da700b8ba72ceba21d69aecf00384a339e317089fce3f8212a1183599aa (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, config_id=iscsid, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, container_name=iscsid, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true)
Oct  9 09:48:31 compute-1 iscsid[136209]: + sudo -E kolla_set_configs
Oct  9 09:48:31 compute-1 podman[136197]: 2025-10-09 09:48:31.358595409 +0000 UTC m=+0.101454334 container start dd7a5da700b8ba72ceba21d69aecf00384a339e317089fce3f8212a1183599aa (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, org.label-schema.build-date=20251001, tcib_managed=true, container_name=iscsid, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_id=iscsid, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  9 09:48:31 compute-1 podman[136197]: iscsid
Oct  9 09:48:31 compute-1 systemd[1]: Created slice User Slice of UID 0.
Oct  9 09:48:31 compute-1 systemd[1]: Starting User Runtime Directory /run/user/0...
Oct  9 09:48:31 compute-1 systemd[1]: Started iscsid container.
Oct  9 09:48:31 compute-1 systemd[1]: Finished User Runtime Directory /run/user/0.
Oct  9 09:48:31 compute-1 systemd[1]: Starting User Manager for UID 0...
Oct  9 09:48:31 compute-1 podman[136216]: 2025-10-09 09:48:31.437592559 +0000 UTC m=+0.071926606 container health_status dd7a5da700b8ba72ceba21d69aecf00384a339e317089fce3f8212a1183599aa (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=starting, health_failing_streak=1, health_log=, org.label-schema.build-date=20251001, container_name=iscsid, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Oct  9 09:48:31 compute-1 systemd[1]: dd7a5da700b8ba72ceba21d69aecf00384a339e317089fce3f8212a1183599aa-5d5396f2cc627b69.service: Main process exited, code=exited, status=1/FAILURE
Oct  9 09:48:31 compute-1 systemd[1]: dd7a5da700b8ba72ceba21d69aecf00384a339e317089fce3f8212a1183599aa-5d5396f2cc627b69.service: Failed with result 'exit-code'.
Oct  9 09:48:31 compute-1 systemd[136228]: Queued start job for default target Main User Target.
Oct  9 09:48:31 compute-1 systemd[136228]: Created slice User Application Slice.
Oct  9 09:48:31 compute-1 systemd[136228]: Mark boot as successful after the user session has run 2 minutes was skipped because of an unmet condition check (ConditionUser=!@system).
Oct  9 09:48:31 compute-1 systemd[136228]: Started Daily Cleanup of User's Temporary Directories.
Oct  9 09:48:31 compute-1 systemd[136228]: Reached target Paths.
Oct  9 09:48:31 compute-1 systemd[136228]: Reached target Timers.
Oct  9 09:48:31 compute-1 systemd[136228]: Starting D-Bus User Message Bus Socket...
Oct  9 09:48:31 compute-1 systemd[136228]: Starting Create User's Volatile Files and Directories...
Oct  9 09:48:31 compute-1 systemd[136228]: Finished Create User's Volatile Files and Directories.
Oct  9 09:48:31 compute-1 systemd[136228]: Listening on D-Bus User Message Bus Socket.
Oct  9 09:48:31 compute-1 systemd[136228]: Reached target Sockets.
Oct  9 09:48:31 compute-1 systemd[136228]: Reached target Basic System.
Oct  9 09:48:31 compute-1 systemd[1]: Started User Manager for UID 0.
Oct  9 09:48:31 compute-1 systemd[136228]: Reached target Main User Target.
Oct  9 09:48:31 compute-1 systemd[136228]: Startup finished in 95ms.
Oct  9 09:48:31 compute-1 systemd[1]: Started Session c3 of User root.
Oct  9 09:48:31 compute-1 iscsid[136209]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Oct  9 09:48:31 compute-1 iscsid[136209]: INFO:__main__:Validating config file
Oct  9 09:48:31 compute-1 iscsid[136209]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Oct  9 09:48:31 compute-1 iscsid[136209]: INFO:__main__:Writing out command to execute
Oct  9 09:48:31 compute-1 systemd[1]: session-c3.scope: Deactivated successfully.
Oct  9 09:48:31 compute-1 iscsid[136209]: ++ cat /run_command
Oct  9 09:48:31 compute-1 iscsid[136209]: + CMD='/usr/sbin/iscsid -f'
Oct  9 09:48:31 compute-1 iscsid[136209]: + ARGS=
Oct  9 09:48:31 compute-1 iscsid[136209]: + sudo kolla_copy_cacerts
Oct  9 09:48:31 compute-1 systemd[1]: Started Session c4 of User root.
Oct  9 09:48:31 compute-1 iscsid[136209]: + [[ ! -n '' ]]
Oct  9 09:48:31 compute-1 iscsid[136209]: + . kolla_extend_start
Oct  9 09:48:31 compute-1 iscsid[136209]: ++ [[ ! -f /etc/iscsi/initiatorname.iscsi ]]
Oct  9 09:48:31 compute-1 iscsid[136209]: + echo 'Running command: '\''/usr/sbin/iscsid -f'\'''
Oct  9 09:48:31 compute-1 iscsid[136209]: Running command: '/usr/sbin/iscsid -f'
Oct  9 09:48:31 compute-1 iscsid[136209]: + umask 0022
Oct  9 09:48:31 compute-1 iscsid[136209]: + exec /usr/sbin/iscsid -f
Oct  9 09:48:31 compute-1 systemd[1]: session-c4.scope: Deactivated successfully.
Oct  9 09:48:31 compute-1 kernel: Loading iSCSI transport class v2.0-870.
Oct  9 09:48:31 compute-1 python3.9[136411]: ansible-ansible.builtin.stat Invoked with path=/etc/iscsi/.iscsid_restart_required follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  9 09:48:32 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:48:32 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:48:32 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:48:32.123 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:48:32 compute-1 python3.9[136563]: ansible-ansible.builtin.file Invoked with path=/etc/iscsi/.iscsid_restart_required state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:48:33 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:48:33 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000013s ======
Oct  9 09:48:33 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:48:33.254 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000013s
Oct  9 09:48:33 compute-1 python3.9[136740]: ansible-ansible.builtin.service_facts Invoked
Oct  9 09:48:33 compute-1 network[136757]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Oct  9 09:48:33 compute-1 network[136758]: 'network-scripts' will be removed from distribution in near future.
Oct  9 09:48:33 compute-1 network[136759]: It is advised to switch to 'NetworkManager' instead for network management.
Oct  9 09:48:33 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:48:33 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:48:34 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:48:34 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000012s ======
Oct  9 09:48:34 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:48:34.125 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000012s
Oct  9 09:48:34 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:48:35 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:48:35 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:48:35 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:48:35.257 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:48:35 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:48:36 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:48:36 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000012s ======
Oct  9 09:48:36 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:48:36.127 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000012s
Oct  9 09:48:37 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:48:37 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:48:37 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:48:37.260 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:48:38 compute-1 python3.9[137037]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/modules-load.d selevel=s0 setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Oct  9 09:48:38 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:48:38 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:48:38 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:48:38.130 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:48:38 compute-1 python3.9[137189]: ansible-community.general.modprobe Invoked with name=dm-multipath state=present params= persistent=disabled
Oct  9 09:48:39 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:48:39 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:48:39 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:48:39.262 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:48:39 compute-1 python3.9[137345]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/dm-multipath.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  9 09:48:39 compute-1 python3.9[137469]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/dm-multipath.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1760003319.0870924-1341-34865914720122/.source.conf follow=False _original_basename=module-load.conf.j2 checksum=065061c60917e4f67cecc70d12ce55e42f9d0b3f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:48:40 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:48:40 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:48:40 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:48:40.131 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:48:40 compute-1 python3.9[137621]: ansible-ansible.builtin.lineinfile Invoked with create=True dest=/etc/modules line=dm-multipath  mode=0644 state=present path=/etc/modules backrefs=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:48:40 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:48:41 compute-1 python3.9[137773]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct  9 09:48:41 compute-1 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Oct  9 09:48:41 compute-1 systemd[1]: Stopped Load Kernel Modules.
Oct  9 09:48:41 compute-1 systemd[1]: Stopping Load Kernel Modules...
Oct  9 09:48:41 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:48:41 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:48:41 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:48:41.264 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:48:41 compute-1 systemd[1]: Starting Load Kernel Modules...
Oct  9 09:48:41 compute-1 systemd[1]: Finished Load Kernel Modules.
Oct  9 09:48:41 compute-1 podman[137775]: 2025-10-09 09:48:41.302336338 +0000 UTC m=+0.042253576 container health_status 5e1150c93314d5b38815fc8130e03b03cc91771cd442ce724d63cd33a7373f75 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=ovn_metadata_agent, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Oct  9 09:48:41 compute-1 systemd[1]: Stopping User Manager for UID 0...
Oct  9 09:48:41 compute-1 systemd[136228]: Activating special unit Exit the Session...
Oct  9 09:48:41 compute-1 systemd[136228]: Stopped target Main User Target.
Oct  9 09:48:41 compute-1 systemd[136228]: Stopped target Basic System.
Oct  9 09:48:41 compute-1 systemd[136228]: Stopped target Paths.
Oct  9 09:48:41 compute-1 systemd[136228]: Stopped target Sockets.
Oct  9 09:48:41 compute-1 systemd[136228]: Stopped target Timers.
Oct  9 09:48:41 compute-1 systemd[136228]: Stopped Daily Cleanup of User's Temporary Directories.
Oct  9 09:48:41 compute-1 systemd[136228]: Closed D-Bus User Message Bus Socket.
Oct  9 09:48:41 compute-1 systemd[136228]: Stopped Create User's Volatile Files and Directories.
Oct  9 09:48:41 compute-1 systemd[136228]: Removed slice User Application Slice.
Oct  9 09:48:41 compute-1 systemd[136228]: Reached target Shutdown.
Oct  9 09:48:41 compute-1 systemd[136228]: Finished Exit the Session.
Oct  9 09:48:41 compute-1 systemd[136228]: Reached target Exit the Session.
Oct  9 09:48:41 compute-1 systemd[1]: user@0.service: Deactivated successfully.
Oct  9 09:48:41 compute-1 systemd[1]: Stopped User Manager for UID 0.
Oct  9 09:48:41 compute-1 systemd[1]: Stopping User Runtime Directory /run/user/0...
Oct  9 09:48:41 compute-1 systemd[1]: run-user-0.mount: Deactivated successfully.
Oct  9 09:48:41 compute-1 systemd[1]: user-runtime-dir@0.service: Deactivated successfully.
Oct  9 09:48:41 compute-1 systemd[1]: Stopped User Runtime Directory /run/user/0.
Oct  9 09:48:41 compute-1 systemd[1]: Removed slice User Slice of UID 0.
Oct  9 09:48:41 compute-1 python3.9[137947]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/multipath setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct  9 09:48:42 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:48:42 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:48:42 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:48:42.133 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:48:42 compute-1 python3.9[138099]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  9 09:48:43 compute-1 python3.9[138251]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  9 09:48:43 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:48:43 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000012s ======
Oct  9 09:48:43 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:48:43.266 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000012s
Oct  9 09:48:43 compute-1 python3.9[138404]: ansible-ansible.legacy.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  9 09:48:44 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:48:44 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000013s ======
Oct  9 09:48:44 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:48:44.134 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000013s
Oct  9 09:48:44 compute-1 python3.9[138527]: ansible-ansible.legacy.copy Invoked with dest=/etc/multipath.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1760003323.429502-1515-100436937641012/.source.conf _original_basename=multipath.conf follow=False checksum=bf02ab264d3d648048a81f3bacec8bc58db93162 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:48:44 compute-1 python3.9[138679]: ansible-ansible.legacy.command Invoked with _raw_params=grep -q '^blacklist\s*{' /etc/multipath.conf _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  9 09:48:45 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:48:45 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000012s ======
Oct  9 09:48:45 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:48:45.269 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000012s
Oct  9 09:48:45 compute-1 python3.9[138832]: ansible-ansible.builtin.lineinfile Invoked with line=blacklist { path=/etc/multipath.conf state=present backrefs=False create=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:48:45 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:48:46 compute-1 python3.9[138985]: ansible-ansible.builtin.replace Invoked with path=/etc/multipath.conf regexp=^(blacklist {) replace=\1\n} backup=False encoding=utf-8 unsafe_writes=False after=None before=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:48:46 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:48:46 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000013s ======
Oct  9 09:48:46 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:48:46.136 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000013s
Oct  9 09:48:46 compute-1 python3.9[139137]: ansible-ansible.builtin.replace Invoked with path=/etc/multipath.conf regexp=^blacklist\s*{\n[\s]+devnode \"\.\*\" replace=blacklist { backup=False encoding=utf-8 unsafe_writes=False after=None before=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:48:47 compute-1 python3.9[139289]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        find_multipaths yes path=/etc/multipath.conf regexp=^\s+find_multipaths state=present backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:48:47 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:48:47 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:48:47 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:48:47.271 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:48:47 compute-1 python3.9[139442]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        recheck_wwid yes path=/etc/multipath.conf regexp=^\s+recheck_wwid state=present backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:48:48 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:48:48 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:48:48 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:48:48.138 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:48:48 compute-1 python3.9[139594]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        skip_kpartx yes path=/etc/multipath.conf regexp=^\s+skip_kpartx state=present backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:48:48 compute-1 python3.9[139746]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        user_friendly_names no path=/etc/multipath.conf regexp=^\s+user_friendly_names state=present backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:48:49 compute-1 python3.9[139898]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  9 09:48:49 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:48:49 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:48:49 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:48:49.273 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:48:49 compute-1 python3.9[140053]: ansible-ansible.builtin.file Invoked with mode=0644 path=/etc/multipath/.multipath_restart_required state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:48:50 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:48:50 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:48:50 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:48:50.140 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:48:50 compute-1 python3.9[140230]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct  9 09:48:50 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:48:50 compute-1 python3.9[140382]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  9 09:48:51 compute-1 python3.9[140460]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  9 09:48:51 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:48:51 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:48:51 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:48:51.275 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:48:51 compute-1 python3.9[140613]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  9 09:48:51 compute-1 podman[140663]: 2025-10-09 09:48:51.924260552 +0000 UTC m=+0.057962823 container health_status 36bf385830b85e085dc3ff9729118ef141f0f1a0fdc2513f7947ab6ab795421e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  9 09:48:52 compute-1 python3.9[140707]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  9 09:48:52 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:48:52 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:48:52 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:48:52.141 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:48:52 compute-1 python3.9[140866]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:48:53 compute-1 python3.9[141018]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  9 09:48:53 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:48:53 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:48:53 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:48:53.277 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:48:53 compute-1 python3.9[141096]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:48:54 compute-1 python3.9[141249]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  9 09:48:54 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:48:54 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:48:54 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:48:54.143 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:48:54 compute-1 python3.9[141327]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:48:54 compute-1 python3.9[141479]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  9 09:48:55 compute-1 systemd[1]: Reloading.
Oct  9 09:48:55 compute-1 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  9 09:48:55 compute-1 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  9 09:48:55 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:48:55 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:48:55 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:48:55.279 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:48:55 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:48:55 compute-1 python3.9[141669]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  9 09:48:56 compute-1 python3.9[141747]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:48:56 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:48:56 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:48:56 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:48:56.145 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:48:56 compute-1 python3.9[141899]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  9 09:48:56 compute-1 python3.9[141977]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:48:57 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:48:57 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000012s ======
Oct  9 09:48:57 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:48:57.280 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000012s
Oct  9 09:48:57 compute-1 python3.9[142130]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  9 09:48:57 compute-1 systemd[1]: Reloading.
Oct  9 09:48:57 compute-1 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  9 09:48:57 compute-1 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  9 09:48:57 compute-1 systemd[1]: Starting Create netns directory...
Oct  9 09:48:57 compute-1 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Oct  9 09:48:57 compute-1 systemd[1]: netns-placeholder.service: Deactivated successfully.
Oct  9 09:48:57 compute-1 systemd[1]: Finished Create netns directory.
Oct  9 09:48:58 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:48:58 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:48:58 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:48:58.147 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:48:58 compute-1 python3.9[142322]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  9 09:48:59 compute-1 python3.9[142474]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/multipathd/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  9 09:48:59 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:48:59 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:48:59 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:48:59.282 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:48:59 compute-1 python3.9[142598]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/multipathd/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1760003338.8790877-2136-68528448025150/.source _original_basename=healthcheck follow=False checksum=af9d0c1c8f3cb0e30ce9609be9d5b01924d0d23f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Oct  9 09:49:00 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:49:00 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:49:00 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:49:00.149 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:49:00 compute-1 python3.9[142750]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct  9 09:49:00 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:49:00 compute-1 python3.9[142902]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/multipathd.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  9 09:49:01 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:49:01 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000012s ======
Oct  9 09:49:01 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:49:01.284 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000012s
Oct  9 09:49:01 compute-1 python3.9[143025]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/multipathd.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1760003340.6062481-2211-41332818072869/.source.json _original_basename=.08ee7krp follow=False checksum=3f7959ee8ac9757398adcc451c3b416c957d7c14 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:49:01 compute-1 podman[143051]: 2025-10-09 09:49:01.575525104 +0000 UTC m=+0.080079012 container health_status dd7a5da700b8ba72ceba21d69aecf00384a339e317089fce3f8212a1183599aa (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_id=iscsid, container_name=iscsid, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct  9 09:49:01 compute-1 python3.9[143195]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/multipathd state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:49:02 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:49:02 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:49:02 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:49:02.151 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:49:03 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:49:03 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:49:03 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:49:03.288 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:49:03 compute-1 python3.9[143623]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/multipathd config_pattern=*.json debug=False
Oct  9 09:49:04 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:49:04 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000012s ======
Oct  9 09:49:04 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:49:04.154 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000012s
Oct  9 09:49:04 compute-1 python3.9[143775]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Oct  9 09:49:05 compute-1 python3.9[143927]: ansible-containers.podman.podman_container_info Invoked with executable=podman name=None
Oct  9 09:49:05 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:49:05 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:49:05 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:49:05.291 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:49:05 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:49:06 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:49:06 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000012s ======
Oct  9 09:49:06 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:49:06.156 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000012s
Oct  9 09:49:06 compute-1 python3[144099]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/multipathd config_id=multipathd config_overrides={} config_patterns=*.json log_base_path=/var/log/containers/stdouts debug=False
Oct  9 09:49:07 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:49:07 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:49:07 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:49:07.294 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:49:08 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:49:08 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:49:08 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:49:08.158 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:49:08 compute-1 podman[144110]: 2025-10-09 09:49:08.487763739 +0000 UTC m=+1.716617208 image pull f541ff382622bd8bc9ad206129d2a8e74c239ff4503fa3b67d3bdf6d5b50b511 quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43
Oct  9 09:49:08 compute-1 podman[144157]: 2025-10-09 09:49:08.595381422 +0000 UTC m=+0.030609018 container create a9976f6a2312783f72012e1ec9b07f14297aa62297711f47c0d257996be3427a (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, config_id=multipathd, container_name=multipathd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001)
Oct  9 09:49:08 compute-1 podman[144157]: 2025-10-09 09:49:08.580227232 +0000 UTC m=+0.015454848 image pull f541ff382622bd8bc9ad206129d2a8e74c239ff4503fa3b67d3bdf6d5b50b511 quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43
Oct  9 09:49:08 compute-1 python3[144099]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name multipathd --conmon-pidfile /run/multipathd.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --healthcheck-command /openstack/healthcheck --label config_id=multipathd --label container_name=multipathd --label managed_by=edpm_ansible --label config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro --volume /dev:/dev --volume /run/udev:/run/udev --volume /sys:/sys --volume /lib/modules:/lib/modules:ro --volume /etc/iscsi:/etc/iscsi:ro --volume /var/lib/iscsi:/var/lib/iscsi:z --volume /etc/multipath:/etc/multipath:z --volume /etc/multipath.conf:/etc/multipath.conf:ro --volume /var/lib/openstack/healthchecks/multipathd:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43
Oct  9 09:49:09 compute-1 python3.9[144337]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  9 09:49:09 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:49:09 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:49:09 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:49:09.296 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:49:09 compute-1 python3.9[144492]: ansible-file Invoked with path=/etc/systemd/system/edpm_multipathd.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:49:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:49:10.027 71059 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  9 09:49:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:49:10.028 71059 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  9 09:49:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:49:10.028 71059 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  9 09:49:10 compute-1 python3.9[144568]: ansible-stat Invoked with path=/etc/systemd/system/edpm_multipathd_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  9 09:49:10 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:49:10 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:49:10 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:49:10.160 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:49:10 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:49:10 compute-1 python3.9[144744]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1760003350.2179031-2475-38899566157071/source dest=/etc/systemd/system/edpm_multipathd.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:49:11 compute-1 python3.9[144820]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Oct  9 09:49:11 compute-1 systemd[1]: Reloading.
Oct  9 09:49:11 compute-1 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  9 09:49:11 compute-1 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  9 09:49:11 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:49:11 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:49:11 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:49:11.298 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:49:11 compute-1 podman[144857]: 2025-10-09 09:49:11.4963958 +0000 UTC m=+0.070000621 container health_status 5e1150c93314d5b38815fc8130e03b03cc91771cd442ce724d63cd33a7373f75 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true)
Oct  9 09:49:11 compute-1 python3.9[144949]: ansible-systemd Invoked with state=restarted name=edpm_multipathd.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  9 09:49:11 compute-1 systemd[1]: Reloading.
Oct  9 09:49:11 compute-1 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  9 09:49:11 compute-1 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  9 09:49:12 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:49:12 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:49:12 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:49:12.163 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:49:12 compute-1 systemd[1]: Starting multipathd container...
Oct  9 09:49:12 compute-1 systemd[1]: Started libcrun container.
Oct  9 09:49:12 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/67319b7ba853489351ccc2ccf1b3c8312cbe2d1df35a7789e99dfd6a762a89fc/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Oct  9 09:49:12 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/67319b7ba853489351ccc2ccf1b3c8312cbe2d1df35a7789e99dfd6a762a89fc/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Oct  9 09:49:12 compute-1 systemd[1]: Started /usr/bin/podman healthcheck run a9976f6a2312783f72012e1ec9b07f14297aa62297711f47c0d257996be3427a.
Oct  9 09:49:12 compute-1 podman[144989]: 2025-10-09 09:49:12.25967147 +0000 UTC m=+0.079296976 container init a9976f6a2312783f72012e1ec9b07f14297aa62297711f47c0d257996be3427a (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, container_name=multipathd, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Oct  9 09:49:12 compute-1 multipathd[145001]: + sudo -E kolla_set_configs
Oct  9 09:49:12 compute-1 podman[144989]: 2025-10-09 09:49:12.278474005 +0000 UTC m=+0.098099501 container start a9976f6a2312783f72012e1ec9b07f14297aa62297711f47c0d257996be3427a (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Oct  9 09:49:12 compute-1 podman[144989]: multipathd
Oct  9 09:49:12 compute-1 systemd[1]: Started multipathd container.
Oct  9 09:49:12 compute-1 multipathd[145001]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Oct  9 09:49:12 compute-1 multipathd[145001]: INFO:__main__:Validating config file
Oct  9 09:49:12 compute-1 multipathd[145001]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Oct  9 09:49:12 compute-1 multipathd[145001]: INFO:__main__:Writing out command to execute
Oct  9 09:49:12 compute-1 multipathd[145001]: ++ cat /run_command
Oct  9 09:49:12 compute-1 multipathd[145001]: + CMD='/usr/sbin/multipathd -d'
Oct  9 09:49:12 compute-1 multipathd[145001]: + ARGS=
Oct  9 09:49:12 compute-1 multipathd[145001]: + sudo kolla_copy_cacerts
Oct  9 09:49:12 compute-1 podman[145008]: 2025-10-09 09:49:12.337200337 +0000 UTC m=+0.050580962 container health_status a9976f6a2312783f72012e1ec9b07f14297aa62297711f47c0d257996be3427a (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=starting, health_failing_streak=1, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd)
Oct  9 09:49:12 compute-1 systemd[1]: a9976f6a2312783f72012e1ec9b07f14297aa62297711f47c0d257996be3427a-546f817f7ddc15e5.service: Main process exited, code=exited, status=1/FAILURE
Oct  9 09:49:12 compute-1 systemd[1]: a9976f6a2312783f72012e1ec9b07f14297aa62297711f47c0d257996be3427a-546f817f7ddc15e5.service: Failed with result 'exit-code'.
Oct  9 09:49:12 compute-1 multipathd[145001]: + [[ ! -n '' ]]
Oct  9 09:49:12 compute-1 multipathd[145001]: + . kolla_extend_start
Oct  9 09:49:12 compute-1 multipathd[145001]: + echo 'Running command: '\''/usr/sbin/multipathd -d'\'''
Oct  9 09:49:12 compute-1 multipathd[145001]: Running command: '/usr/sbin/multipathd -d'
Oct  9 09:49:12 compute-1 multipathd[145001]: + umask 0022
Oct  9 09:49:12 compute-1 multipathd[145001]: + exec /usr/sbin/multipathd -d
Oct  9 09:49:12 compute-1 multipathd[145001]: 1035.969048 | --------start up--------
Oct  9 09:49:12 compute-1 multipathd[145001]: 1035.969204 | read /etc/multipath.conf
Oct  9 09:49:12 compute-1 multipathd[145001]: 1035.973615 | path checkers start up
Oct  9 09:49:12 compute-1 python3.9[145187]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath/.multipath_restart_required follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  9 09:49:13 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:49:13 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000012s ======
Oct  9 09:49:13 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:49:13.300 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000012s
Oct  9 09:49:13 compute-1 python3.9[145341]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps --filter volume=/etc/multipath.conf --format {{.Names}} _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  9 09:49:13 compute-1 python3.9[145504]: ansible-ansible.builtin.systemd Invoked with name=edpm_multipathd state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct  9 09:49:14 compute-1 systemd[1]: Stopping multipathd container...
Oct  9 09:49:14 compute-1 multipathd[145001]: 1037.649719 | exit (signal)
Oct  9 09:49:14 compute-1 multipathd[145001]: 1037.649750 | --------shut down-------
Oct  9 09:49:14 compute-1 systemd[1]: libpod-a9976f6a2312783f72012e1ec9b07f14297aa62297711f47c0d257996be3427a.scope: Deactivated successfully.
Oct  9 09:49:14 compute-1 podman[145508]: 2025-10-09 09:49:14.074279721 +0000 UTC m=+0.058502051 container died a9976f6a2312783f72012e1ec9b07f14297aa62297711f47c0d257996be3427a (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct  9 09:49:14 compute-1 systemd[1]: a9976f6a2312783f72012e1ec9b07f14297aa62297711f47c0d257996be3427a-546f817f7ddc15e5.timer: Deactivated successfully.
Oct  9 09:49:14 compute-1 systemd[1]: Stopped /usr/bin/podman healthcheck run a9976f6a2312783f72012e1ec9b07f14297aa62297711f47c0d257996be3427a.
Oct  9 09:49:14 compute-1 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-a9976f6a2312783f72012e1ec9b07f14297aa62297711f47c0d257996be3427a-userdata-shm.mount: Deactivated successfully.
Oct  9 09:49:14 compute-1 systemd[1]: var-lib-containers-storage-overlay-67319b7ba853489351ccc2ccf1b3c8312cbe2d1df35a7789e99dfd6a762a89fc-merged.mount: Deactivated successfully.
Oct  9 09:49:14 compute-1 podman[145508]: 2025-10-09 09:49:14.137622957 +0000 UTC m=+0.121845277 container cleanup a9976f6a2312783f72012e1ec9b07f14297aa62297711f47c0d257996be3427a (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team)
Oct  9 09:49:14 compute-1 podman[145508]: multipathd
Oct  9 09:49:14 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:49:14 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:49:14 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:49:14.165 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:49:14 compute-1 podman[145539]: multipathd
Oct  9 09:49:14 compute-1 systemd[1]: edpm_multipathd.service: Deactivated successfully.
Oct  9 09:49:14 compute-1 systemd[1]: Stopped multipathd container.
Oct  9 09:49:14 compute-1 systemd[1]: Starting multipathd container...
Oct  9 09:49:14 compute-1 systemd[1]: Started libcrun container.
Oct  9 09:49:14 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/67319b7ba853489351ccc2ccf1b3c8312cbe2d1df35a7789e99dfd6a762a89fc/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Oct  9 09:49:14 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/67319b7ba853489351ccc2ccf1b3c8312cbe2d1df35a7789e99dfd6a762a89fc/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Oct  9 09:49:14 compute-1 systemd[1]: Started /usr/bin/podman healthcheck run a9976f6a2312783f72012e1ec9b07f14297aa62297711f47c0d257996be3427a.
Oct  9 09:49:14 compute-1 podman[145548]: 2025-10-09 09:49:14.285632501 +0000 UTC m=+0.078798515 container init a9976f6a2312783f72012e1ec9b07f14297aa62297711f47c0d257996be3427a (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, container_name=multipathd)
Oct  9 09:49:14 compute-1 multipathd[145560]: + sudo -E kolla_set_configs
Oct  9 09:49:14 compute-1 podman[145548]: 2025-10-09 09:49:14.305354924 +0000 UTC m=+0.098520918 container start a9976f6a2312783f72012e1ec9b07f14297aa62297711f47c0d257996be3427a (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, io.buildah.version=1.41.3, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct  9 09:49:14 compute-1 podman[145548]: multipathd
Oct  9 09:49:14 compute-1 systemd[1]: Started multipathd container.
Oct  9 09:49:14 compute-1 multipathd[145560]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Oct  9 09:49:14 compute-1 multipathd[145560]: INFO:__main__:Validating config file
Oct  9 09:49:14 compute-1 multipathd[145560]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Oct  9 09:49:14 compute-1 multipathd[145560]: INFO:__main__:Writing out command to execute
Oct  9 09:49:14 compute-1 multipathd[145560]: ++ cat /run_command
Oct  9 09:49:14 compute-1 multipathd[145560]: + CMD='/usr/sbin/multipathd -d'
Oct  9 09:49:14 compute-1 multipathd[145560]: + ARGS=
Oct  9 09:49:14 compute-1 multipathd[145560]: + sudo kolla_copy_cacerts
Oct  9 09:49:14 compute-1 podman[145567]: 2025-10-09 09:49:14.360995181 +0000 UTC m=+0.046706398 container health_status a9976f6a2312783f72012e1ec9b07f14297aa62297711f47c0d257996be3427a (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=starting, health_failing_streak=1, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, container_name=multipathd, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS)
Oct  9 09:49:14 compute-1 systemd[1]: a9976f6a2312783f72012e1ec9b07f14297aa62297711f47c0d257996be3427a-2767a13f34e46aee.service: Main process exited, code=exited, status=1/FAILURE
Oct  9 09:49:14 compute-1 systemd[1]: a9976f6a2312783f72012e1ec9b07f14297aa62297711f47c0d257996be3427a-2767a13f34e46aee.service: Failed with result 'exit-code'.
Oct  9 09:49:14 compute-1 multipathd[145560]: Running command: '/usr/sbin/multipathd -d'
Oct  9 09:49:14 compute-1 multipathd[145560]: + [[ ! -n '' ]]
Oct  9 09:49:14 compute-1 multipathd[145560]: + . kolla_extend_start
Oct  9 09:49:14 compute-1 multipathd[145560]: + echo 'Running command: '\''/usr/sbin/multipathd -d'\'''
Oct  9 09:49:14 compute-1 multipathd[145560]: + umask 0022
Oct  9 09:49:14 compute-1 multipathd[145560]: + exec /usr/sbin/multipathd -d
Oct  9 09:49:14 compute-1 multipathd[145560]: 1038.000426 | --------start up--------
Oct  9 09:49:14 compute-1 multipathd[145560]: 1038.000440 | read /etc/multipath.conf
Oct  9 09:49:14 compute-1 multipathd[145560]: 1038.004688 | path checkers start up
Oct  9 09:49:14 compute-1 python3.9[145748]: ansible-ansible.builtin.file Invoked with path=/etc/multipath/.multipath_restart_required state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:49:15 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:49:15 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:49:15 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:49:15.303 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:49:15 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:49:15 compute-1 python3.9[145901]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/modules-load.d selevel=s0 setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Oct  9 09:49:16 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:49:16 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000012s ======
Oct  9 09:49:16 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:49:16.167 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000012s
Oct  9 09:49:16 compute-1 python3.9[146053]: ansible-community.general.modprobe Invoked with name=nvme-fabrics state=present params= persistent=disabled
Oct  9 09:49:16 compute-1 kernel: Key type psk registered
Oct  9 09:49:17 compute-1 python3.9[146215]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/nvme-fabrics.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  9 09:49:17 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:49:17 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:49:17 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:49:17.305 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:49:17 compute-1 python3.9[146338]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/nvme-fabrics.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1760003356.6353297-2715-104742063536187/.source.conf follow=False _original_basename=module-load.conf.j2 checksum=783c778f0c68cc414f35486f234cbb1cf3f9bbff backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:49:18 compute-1 python3.9[146491]: ansible-ansible.builtin.lineinfile Invoked with create=True dest=/etc/modules line=nvme-fabrics  mode=0644 state=present path=/etc/modules backrefs=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:49:18 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:49:18 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:49:18 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:49:18.169 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:49:18 compute-1 python3.9[146643]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct  9 09:49:18 compute-1 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Oct  9 09:49:18 compute-1 systemd[1]: Stopped Load Kernel Modules.
Oct  9 09:49:18 compute-1 systemd[1]: Stopping Load Kernel Modules...
Oct  9 09:49:18 compute-1 systemd[1]: Starting Load Kernel Modules...
Oct  9 09:49:18 compute-1 systemd[1]: virtnodedevd.service: Deactivated successfully.
Oct  9 09:49:18 compute-1 systemd[1]: Finished Load Kernel Modules.
Oct  9 09:49:19 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:49:19 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:49:19 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:49:19.308 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:49:19 compute-1 python3.9[146800]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct  9 09:49:19 compute-1 systemd[1]: virtproxyd.service: Deactivated successfully.
Oct  9 09:49:20 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:49:20 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:49:20 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:49:20.172 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:49:20 compute-1 python3.9[146886]: ansible-ansible.legacy.dnf Invoked with name=['nvme-cli'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct  9 09:49:20 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:49:21 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:49:21 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:49:21 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:49:21.311 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:49:22 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:49:22 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:49:22 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:49:22.175 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:49:22 compute-1 podman[146889]: 2025-10-09 09:49:22.56333579 +0000 UTC m=+0.073954324 container health_status 36bf385830b85e085dc3ff9729118ef141f0f1a0fdc2513f7947ab6ab795421e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, managed_by=edpm_ansible)
Oct  9 09:49:23 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:49:23 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000013s ======
Oct  9 09:49:23 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:49:23.313 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000013s
Oct  9 09:49:24 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:49:24 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:49:24 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:49:24.177 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:49:25 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:49:25 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:49:25 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:49:25.316 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:49:25 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:49:26 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:49:26 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:49:26 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:49:26.178 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:49:26 compute-1 systemd[1]: Reloading.
Oct  9 09:49:26 compute-1 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  9 09:49:26 compute-1 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  9 09:49:26 compute-1 systemd[1]: Reloading.
Oct  9 09:49:26 compute-1 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  9 09:49:26 compute-1 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  9 09:49:26 compute-1 systemd-logind[798]: Watching system buttons on /dev/input/event0 (Power Button)
Oct  9 09:49:26 compute-1 systemd-logind[798]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard)
Oct  9 09:49:26 compute-1 lvm[147021]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Oct  9 09:49:26 compute-1 lvm[147021]: VG ceph_vg0 finished
Oct  9 09:49:26 compute-1 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Oct  9 09:49:27 compute-1 systemd[1]: Starting man-db-cache-update.service...
Oct  9 09:49:27 compute-1 systemd[1]: Reloading.
Oct  9 09:49:27 compute-1 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  9 09:49:27 compute-1 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  9 09:49:27 compute-1 systemd[1]: virtqemud.service: Deactivated successfully.
Oct  9 09:49:27 compute-1 systemd[1]: virtsecretd.service: Deactivated successfully.
Oct  9 09:49:27 compute-1 systemd[1]: Queuing reload/restart jobs for marked units…
Oct  9 09:49:27 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:49:27 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000011s ======
Oct  9 09:49:27 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:49:27.318 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct  9 09:49:28 compute-1 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Oct  9 09:49:28 compute-1 systemd[1]: Finished man-db-cache-update.service.
Oct  9 09:49:28 compute-1 systemd[1]: man-db-cache-update.service: Consumed 1.151s CPU time.
Oct  9 09:49:28 compute-1 systemd[1]: run-red74016144ef484ca6c5febbe69245e8.service: Deactivated successfully.
Oct  9 09:49:28 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:49:28 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000011s ======
Oct  9 09:49:28 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:49:28.180 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct  9 09:49:28 compute-1 python3.9[148364]: ansible-ansible.builtin.file Invoked with mode=0600 path=/etc/iscsi/.iscsid_restart_required state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:49:28 compute-1 python3.9[148514]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  9 09:49:29 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:49:29 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000011s ======
Oct  9 09:49:29 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:49:29.320 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct  9 09:49:29 compute-1 python3.9[148671]: ansible-ansible.builtin.file Invoked with mode=0644 path=/etc/ssh/ssh_known_hosts state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:49:30 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:49:30 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:49:30 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:49:30.183 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:49:30 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:49:30 compute-1 python3.9[148848]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Oct  9 09:49:30 compute-1 systemd[1]: Reloading.
Oct  9 09:49:30 compute-1 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  9 09:49:30 compute-1 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  9 09:49:31 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:49:31 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:49:31 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:49:31.323 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:49:31 compute-1 python3.9[149033]: ansible-ansible.builtin.service_facts Invoked
Oct  9 09:49:31 compute-1 network[149050]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Oct  9 09:49:31 compute-1 network[149051]: 'network-scripts' will be removed from distribution in near future.
Oct  9 09:49:31 compute-1 network[149052]: It is advised to switch to 'NetworkManager' instead for network management.
Oct  9 09:49:31 compute-1 podman[149057]: 2025-10-09 09:49:31.778209844 +0000 UTC m=+0.043544095 container health_status dd7a5da700b8ba72ceba21d69aecf00384a339e317089fce3f8212a1183599aa (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=iscsid)
Oct  9 09:49:32 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:49:32 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000011s ======
Oct  9 09:49:32 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:49:32.184 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct  9 09:49:33 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:49:33 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:49:33 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:49:33.325 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:49:34 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  9 09:49:34 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:49:34 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:49:34 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  9 09:49:34 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:49:34 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:49:34 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:49:34.186 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:49:35 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:49:35 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:49:35 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:49:35.327 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:49:35 compute-1 python3.9[149427]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_compute.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  9 09:49:35 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:49:36 compute-1 python3.9[149581]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_migration_target.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  9 09:49:36 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:49:36 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:49:36 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:49:36.188 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:49:36 compute-1 python3.9[149734]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_api_cron.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  9 09:49:37 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:49:37 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:49:37 compute-1 python3.9[149887]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_api.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  9 09:49:37 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:49:37 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:49:37 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:49:37.328 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:49:38 compute-1 python3.9[150066]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_conductor.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  9 09:49:38 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:49:38 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:49:38 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:49:38.190 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:49:38 compute-1 python3.9[150219]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_metadata.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  9 09:49:39 compute-1 python3.9[150372]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_scheduler.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  9 09:49:39 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:49:39 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:49:39 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:49:39.331 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:49:39 compute-1 python3.9[150526]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_vnc_proxy.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  9 09:49:40 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:49:40 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:49:40 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:49:40.192 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:49:40 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:49:40 compute-1 python3.9[150679]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:49:41 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:49:41 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:49:41 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:49:41.333 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:49:41 compute-1 python3.9[150831]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_migration_target.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:49:41 compute-1 podman[150956]: 2025-10-09 09:49:41.636512978 +0000 UTC m=+0.040702776 container health_status 5e1150c93314d5b38815fc8130e03b03cc91771cd442ce724d63cd33a7373f75 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=ovn_metadata_agent, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true)
Oct  9 09:49:41 compute-1 python3.9[151001]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_api_cron.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:49:42 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:49:42 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:49:42 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:49:42.194 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:49:42 compute-1 python3.9[151153]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_api.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:49:42 compute-1 python3.9[151305]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_conductor.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:49:43 compute-1 python3.9[151457]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_metadata.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:49:43 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:49:43 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000011s ======
Oct  9 09:49:43 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:49:43.334 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct  9 09:49:43 compute-1 python3.9[151610]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_scheduler.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:49:44 compute-1 python3.9[151762]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_vnc_proxy.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:49:44 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:49:44 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000011s ======
Oct  9 09:49:44 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:49:44.196 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct  9 09:49:44 compute-1 podman[151867]: 2025-10-09 09:49:44.527368317 +0000 UTC m=+0.042093970 container health_status a9976f6a2312783f72012e1ec9b07f14297aa62297711f47c0d257996be3427a (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, container_name=multipathd, org.label-schema.vendor=CentOS, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true)
Oct  9 09:49:44 compute-1 python3.9[151932]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:49:45 compute-1 python3.9[152084]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_migration_target.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:49:45 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:49:45 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:49:45 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:49:45.336 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:49:45 compute-1 python3.9[152237]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_api_cron.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:49:45 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:49:46 compute-1 python3.9[152389]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_api.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:49:46 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:49:46 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:49:46 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:49:46.198 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:49:46 compute-1 python3.9[152541]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_conductor.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:49:46 compute-1 python3.9[152693]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_metadata.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:49:47 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:49:47 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:49:47 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:49:47.338 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:49:47 compute-1 python3.9[152845]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_scheduler.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:49:47 compute-1 python3.9[152998]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_vnc_proxy.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:49:48 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:49:48 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000011s ======
Oct  9 09:49:48 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:49:48.200 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct  9 09:49:48 compute-1 python3.9[153150]: ansible-ansible.legacy.command Invoked with _raw_params=if systemctl is-active certmonger.service; then#012  systemctl disable --now certmonger.service#012  test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service#012fi#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  9 09:49:49 compute-1 python3.9[153302]: ansible-ansible.builtin.find Invoked with file_type=any hidden=True paths=['/var/lib/certmonger/requests'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Oct  9 09:49:49 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:49:49 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000011s ======
Oct  9 09:49:49 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:49:49.340 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct  9 09:49:50 compute-1 python3.9[153455]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Oct  9 09:49:50 compute-1 systemd[1]: Reloading.
Oct  9 09:49:50 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:49:50 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000011s ======
Oct  9 09:49:50 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:49:50.202 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct  9 09:49:50 compute-1 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  9 09:49:50 compute-1 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  9 09:49:50 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:49:50 compute-1 python3.9[153666]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_compute.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  9 09:49:51 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:49:51 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000011s ======
Oct  9 09:49:51 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:49:51.343 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct  9 09:49:51 compute-1 python3.9[153819]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_migration_target.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  9 09:49:51 compute-1 python3.9[153973]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_api_cron.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  9 09:49:52 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:49:52 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000011s ======
Oct  9 09:49:52 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:49:52.204 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct  9 09:49:52 compute-1 python3.9[154126]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_api.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  9 09:49:52 compute-1 python3.9[154279]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_conductor.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  9 09:49:52 compute-1 podman[154281]: 2025-10-09 09:49:52.801803654 +0000 UTC m=+0.059580056 container health_status 36bf385830b85e085dc3ff9729118ef141f0f1a0fdc2513f7947ab6ab795421e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true)
Oct  9 09:49:53 compute-1 python3.9[154455]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_metadata.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  9 09:49:53 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:49:53 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:49:53 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:49:53.346 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:49:53 compute-1 python3.9[154609]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_scheduler.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  9 09:49:54 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:49:54 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:49:54 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:49:54.206 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:49:54 compute-1 python3.9[154762]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_vnc_proxy.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  9 09:49:55 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:49:55 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000012s ======
Oct  9 09:49:55 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:49:55.349 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000012s
Oct  9 09:49:55 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:49:55 compute-1 python3.9[154916]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  9 09:49:56 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:49:56 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:49:56 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:49:56.209 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:49:56 compute-1 python3.9[155068]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/containers setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  9 09:49:56 compute-1 python3.9[155220]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/nova_nvme_cleaner setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  9 09:49:57 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:49:57 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:49:57 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:49:57.352 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:49:57 compute-1 python3.9[155373]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  9 09:49:57 compute-1 python3.9[155525]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/_nova_secontext setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  9 09:49:58 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:49:58 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000012s ======
Oct  9 09:49:58 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:49:58.210 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000012s
Oct  9 09:49:58 compute-1 python3.9[155677]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/nova/instances setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  9 09:49:58 compute-1 python3.9[155829]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/etc/ceph setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  9 09:49:59 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:49:59 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:49:59 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:49:59.354 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:49:59 compute-1 python3.9[155981]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/etc/multipath setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Oct  9 09:49:59 compute-1 python3.9[156134]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/etc/iscsi setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Oct  9 09:50:00 compute-1 systemd[1]: Starting system activity accounting tool...
Oct  9 09:50:00 compute-1 systemd[1]: sysstat-collect.service: Deactivated successfully.
Oct  9 09:50:00 compute-1 systemd[1]: Finished system activity accounting tool.
Oct  9 09:50:00 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:50:00 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000011s ======
Oct  9 09:50:00 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:50:00.212 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct  9 09:50:00 compute-1 python3.9[156287]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/var/lib/iscsi setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Oct  9 09:50:00 compute-1 ceph-mon[9795]: Health detail: HEALTH_WARN 1 failed cephadm daemon(s)
Oct  9 09:50:00 compute-1 ceph-mon[9795]: [WRN] CEPHADM_FAILED_DAEMON: 1 failed cephadm daemon(s)
Oct  9 09:50:00 compute-1 ceph-mon[9795]:    daemon nfs.cephfs.0.0.compute-1.douegr on compute-1 is in error state
Oct  9 09:50:00 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:50:00 compute-1 python3.9[156439]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/etc/nvme setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Oct  9 09:50:01 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:50:01 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:50:01 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:50:01.356 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:50:01 compute-1 python3.9[156591]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/run/openvswitch setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Oct  9 09:50:02 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:50:02 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:50:02 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:50:02.215 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:50:02 compute-1 podman[156617]: 2025-10-09 09:50:02.528521808 +0000 UTC m=+0.039483285 container health_status dd7a5da700b8ba72ceba21d69aecf00384a339e317089fce3f8212a1183599aa (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, container_name=iscsid, org.label-schema.license=GPLv2, config_id=iscsid, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true)
Oct  9 09:50:03 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:50:03 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000012s ======
Oct  9 09:50:03 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:50:03.357 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000012s
Oct  9 09:50:04 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:50:04 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:50:04 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:50:04.218 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:50:05 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:50:05 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:50:05 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:50:05.360 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:50:05 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:50:06 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:50:06 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:50:06 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:50:06.220 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:50:06 compute-1 python3.9[156763]: ansible-ansible.builtin.getent Invoked with database=passwd key=nova fail_key=True service=None split=None
Oct  9 09:50:07 compute-1 python3.9[156916]: ansible-ansible.builtin.group Invoked with gid=42436 name=nova state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Oct  9 09:50:07 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:50:07 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:50:07 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:50:07.361 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:50:07 compute-1 python3.9[157075]: ansible-ansible.builtin.user Invoked with comment=nova user group=nova groups=['libvirt'] name=nova shell=/bin/sh state=present uid=42436 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-1 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Oct  9 09:50:08 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:50:08 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:50:08 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:50:08.222 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:50:08 compute-1 systemd-logind[798]: New session 39 of user zuul.
Oct  9 09:50:08 compute-1 systemd[1]: Started Session 39 of User zuul.
Oct  9 09:50:08 compute-1 systemd[1]: session-39.scope: Deactivated successfully.
Oct  9 09:50:08 compute-1 systemd-logind[798]: Session 39 logged out. Waiting for processes to exit.
Oct  9 09:50:08 compute-1 systemd-logind[798]: Removed session 39.
Oct  9 09:50:09 compute-1 python3.9[157261]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/config.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  9 09:50:09 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:50:09 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:50:09 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:50:09.363 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:50:09 compute-1 python3.9[157383]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/config.json mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1760003408.9222152-4352-275798659640646/.source.json follow=False _original_basename=config.json.j2 checksum=2c2474b5f24ef7c9ed37f49680082593e0d1100b backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct  9 09:50:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:50:10.028 71059 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  9 09:50:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:50:10.028 71059 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  9 09:50:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:50:10.028 71059 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  9 09:50:10 compute-1 python3.9[157533]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/nova-blank.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  9 09:50:10 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:50:10 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000011s ======
Oct  9 09:50:10 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:50:10.223 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct  9 09:50:10 compute-1 python3.9[157609]: ansible-ansible.legacy.file Invoked with mode=0644 setype=container_file_t dest=/var/lib/openstack/config/nova/nova-blank.conf _original_basename=nova-blank.conf recurse=False state=file path=/var/lib/openstack/config/nova/nova-blank.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct  9 09:50:10 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:50:10 compute-1 python3.9[157784]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/ssh-config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  9 09:50:11 compute-1 python3.9[157905]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/ssh-config mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1760003410.5647507-4352-264178552898551/.source follow=False _original_basename=ssh-config checksum=4297f735c41bdc1ff52d72e6f623a02242f37958 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct  9 09:50:11 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:50:11 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:50:11 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:50:11.365 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:50:11 compute-1 python3.9[158056]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/02-nova-host-specific.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  9 09:50:11 compute-1 podman[158151]: 2025-10-09 09:50:11.972290869 +0000 UTC m=+0.041069757 container health_status 5e1150c93314d5b38815fc8130e03b03cc91771cd442ce724d63cd33a7373f75 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0)
Oct  9 09:50:12 compute-1 python3.9[158187]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/02-nova-host-specific.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1760003411.3843381-4352-100882208068077/.source.conf follow=False _original_basename=02-nova-host-specific.conf.j2 checksum=bc7f3bb7d4094c596a18178a888511b54e157ba4 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct  9 09:50:12 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:50:12 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:50:12 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:50:12.226 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:50:12 compute-1 python3.9[158343]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/nova_statedir_ownership.py follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  9 09:50:12 compute-1 python3.9[158464]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/nova_statedir_ownership.py mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1760003412.2139869-4352-159821116365442/.source.py follow=False _original_basename=nova_statedir_ownership.py checksum=c6c8a3cfefa5efd60ceb1408c4e977becedb71e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct  9 09:50:13 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:50:13 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:50:13 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:50:13.368 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:50:13 compute-1 python3.9[158617]: ansible-ansible.builtin.file Invoked with group=nova mode=0700 owner=nova path=/home/nova/.ssh state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:50:14 compute-1 python3.9[158769]: ansible-ansible.legacy.copy Invoked with dest=/home/nova/.ssh/authorized_keys group=nova mode=0600 owner=nova remote_src=True src=/var/lib/openstack/config/nova/ssh-publickey backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:50:14 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:50:14 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:50:14 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:50:14.228 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:50:14 compute-1 python3.9[158921]: ansible-ansible.builtin.stat Invoked with path=/var/lib/nova/compute_id follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  9 09:50:15 compute-1 podman[159045]: 2025-10-09 09:50:15.0340596 +0000 UTC m=+0.041531589 container health_status a9976f6a2312783f72012e1ec9b07f14297aa62297711f47c0d257996be3427a (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  9 09:50:15 compute-1 python3.9[159091]: ansible-ansible.legacy.stat Invoked with path=/var/lib/nova/compute_id follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  9 09:50:15 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:50:15 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:50:15 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:50:15.369 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:50:15 compute-1 python3.9[159215]: ansible-ansible.legacy.copy Invoked with attributes=+i dest=/var/lib/nova/compute_id group=nova mode=0400 owner=nova src=/home/zuul/.ansible/tmp/ansible-tmp-1760003414.8123224-4631-181590184628926/.source _original_basename=.zbeun2z3 follow=False checksum=eed7f96a5dd772c84aeba4a6fa2dfdaaf1ba521a backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None
Oct  9 09:50:15 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:50:16 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:50:16 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:50:16 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:50:16.230 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:50:16 compute-1 python3.9[159367]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  9 09:50:16 compute-1 python3.9[159519]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/containers/nova_compute.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  9 09:50:17 compute-1 python3.9[159640]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/containers/nova_compute.json mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1760003416.4754796-4709-16999923425064/.source.json follow=False _original_basename=nova_compute.json.j2 checksum=837ffd9c004e5987a2e117698c56827ebbfeb5b9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct  9 09:50:17 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:50:17 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:50:17 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:50:17.371 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:50:17 compute-1 python3.9[159791]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/containers/nova_compute_init.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  9 09:50:18 compute-1 python3.9[159912]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/containers/nova_compute_init.json mode=0700 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1760003417.3753889-4754-264564526875956/.source.json follow=False _original_basename=nova_compute_init.json.j2 checksum=722ab36345f3375cbdcf911ce8f6e1a8083d7e59 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct  9 09:50:18 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:50:18 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:50:18 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:50:18.232 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:50:18 compute-1 python3.9[160064]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/containers config_pattern=nova_compute_init.json debug=False
Oct  9 09:50:19 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:50:19 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000012s ======
Oct  9 09:50:19 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:50:19.373 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000012s
Oct  9 09:50:19 compute-1 python3.9[160216]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Oct  9 09:50:20 compute-1 python3[160369]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/containers config_id=edpm config_overrides={} config_patterns=nova_compute_init.json log_base_path=/var/log/containers/stdouts debug=False
Oct  9 09:50:20 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:50:20 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000012s ======
Oct  9 09:50:20 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:50:20.233 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000012s
Oct  9 09:50:20 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:50:21 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:50:21 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:50:21 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:50:21.376 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:50:22 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:50:22 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000011s ======
Oct  9 09:50:22 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:50:22.235 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct  9 09:50:23 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:50:23 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:50:23 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:50:23.377 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:50:23 compute-1 podman[160403]: 2025-10-09 09:50:23.556549842 +0000 UTC m=+0.065186633 container health_status 36bf385830b85e085dc3ff9729118ef141f0f1a0fdc2513f7947ab6ab795421e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller)
Oct  9 09:50:24 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:50:24 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:50:24 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:50:24.237 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:50:25 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:50:25 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:50:25 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:50:25.379 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:50:25 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:50:26 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:50:26 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:50:26 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:50:26.240 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:50:27 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:50:27 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:50:27 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:50:27.382 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:50:28 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:50:28 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:50:28 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:50:28.242 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:50:29 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:50:29 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000012s ======
Oct  9 09:50:29 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:50:29.383 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000012s
Oct  9 09:50:30 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:50:30 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:50:30 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:50:30.244 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:50:30 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:50:31 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:50:31 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000011s ======
Oct  9 09:50:31 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:50:31.386 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct  9 09:50:32 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:50:32 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:50:32 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:50:32.247 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:50:33 compute-1 podman[160380]: 2025-10-09 09:50:33.061482611 +0000 UTC m=+12.814959599 image pull 7ac362f4e836cf46e10a309acb4abf774df9481a1d6404c213437495cfb42f5d quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:5f179b847f2dc32d9110b8f2be9fe65f1aeada1e18105dffdaf052981215d844
Oct  9 09:50:33 compute-1 podman[160499]: 2025-10-09 09:50:33.159754373 +0000 UTC m=+0.031490389 container create 8aaa249df0d42f42b8a33d528efd9d6552b336ba0a73b24911cedbbe8e26ac7a (image=quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:5f179b847f2dc32d9110b8f2be9fe65f1aeada1e18105dffdaf052981215d844, name=nova_compute_init, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=edpm, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:5f179b847f2dc32d9110b8f2be9fe65f1aeada1e18105dffdaf052981215d844', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, container_name=nova_compute_init, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  9 09:50:33 compute-1 podman[160499]: 2025-10-09 09:50:33.14484948 +0000 UTC m=+0.016585515 image pull 7ac362f4e836cf46e10a309acb4abf774df9481a1d6404c213437495cfb42f5d quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:5f179b847f2dc32d9110b8f2be9fe65f1aeada1e18105dffdaf052981215d844
Oct  9 09:50:33 compute-1 python3[160369]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name nova_compute_init --conmon-pidfile /run/nova_compute_init.pid --env NOVA_STATEDIR_OWNERSHIP_SKIP=/var/lib/nova/compute_id --env __OS_DEBUG=False --label config_id=edpm --label container_name=nova_compute_init --label managed_by=edpm_ansible --label config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:5f179b847f2dc32d9110b8f2be9fe65f1aeada1e18105dffdaf052981215d844', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']} --log-driver journald --log-level info --network none --privileged=False --security-opt label=disable --user root --volume /dev/log:/dev/log --volume /var/lib/nova:/var/lib/nova:shared --volume /var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z --volume /var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:5f179b847f2dc32d9110b8f2be9fe65f1aeada1e18105dffdaf052981215d844 bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init
Oct  9 09:50:33 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:50:33 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:50:33 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:50:33.389 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:50:33 compute-1 podman[160604]: 2025-10-09 09:50:33.538480169 +0000 UTC m=+0.048617774 container health_status dd7a5da700b8ba72ceba21d69aecf00384a339e317089fce3f8212a1183599aa (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=iscsid, container_name=iscsid, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team)
Oct  9 09:50:33 compute-1 python3.9[160696]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  9 09:50:34 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:50:34 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:50:34 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:50:34.249 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:50:34 compute-1 python3.9[160850]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/containers config_pattern=nova_compute.json debug=False
Oct  9 09:50:35 compute-1 python3.9[161002]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Oct  9 09:50:35 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:50:35 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:50:35 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:50:35.391 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:50:35 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:50:36 compute-1 python3[161155]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/containers config_id=edpm config_overrides={} config_patterns=nova_compute.json log_base_path=/var/log/containers/stdouts debug=False
Oct  9 09:50:36 compute-1 podman[161183]: 2025-10-09 09:50:36.204754487 +0000 UTC m=+0.029323420 container create a9dd31848225f8fe6eed007d5a6504d226f8ec66f0a516f1516ecb5e7e6e18b2 (image=quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:5f179b847f2dc32d9110b8f2be9fe65f1aeada1e18105dffdaf052981215d844, name=nova_compute, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:5f179b847f2dc32d9110b8f2be9fe65f1aeada1e18105dffdaf052981215d844', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, managed_by=edpm_ansible, container_name=nova_compute, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, config_id=edpm, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct  9 09:50:36 compute-1 podman[161183]: 2025-10-09 09:50:36.191119699 +0000 UTC m=+0.015688642 image pull 7ac362f4e836cf46e10a309acb4abf774df9481a1d6404c213437495cfb42f5d quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:5f179b847f2dc32d9110b8f2be9fe65f1aeada1e18105dffdaf052981215d844
Oct  9 09:50:36 compute-1 python3[161155]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name nova_compute --conmon-pidfile /run/nova_compute.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --label config_id=edpm --label container_name=nova_compute --label managed_by=edpm_ansible --label config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:5f179b847f2dc32d9110b8f2be9fe65f1aeada1e18105dffdaf052981215d844', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']} --log-driver journald --log-level info --network host --privileged=True --user nova --volume /var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro --volume /var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /etc/localtime:/etc/localtime:ro --volume /lib/modules:/lib/modules:ro --volume /dev:/dev --volume /var/lib/libvirt:/var/lib/libvirt --volume /run/libvirt:/run/libvirt:shared --volume /var/lib/nova:/var/lib/nova:shared --volume /var/lib/iscsi:/var/lib/iscsi:z --volume /etc/multipath:/etc/multipath:z --volume /etc/multipath.conf:/etc/multipath.conf:ro --volume /etc/iscsi:/etc/iscsi:ro --volume /etc/nvme:/etc/nvme --volume /var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro --volume /etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:5f179b847f2dc32d9110b8f2be9fe65f1aeada1e18105dffdaf052981215d844 kolla_start
Oct  9 09:50:36 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:50:36 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:50:36 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:50:36.252 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:50:36 compute-1 python3.9[161362]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  9 09:50:37 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:50:37 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:50:37 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:50:37.393 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:50:37 compute-1 python3.9[161564]: ansible-file Invoked with path=/etc/systemd/system/edpm_nova_compute.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:50:37 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  9 09:50:37 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:50:37 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:50:37 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  9 09:50:37 compute-1 ceph-mon[9795]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #28. Immutable memtables: 0.
Oct  9 09:50:37 compute-1 ceph-mon[9795]: rocksdb: (Original Log Time 2025/10/09-09:50:37.997624) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct  9 09:50:37 compute-1 ceph-mon[9795]: rocksdb: [db/flush_job.cc:856] [default] [JOB 13] Flushing memtable with next log file: 28
Oct  9 09:50:37 compute-1 ceph-mon[9795]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760003437997707, "job": 13, "event": "flush_started", "num_memtables": 1, "num_entries": 4659, "num_deletes": 502, "total_data_size": 12778659, "memory_usage": 12956080, "flush_reason": "Manual Compaction"}
Oct  9 09:50:37 compute-1 ceph-mon[9795]: rocksdb: [db/flush_job.cc:885] [default] [JOB 13] Level-0 flush table #29: started
Oct  9 09:50:38 compute-1 ceph-mon[9795]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760003438012047, "cf_name": "default", "job": 13, "event": "table_file_creation", "file_number": 29, "file_size": 8291926, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 13169, "largest_seqno": 17823, "table_properties": {"data_size": 8274219, "index_size": 11961, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 4677, "raw_key_size": 36680, "raw_average_key_size": 19, "raw_value_size": 8237464, "raw_average_value_size": 4428, "num_data_blocks": 522, "num_entries": 1860, "num_filter_entries": 1860, "num_deletions": 502, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760002995, "oldest_key_time": 1760002995, "file_creation_time": 1760003437, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "94a5d839-0858-4e7b-94a4-0a54b15338db", "db_session_id": "M9CZJU0HKVV71NP1SGV8", "orig_file_number": 29, "seqno_to_time_mapping": "N/A"}}
Oct  9 09:50:38 compute-1 ceph-mon[9795]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 13] Flush lasted 14448 microseconds, and 10438 cpu microseconds.
Oct  9 09:50:38 compute-1 ceph-mon[9795]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  9 09:50:38 compute-1 ceph-mon[9795]: rocksdb: (Original Log Time 2025/10/09-09:50:38.012081) [db/flush_job.cc:967] [default] [JOB 13] Level-0 flush table #29: 8291926 bytes OK
Oct  9 09:50:38 compute-1 ceph-mon[9795]: rocksdb: (Original Log Time 2025/10/09-09:50:38.012093) [db/memtable_list.cc:519] [default] Level-0 commit table #29 started
Oct  9 09:50:38 compute-1 ceph-mon[9795]: rocksdb: (Original Log Time 2025/10/09-09:50:38.013608) [db/memtable_list.cc:722] [default] Level-0 commit table #29: memtable #1 done
Oct  9 09:50:38 compute-1 ceph-mon[9795]: rocksdb: (Original Log Time 2025/10/09-09:50:38.013619) EVENT_LOG_v1 {"time_micros": 1760003438013616, "job": 13, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct  9 09:50:38 compute-1 ceph-mon[9795]: rocksdb: (Original Log Time 2025/10/09-09:50:38.013631) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct  9 09:50:38 compute-1 ceph-mon[9795]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 13] Try to delete WAL files size 12758120, prev total WAL file size 12758120, number of live WAL files 2.
Oct  9 09:50:38 compute-1 ceph-mon[9795]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000025.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  9 09:50:38 compute-1 ceph-mon[9795]: rocksdb: (Original Log Time 2025/10/09-09:50:38.015264) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730031303034' seq:72057594037927935, type:22 .. '7061786F730031323536' seq:0, type:0; will stop at (end)
Oct  9 09:50:38 compute-1 ceph-mon[9795]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 14] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct  9 09:50:38 compute-1 ceph-mon[9795]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 13 Base level 0, inputs: [29(8097KB)], [27(11MB)]
Oct  9 09:50:38 compute-1 ceph-mon[9795]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760003438015287, "job": 14, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [29], "files_L6": [27], "score": -1, "input_data_size": 19850120, "oldest_snapshot_seqno": -1}
Oct  9 09:50:38 compute-1 python3.9[161747]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1760003437.5312989-5030-249679790973499/source dest=/etc/systemd/system/edpm_nova_compute.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:50:38 compute-1 ceph-mon[9795]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 14] Generated table #30: 4995 keys, 15244332 bytes, temperature: kUnknown
Oct  9 09:50:38 compute-1 ceph-mon[9795]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760003438058440, "cf_name": "default", "job": 14, "event": "table_file_creation", "file_number": 30, "file_size": 15244332, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 15206185, "index_size": 24533, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 12549, "raw_key_size": 124780, "raw_average_key_size": 24, "raw_value_size": 15110761, "raw_average_value_size": 3025, "num_data_blocks": 1034, "num_entries": 4995, "num_filter_entries": 4995, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760002515, "oldest_key_time": 0, "file_creation_time": 1760003438, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "94a5d839-0858-4e7b-94a4-0a54b15338db", "db_session_id": "M9CZJU0HKVV71NP1SGV8", "orig_file_number": 30, "seqno_to_time_mapping": "N/A"}}
Oct  9 09:50:38 compute-1 ceph-mon[9795]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  9 09:50:38 compute-1 ceph-mon[9795]: rocksdb: (Original Log Time 2025/10/09-09:50:38.058600) [db/compaction/compaction_job.cc:1663] [default] [JOB 14] Compacted 1@0 + 1@6 files to L6 => 15244332 bytes
Oct  9 09:50:38 compute-1 ceph-mon[9795]: rocksdb: (Original Log Time 2025/10/09-09:50:38.060266) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 459.5 rd, 352.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(7.9, 11.0 +0.0 blob) out(14.5 +0.0 blob), read-write-amplify(4.2) write-amplify(1.8) OK, records in: 6018, records dropped: 1023 output_compression: NoCompression
Oct  9 09:50:38 compute-1 ceph-mon[9795]: rocksdb: (Original Log Time 2025/10/09-09:50:38.060279) EVENT_LOG_v1 {"time_micros": 1760003438060273, "job": 14, "event": "compaction_finished", "compaction_time_micros": 43204, "compaction_time_cpu_micros": 21214, "output_level": 6, "num_output_files": 1, "total_output_size": 15244332, "num_input_records": 6018, "num_output_records": 4995, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct  9 09:50:38 compute-1 ceph-mon[9795]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000029.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  9 09:50:38 compute-1 ceph-mon[9795]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760003438061478, "job": 14, "event": "table_file_deletion", "file_number": 29}
Oct  9 09:50:38 compute-1 ceph-mon[9795]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000027.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  9 09:50:38 compute-1 ceph-mon[9795]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760003438063033, "job": 14, "event": "table_file_deletion", "file_number": 27}
Oct  9 09:50:38 compute-1 ceph-mon[9795]: rocksdb: (Original Log Time 2025/10/09-09:50:38.015218) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  9 09:50:38 compute-1 ceph-mon[9795]: rocksdb: (Original Log Time 2025/10/09-09:50:38.063054) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  9 09:50:38 compute-1 ceph-mon[9795]: rocksdb: (Original Log Time 2025/10/09-09:50:38.063056) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  9 09:50:38 compute-1 ceph-mon[9795]: rocksdb: (Original Log Time 2025/10/09-09:50:38.063058) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  9 09:50:38 compute-1 ceph-mon[9795]: rocksdb: (Original Log Time 2025/10/09-09:50:38.063058) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  9 09:50:38 compute-1 ceph-mon[9795]: rocksdb: (Original Log Time 2025/10/09-09:50:38.063059) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  9 09:50:38 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:50:38 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:50:38 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:50:38.254 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:50:38 compute-1 python3.9[161823]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Oct  9 09:50:38 compute-1 systemd[1]: Reloading.
Oct  9 09:50:38 compute-1 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  9 09:50:38 compute-1 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  9 09:50:39 compute-1 python3.9[161934]: ansible-systemd Invoked with state=restarted name=edpm_nova_compute.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  9 09:50:39 compute-1 systemd[1]: Reloading.
Oct  9 09:50:39 compute-1 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  9 09:50:39 compute-1 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  9 09:50:39 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:50:39 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:50:39 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:50:39.395 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:50:39 compute-1 systemd[1]: Starting nova_compute container...
Oct  9 09:50:39 compute-1 systemd[1]: Started libcrun container.
Oct  9 09:50:39 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fd554087b96bac4aba4050d44e17fa5d9c4a47a8203f9794e9d219f5f40fa59d/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Oct  9 09:50:39 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fd554087b96bac4aba4050d44e17fa5d9c4a47a8203f9794e9d219f5f40fa59d/merged/etc/nvme supports timestamps until 2038 (0x7fffffff)
Oct  9 09:50:39 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fd554087b96bac4aba4050d44e17fa5d9c4a47a8203f9794e9d219f5f40fa59d/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Oct  9 09:50:39 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fd554087b96bac4aba4050d44e17fa5d9c4a47a8203f9794e9d219f5f40fa59d/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Oct  9 09:50:39 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fd554087b96bac4aba4050d44e17fa5d9c4a47a8203f9794e9d219f5f40fa59d/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff)
Oct  9 09:50:39 compute-1 podman[161975]: 2025-10-09 09:50:39.652987642 +0000 UTC m=+0.066963588 container init a9dd31848225f8fe6eed007d5a6504d226f8ec66f0a516f1516ecb5e7e6e18b2 (image=quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:5f179b847f2dc32d9110b8f2be9fe65f1aeada1e18105dffdaf052981215d844, name=nova_compute, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:5f179b847f2dc32d9110b8f2be9fe65f1aeada1e18105dffdaf052981215d844', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, config_id=edpm, container_name=nova_compute, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001)
Oct  9 09:50:39 compute-1 podman[161975]: 2025-10-09 09:50:39.658515149 +0000 UTC m=+0.072491085 container start a9dd31848225f8fe6eed007d5a6504d226f8ec66f0a516f1516ecb5e7e6e18b2 (image=quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:5f179b847f2dc32d9110b8f2be9fe65f1aeada1e18105dffdaf052981215d844, name=nova_compute, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=edpm, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:5f179b847f2dc32d9110b8f2be9fe65f1aeada1e18105dffdaf052981215d844', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, container_name=nova_compute, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Oct  9 09:50:39 compute-1 podman[161975]: nova_compute
Oct  9 09:50:39 compute-1 nova_compute[161987]: + sudo -E kolla_set_configs
Oct  9 09:50:39 compute-1 systemd[1]: Started nova_compute container.
Oct  9 09:50:39 compute-1 nova_compute[161987]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Oct  9 09:50:39 compute-1 nova_compute[161987]: INFO:__main__:Validating config file
Oct  9 09:50:39 compute-1 nova_compute[161987]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Oct  9 09:50:39 compute-1 nova_compute[161987]: INFO:__main__:Copying service configuration files
Oct  9 09:50:39 compute-1 nova_compute[161987]: INFO:__main__:Deleting /etc/nova/nova.conf
Oct  9 09:50:39 compute-1 nova_compute[161987]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf
Oct  9 09:50:39 compute-1 nova_compute[161987]: INFO:__main__:Setting permission for /etc/nova/nova.conf
Oct  9 09:50:39 compute-1 nova_compute[161987]: INFO:__main__:Copying /var/lib/kolla/config_files/01-nova.conf to /etc/nova/nova.conf.d/01-nova.conf
Oct  9 09:50:39 compute-1 nova_compute[161987]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/01-nova.conf
Oct  9 09:50:39 compute-1 nova_compute[161987]: INFO:__main__:Copying /var/lib/kolla/config_files/03-ceph-nova.conf to /etc/nova/nova.conf.d/03-ceph-nova.conf
Oct  9 09:50:39 compute-1 nova_compute[161987]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/03-ceph-nova.conf
Oct  9 09:50:39 compute-1 nova_compute[161987]: INFO:__main__:Copying /var/lib/kolla/config_files/25-nova-extra.conf to /etc/nova/nova.conf.d/25-nova-extra.conf
Oct  9 09:50:39 compute-1 nova_compute[161987]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/25-nova-extra.conf
Oct  9 09:50:39 compute-1 nova_compute[161987]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf.d/nova-blank.conf
Oct  9 09:50:39 compute-1 nova_compute[161987]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/nova-blank.conf
Oct  9 09:50:39 compute-1 nova_compute[161987]: INFO:__main__:Copying /var/lib/kolla/config_files/02-nova-host-specific.conf to /etc/nova/nova.conf.d/02-nova-host-specific.conf
Oct  9 09:50:39 compute-1 nova_compute[161987]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/02-nova-host-specific.conf
Oct  9 09:50:39 compute-1 nova_compute[161987]: INFO:__main__:Deleting /etc/ceph
Oct  9 09:50:39 compute-1 nova_compute[161987]: INFO:__main__:Creating directory /etc/ceph
Oct  9 09:50:39 compute-1 nova_compute[161987]: INFO:__main__:Setting permission for /etc/ceph
Oct  9 09:50:39 compute-1 nova_compute[161987]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.conf to /etc/ceph/ceph.conf
Oct  9 09:50:39 compute-1 nova_compute[161987]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Oct  9 09:50:39 compute-1 nova_compute[161987]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.client.openstack.keyring to /etc/ceph/ceph.client.openstack.keyring
Oct  9 09:50:39 compute-1 nova_compute[161987]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Oct  9 09:50:39 compute-1 nova_compute[161987]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-privatekey to /var/lib/nova/.ssh/ssh-privatekey
Oct  9 09:50:39 compute-1 nova_compute[161987]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Oct  9 09:50:39 compute-1 nova_compute[161987]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-config to /var/lib/nova/.ssh/config
Oct  9 09:50:39 compute-1 nova_compute[161987]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Oct  9 09:50:39 compute-1 nova_compute[161987]: INFO:__main__:Writing out command to execute
Oct  9 09:50:39 compute-1 nova_compute[161987]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Oct  9 09:50:39 compute-1 nova_compute[161987]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Oct  9 09:50:39 compute-1 nova_compute[161987]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/
Oct  9 09:50:39 compute-1 nova_compute[161987]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Oct  9 09:50:39 compute-1 nova_compute[161987]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Oct  9 09:50:39 compute-1 nova_compute[161987]: ++ cat /run_command
Oct  9 09:50:39 compute-1 nova_compute[161987]: + CMD=nova-compute
Oct  9 09:50:39 compute-1 nova_compute[161987]: + ARGS=
Oct  9 09:50:39 compute-1 nova_compute[161987]: + sudo kolla_copy_cacerts
Oct  9 09:50:39 compute-1 nova_compute[161987]: + [[ ! -n '' ]]
Oct  9 09:50:39 compute-1 nova_compute[161987]: + . kolla_extend_start
Oct  9 09:50:39 compute-1 nova_compute[161987]: Running command: 'nova-compute'
Oct  9 09:50:39 compute-1 nova_compute[161987]: + echo 'Running command: '\''nova-compute'\'''
Oct  9 09:50:39 compute-1 nova_compute[161987]: + umask 0022
Oct  9 09:50:39 compute-1 nova_compute[161987]: + exec nova-compute
Oct  9 09:50:40 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:50:40 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:50:40 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:50:40.256 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:50:40 compute-1 python3.9[162149]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner_healthcheck.service follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  9 09:50:40 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:50:41 compute-1 nova_compute[161987]: 2025-10-09 09:50:41.382 2 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_linux_bridge.linux_bridge.LinuxBridgePlugin'>' with name 'linux_bridge' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m
Oct  9 09:50:41 compute-1 nova_compute[161987]: 2025-10-09 09:50:41.382 2 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_noop.noop.NoOpPlugin'>' with name 'noop' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m
Oct  9 09:50:41 compute-1 nova_compute[161987]: 2025-10-09 09:50:41.382 2 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_ovs.ovs.OvsPlugin'>' with name 'ovs' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m
Oct  9 09:50:41 compute-1 nova_compute[161987]: 2025-10-09 09:50:41.382 2 INFO os_vif [-] Loaded VIF plugins: linux_bridge, noop, ovs#033[00m
Oct  9 09:50:41 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:50:41 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000011s ======
Oct  9 09:50:41 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:50:41.396 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct  9 09:50:41 compute-1 python3.9[162299]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner.service follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  9 09:50:41 compute-1 nova_compute[161987]: 2025-10-09 09:50:41.491 2 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): grep -F node.session.scan /sbin/iscsiadm execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  9 09:50:41 compute-1 nova_compute[161987]: 2025-10-09 09:50:41.501 2 DEBUG oslo_concurrency.processutils [-] CMD "grep -F node.session.scan /sbin/iscsiadm" returned: 0 in 0.010s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.035 2 INFO nova.virt.driver [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] Loading compute driver 'libvirt.LibvirtDriver'#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.117 2 INFO nova.compute.provider_config [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] No provider configs found in /etc/nova/provider_config/. If files are present, ensure the Nova process has access.#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.128 2 DEBUG oslo_concurrency.lockutils [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.129 2 DEBUG oslo_concurrency.lockutils [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.129 2 DEBUG oslo_concurrency.lockutils [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.129 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] Full set of CONF: _wait_for_exit_or_signal /usr/lib/python3.9/site-packages/oslo_service/service.py:362#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.129 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.129 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.130 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.130 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] config files: ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.130 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.130 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] allow_resize_to_same_host      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.130 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] arq_binding_timeout            = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.130 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] backdoor_port                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.130 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] backdoor_socket                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.130 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] block_device_allocate_retries  = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.131 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] block_device_allocate_retries_interval = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.131 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] cert                           = self.pem log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.131 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] compute_driver                 = libvirt.LibvirtDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.131 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] compute_monitors               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.131 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] config_dir                     = ['/etc/nova/nova.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.131 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] config_drive_format            = iso9660 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.131 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] config_file                    = ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.132 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.132 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] console_host                   = compute-1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.132 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] control_exchange               = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.132 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] cpu_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.132 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] daemon                         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.132 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.132 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] default_access_ip_network_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.133 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] default_availability_zone      = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.133 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] default_ephemeral_format       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.133 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'glanceclient=WARN', 'oslo.privsep.daemon=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.133 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] default_schedule_zone          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.133 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] disk_allocation_ratio          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.133 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] enable_new_services            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.134 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] enabled_apis                   = ['osapi_compute', 'metadata'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.134 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] enabled_ssl_apis               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.134 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] flat_injected                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.134 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] force_config_drive             = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.134 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] force_raw_images               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.134 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.134 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] heal_instance_info_cache_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.135 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] host                           = compute-1.ctlplane.example.com log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.135 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] initial_cpu_allocation_ratio   = 4.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.135 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] initial_disk_allocation_ratio  = 0.9 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.135 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] initial_ram_allocation_ratio   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.135 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] injected_network_template      = /usr/lib/python3.9/site-packages/nova/virt/interfaces.template log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.135 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] instance_build_timeout         = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.135 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] instance_delete_interval       = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.136 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.136 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] instance_name_template         = instance-%08x log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.136 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] instance_usage_audit           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.136 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] instance_usage_audit_period    = month log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.136 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:42 compute-1 python3.9[162479]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner.service.requires follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.136 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] instances_path                 = /var/lib/nova/instances log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.136 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] internal_service_availability_zone = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.137 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] key                            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.137 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] live_migration_retry_count     = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.137 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.137 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.137 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.137 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.137 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.138 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.138 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.138 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] log_rotation_type              = size log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.138 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.138 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.138 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.138 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.139 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.139 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] long_rpc_timeout               = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.139 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] max_concurrent_builds          = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.139 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] max_concurrent_live_migrations = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.139 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] max_concurrent_snapshots       = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.139 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] max_local_block_devices        = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.139 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] max_logfile_count              = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.139 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] max_logfile_size_mb            = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.140 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] maximum_instance_delete_attempts = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.140 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] metadata_listen                = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.140 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] metadata_listen_port           = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.140 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] metadata_workers               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.140 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] migrate_max_retries            = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.140 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] mkisofs_cmd                    = /usr/bin/mkisofs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.140 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] my_block_storage_ip            = 192.168.122.101 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.141 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] my_ip                          = 192.168.122.101 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.141 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] network_allocate_retries       = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.141 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] non_inheritable_image_properties = ['cache_in_nova', 'bittorrent'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.141 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] osapi_compute_listen           = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.141 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] osapi_compute_listen_port      = 8774 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.141 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] osapi_compute_unique_server_name_scope =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.141 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] osapi_compute_workers          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.142 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] password_length                = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.142 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] periodic_enable                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.142 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] periodic_fuzzy_delay           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.142 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] pointer_model                  = usbtablet log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.142 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] preallocate_images             = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.142 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.142 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] pybasedir                      = /usr/lib/python3.9/site-packages log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.143 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] ram_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.143 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.143 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.143 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.143 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] reboot_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.143 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] reclaim_instance_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.143 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] record                         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.143 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] reimage_timeout_per_gb         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.144 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] report_interval                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.144 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] rescue_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.144 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] reserved_host_cpus             = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.144 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] reserved_host_disk_mb          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.144 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] reserved_host_memory_mb        = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.144 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] reserved_huge_pages            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.144 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] resize_confirm_window          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.145 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] resize_fs_using_block_device   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.145 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] resume_guests_state_on_host_boot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.145 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] rootwrap_config                = /etc/nova/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.145 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] rpc_response_timeout           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.145 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] run_external_periodic_tasks    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.145 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] running_deleted_instance_action = reap log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.145 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] running_deleted_instance_poll_interval = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.145 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] running_deleted_instance_timeout = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.146 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] scheduler_instance_sync_interval = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.146 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] service_down_time              = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.146 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] servicegroup_driver            = db log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.146 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] shelved_offload_time           = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.146 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] shelved_poll_interval          = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.146 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] shutdown_timeout               = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.146 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] source_is_ipv6                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.147 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] ssl_only                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.147 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] state_path                     = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.147 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] sync_power_state_interval      = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.147 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] sync_power_state_pool_size     = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.147 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.147 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] tempdir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.147 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] timeout_nbd                    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.148 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.148 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] update_resources_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.148 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] use_cow_images                 = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.148 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.148 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.148 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.148 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] use_rootwrap_daemon            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.149 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.149 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.149 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] vcpu_pin_set                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.149 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] vif_plugging_is_fatal          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.149 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] vif_plugging_timeout           = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.149 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] virt_mkfs                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.149 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] volume_usage_poll_interval     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.149 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.150 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] web                            = /usr/share/spice-html5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.150 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.150 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] oslo_concurrency.lock_path     = /var/lib/nova/tmp log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.150 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.150 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.150 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.151 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.151 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.151 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] api.auth_strategy              = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.151 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] api.compute_link_prefix        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.151 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] api.config_drive_skip_versions = 1.0 2007-01-19 2007-03-01 2007-08-29 2007-10-10 2007-12-15 2008-02-01 2008-09-01 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.151 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] api.dhcp_domain                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.151 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] api.enable_instance_password   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.152 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] api.glance_link_prefix         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.152 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] api.instance_list_cells_batch_fixed_size = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.152 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] api.instance_list_cells_batch_strategy = distributed log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.152 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] api.instance_list_per_project_cells = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.152 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] api.list_records_by_skipping_down_cells = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.152 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] api.local_metadata_per_cell    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.152 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] api.max_limit                  = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.152 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] api.metadata_cache_expiration  = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.153 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] api.neutron_default_tenant_id  = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.153 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] api.use_forwarded_for          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.153 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] api.use_neutron_default_nets   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.153 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] api.vendordata_dynamic_connect_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.153 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] api.vendordata_dynamic_failure_fatal = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.153 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] api.vendordata_dynamic_read_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.153 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] api.vendordata_dynamic_ssl_certfile =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.154 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] api.vendordata_dynamic_targets = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.154 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] api.vendordata_jsonfile_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.154 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] api.vendordata_providers       = ['StaticJSON'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.154 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] cache.backend                  = oslo_cache.dict log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.154 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] cache.backend_argument         = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.154 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] cache.config_prefix            = cache.oslo log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.154 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] cache.dead_timeout             = 60.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.155 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] cache.debug_cache_backend      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.155 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] cache.enable_retry_client      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.155 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] cache.enable_socket_keepalive  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.155 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] cache.enabled                  = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.155 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] cache.expiration_time          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.155 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] cache.hashclient_retry_attempts = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.155 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] cache.hashclient_retry_delay   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.156 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] cache.memcache_dead_retry      = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.156 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] cache.memcache_password        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.156 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] cache.memcache_pool_connection_get_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.156 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] cache.memcache_pool_flush_on_reconnect = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.156 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] cache.memcache_pool_maxsize    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.156 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] cache.memcache_pool_unused_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.156 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] cache.memcache_sasl_enabled    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.157 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] cache.memcache_servers         = ['localhost:11211'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.157 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] cache.memcache_socket_timeout  = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.157 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] cache.memcache_username        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.157 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] cache.proxies                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.157 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] cache.retry_attempts           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.157 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] cache.retry_delay              = 0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.157 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] cache.socket_keepalive_count   = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.158 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] cache.socket_keepalive_idle    = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.158 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] cache.socket_keepalive_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.158 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] cache.tls_allowed_ciphers      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.158 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] cache.tls_cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.158 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] cache.tls_certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.158 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] cache.tls_enabled              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.158 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] cache.tls_keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.158 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] cinder.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.159 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] cinder.auth_type               = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.159 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] cinder.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.159 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] cinder.catalog_info            = volumev3:cinderv3:internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.159 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] cinder.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.159 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] cinder.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.159 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] cinder.cross_az_attach         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.160 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] cinder.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.160 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] cinder.endpoint_template       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.160 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] cinder.http_retries            = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.160 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] cinder.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.160 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] cinder.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.160 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] cinder.os_region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.160 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] cinder.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.160 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] cinder.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.161 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] compute.consecutive_build_service_disable_threshold = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.161 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] compute.cpu_dedicated_set      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.161 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] compute.cpu_shared_set         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.161 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] compute.image_type_exclude_list = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.161 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] compute.live_migration_wait_for_vif_plug = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.161 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] compute.max_concurrent_disk_ops = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.161 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] compute.max_disk_devices_to_attach = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.162 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] compute.packing_host_numa_cells_allocation_strategy = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.162 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] compute.provider_config_location = /etc/nova/provider_config/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.162 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] compute.resource_provider_association_refresh = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.162 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] compute.shutdown_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.162 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] compute.vmdk_allowed_types     = ['streamOptimized', 'monolithicSparse'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.162 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] conductor.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.162 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] console.allowed_origins        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.163 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] console.ssl_ciphers            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.163 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] console.ssl_minimum_version    = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.163 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] consoleauth.token_ttl          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.163 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] cyborg.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.163 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] cyborg.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.163 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] cyborg.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.163 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] cyborg.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.164 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] cyborg.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.164 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] cyborg.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.164 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] cyborg.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.164 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] cyborg.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.164 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] cyborg.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.164 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] cyborg.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.164 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] cyborg.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.165 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] cyborg.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.165 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] cyborg.service_type            = accelerator log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.165 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] cyborg.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.165 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] cyborg.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.165 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] cyborg.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.165 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] cyborg.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.165 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] cyborg.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.166 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] cyborg.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.166 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] database.backend               = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.166 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] database.connection            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.166 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] database.connection_debug      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.166 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.166 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.166 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] database.connection_trace      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.166 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.167 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] database.db_max_retries        = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.167 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.167 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] database.db_retry_interval     = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.167 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] database.max_overflow          = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.167 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] database.max_pool_size         = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.167 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] database.max_retries           = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.167 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] database.mysql_enable_ndb      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.168 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] database.mysql_sql_mode        = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.168 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.168 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] database.pool_timeout          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.168 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] database.retry_interval        = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.168 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] database.slave_connection      = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.168 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] database.sqlite_synchronous    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.168 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] api_database.backend           = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.169 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] api_database.connection        = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.169 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] api_database.connection_debug  = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.169 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] api_database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.169 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] api_database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.169 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] api_database.connection_trace  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.169 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] api_database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.169 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] api_database.db_max_retries    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.170 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] api_database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.170 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] api_database.db_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.170 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] api_database.max_overflow      = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.170 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] api_database.max_pool_size     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.170 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] api_database.max_retries       = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.170 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] api_database.mysql_enable_ndb  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.170 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] api_database.mysql_sql_mode    = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.171 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] api_database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.171 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] api_database.pool_timeout      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.171 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] api_database.retry_interval    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.171 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] api_database.slave_connection  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.171 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] api_database.sqlite_synchronous = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.171 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] devices.enabled_mdev_types     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.171 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] ephemeral_storage_encryption.cipher = aes-xts-plain64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.171 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] ephemeral_storage_encryption.enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.172 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] ephemeral_storage_encryption.key_size = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.172 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] glance.api_servers             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.172 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] glance.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.172 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] glance.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.172 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] glance.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.172 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] glance.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.172 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] glance.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.173 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] glance.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.173 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] glance.default_trusted_certificate_ids = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.173 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] glance.enable_certificate_validation = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.173 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] glance.enable_rbd_download     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.173 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] glance.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.173 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] glance.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.173 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] glance.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.174 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] glance.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.174 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] glance.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.174 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] glance.num_retries             = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.174 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] glance.rbd_ceph_conf           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.174 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] glance.rbd_connect_timeout     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.174 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] glance.rbd_pool                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.174 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] glance.rbd_user                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.175 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] glance.region_name             = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.175 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] glance.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.175 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] glance.service_type            = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.175 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] glance.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.175 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] glance.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.175 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] glance.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.175 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] glance.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.175 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] glance.valid_interfaces        = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.176 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] glance.verify_glance_signatures = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.176 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] glance.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.176 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] guestfs.debug                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.176 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] hyperv.config_drive_cdrom      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.176 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] hyperv.config_drive_inject_password = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.176 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] hyperv.dynamic_memory_ratio    = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.176 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] hyperv.enable_instance_metrics_collection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.177 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] hyperv.enable_remotefx         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.177 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] hyperv.instances_path_share    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.177 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] hyperv.iscsi_initiator_list    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.177 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] hyperv.limit_cpu_features      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.177 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] hyperv.mounted_disk_query_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.177 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] hyperv.mounted_disk_query_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.177 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] hyperv.power_state_check_timeframe = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.178 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] hyperv.power_state_event_polling_interval = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.178 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] hyperv.qemu_img_cmd            = qemu-img.exe log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.178 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] hyperv.use_multipath_io        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.178 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] hyperv.volume_attach_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.178 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] hyperv.volume_attach_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.178 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] hyperv.vswitch_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.178 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] hyperv.wait_soft_reboot_seconds = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.178 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] mks.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.179 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] mks.mksproxy_base_url          = http://127.0.0.1:6090/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.179 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] image_cache.manager_interval   = 2400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.179 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] image_cache.precache_concurrency = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.179 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] image_cache.remove_unused_base_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.179 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] image_cache.remove_unused_original_minimum_age_seconds = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.180 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] image_cache.remove_unused_resized_minimum_age_seconds = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.180 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] image_cache.subdirectory_name  = _base log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.180 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] ironic.api_max_retries         = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.180 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] ironic.api_retry_interval      = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.180 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.180 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.180 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.181 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.181 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.181 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.181 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.181 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.181 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.181 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.182 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.182 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.182 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] ironic.partition_key           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.182 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] ironic.peer_list               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.182 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.182 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] ironic.serial_console_state_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.182 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.182 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] ironic.service_type            = baremetal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.183 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.183 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.183 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.183 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.183 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] ironic.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.183 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.183 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] key_manager.backend            = barbican log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.184 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] key_manager.fixed_key          = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.184 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] barbican.auth_endpoint         = http://localhost/identity/v3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.184 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] barbican.barbican_api_version  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.184 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] barbican.barbican_endpoint     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.184 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] barbican.barbican_endpoint_type = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.184 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] barbican.barbican_region_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.184 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] barbican.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.185 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] barbican.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.185 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] barbican.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.185 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] barbican.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.185 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] barbican.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.185 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] barbican.number_of_retries     = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.185 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] barbican.retry_delay           = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.185 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] barbican.send_service_user_token = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.186 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] barbican.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.186 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] barbican.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.186 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] barbican.verify_ssl            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.186 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] barbican.verify_ssl_path       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.186 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] barbican_service_user.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.186 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] barbican_service_user.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.186 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] barbican_service_user.cafile   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.187 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] barbican_service_user.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.187 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] barbican_service_user.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.187 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] barbican_service_user.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.187 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] barbican_service_user.keyfile  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.187 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] barbican_service_user.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.187 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] barbican_service_user.timeout  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.187 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] vault.approle_role_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.188 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] vault.approle_secret_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.188 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] vault.cafile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.188 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] vault.certfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.188 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] vault.collect_timing           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.188 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] vault.insecure                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.188 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] vault.keyfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.188 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] vault.kv_mountpoint            = secret log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.188 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] vault.kv_version               = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.189 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] vault.namespace                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.189 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] vault.root_token_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.189 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] vault.split_loggers            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.189 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] vault.ssl_ca_crt_file          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.189 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] vault.timeout                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.189 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] vault.use_ssl                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.189 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] vault.vault_url                = http://127.0.0.1:8200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.190 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] keystone.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.190 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] keystone.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.190 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] keystone.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.190 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] keystone.connect_retries       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.190 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] keystone.connect_retry_delay   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.190 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] keystone.endpoint_override     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.190 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] keystone.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.191 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] keystone.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.191 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] keystone.max_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.191 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] keystone.min_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.191 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] keystone.region_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.191 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] keystone.service_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.191 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] keystone.service_type          = identity log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.191 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] keystone.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.192 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] keystone.status_code_retries   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.192 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] keystone.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.192 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] keystone.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.192 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] keystone.valid_interfaces      = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.192 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] keystone.version               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.192 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] libvirt.connection_uri         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.192 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] libvirt.cpu_mode               = host-model log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.193 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] libvirt.cpu_model_extra_flags  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.193 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] libvirt.cpu_models             = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.193 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] libvirt.cpu_power_governor_high = performance log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.193 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] libvirt.cpu_power_governor_low = powersave log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.193 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] libvirt.cpu_power_management   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.193 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] libvirt.cpu_power_management_strategy = cpu_state log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.194 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] libvirt.device_detach_attempts = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.194 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] libvirt.device_detach_timeout  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.194 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] libvirt.disk_cachemodes        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.194 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] libvirt.disk_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.194 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] libvirt.enabled_perf_events    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.194 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] libvirt.file_backed_memory     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.194 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] libvirt.gid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.194 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] libvirt.hw_disk_discard        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.195 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] libvirt.hw_machine_type        = ['x86_64=q35'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.195 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] libvirt.images_rbd_ceph_conf   = /etc/ceph/ceph.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.195 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] libvirt.images_rbd_glance_copy_poll_interval = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.195 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] libvirt.images_rbd_glance_copy_timeout = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.195 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] libvirt.images_rbd_glance_store_name = default_backend log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.195 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] libvirt.images_rbd_pool        = vms log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.195 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] libvirt.images_type            = rbd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.196 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] libvirt.images_volume_group    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.196 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] libvirt.inject_key             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.196 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] libvirt.inject_partition       = -2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.196 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] libvirt.inject_password        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.196 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] libvirt.iscsi_iface            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.196 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] libvirt.iser_use_multipath     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.196 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] libvirt.live_migration_bandwidth = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.197 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] libvirt.live_migration_completion_timeout = 800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.197 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] libvirt.live_migration_downtime = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.197 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] libvirt.live_migration_downtime_delay = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.197 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] libvirt.live_migration_downtime_steps = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.197 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] libvirt.live_migration_inbound_addr = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.197 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] libvirt.live_migration_permit_auto_converge = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.197 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] libvirt.live_migration_permit_post_copy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.198 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] libvirt.live_migration_scheme  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.198 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] libvirt.live_migration_timeout_action = force_complete log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.198 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] libvirt.live_migration_tunnelled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.198 2 WARNING oslo_config.cfg [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] Deprecated: Option "live_migration_uri" from group "libvirt" is deprecated for removal (
Oct  9 09:50:42 compute-1 nova_compute[161987]: live_migration_uri is deprecated for removal in favor of two other options that
Oct  9 09:50:42 compute-1 nova_compute[161987]: allow to change live migration scheme and target URI: ``live_migration_scheme``
Oct  9 09:50:42 compute-1 nova_compute[161987]: and ``live_migration_inbound_addr`` respectively.
Oct  9 09:50:42 compute-1 nova_compute[161987]: ).  Its value may be silently ignored in the future.#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.198 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] libvirt.live_migration_uri     = qemu+tls://%s/system log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.198 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] libvirt.live_migration_with_native_tls = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.199 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] libvirt.max_queues             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.199 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] libvirt.mem_stats_period_seconds = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.199 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] libvirt.nfs_mount_options      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.199 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] libvirt.nfs_mount_point_base   = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.199 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] libvirt.num_aoe_discover_tries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.199 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] libvirt.num_iser_scan_tries    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.199 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] libvirt.num_memory_encrypted_guests = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.200 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] libvirt.num_nvme_discover_tries = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.200 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] libvirt.num_pcie_ports         = 24 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.200 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] libvirt.num_volume_scan_tries  = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.200 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] libvirt.pmem_namespaces        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.200 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] libvirt.quobyte_client_cfg     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.200 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] libvirt.quobyte_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.200 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] libvirt.rbd_connect_timeout    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.201 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] libvirt.rbd_destroy_volume_retries = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.201 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] libvirt.rbd_destroy_volume_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.201 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] libvirt.rbd_secret_uuid        = 286f8bf0-da72-5823-9a4e-ac4457d9e609 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.201 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] libvirt.rbd_user               = openstack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.201 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] libvirt.realtime_scheduler_priority = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.201 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] libvirt.remote_filesystem_transport = ssh log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.201 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] libvirt.rescue_image_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.202 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] libvirt.rescue_kernel_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.202 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] libvirt.rescue_ramdisk_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.202 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] libvirt.rng_dev_path           = /dev/urandom log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.202 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] libvirt.rx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.202 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] libvirt.smbfs_mount_options    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.202 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] libvirt.smbfs_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.202 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] libvirt.snapshot_compression   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.203 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] libvirt.snapshot_image_format  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.203 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] libvirt.snapshots_directory    = /var/lib/nova/instances/snapshots log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.203 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] libvirt.sparse_logical_volumes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.203 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] libvirt.swtpm_enabled          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.203 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] libvirt.swtpm_group            = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.203 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] libvirt.swtpm_user             = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.204 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] libvirt.sysinfo_serial         = unique log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.204 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] libvirt.tx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.204 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] libvirt.uid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.204 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] libvirt.use_virtio_for_bridges = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.204 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] libvirt.virt_type              = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.204 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] libvirt.volume_clear           = zero log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.204 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] libvirt.volume_clear_size      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.205 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] libvirt.volume_use_multipath   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.205 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] libvirt.vzstorage_cache_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.205 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] libvirt.vzstorage_log_path     = /var/log/vstorage/%(cluster_name)s/nova.log.gz log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.205 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] libvirt.vzstorage_mount_group  = qemu log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.205 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] libvirt.vzstorage_mount_opts   = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.205 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] libvirt.vzstorage_mount_perms  = 0770 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.205 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] libvirt.vzstorage_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.206 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] libvirt.vzstorage_mount_user   = stack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.206 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] libvirt.wait_soft_reboot_seconds = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.206 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] neutron.auth_section           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.206 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] neutron.auth_type              = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.206 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] neutron.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.206 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] neutron.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.206 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] neutron.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.207 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] neutron.connect_retries        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.207 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] neutron.connect_retry_delay    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.207 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] neutron.default_floating_pool  = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.207 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] neutron.endpoint_override      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.207 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] neutron.extension_sync_interval = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.207 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] neutron.http_retries           = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.207 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] neutron.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.208 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] neutron.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.208 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] neutron.max_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.208 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] neutron.metadata_proxy_shared_secret = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.208 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] neutron.min_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.208 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] neutron.ovs_bridge             = br-int log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.208 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] neutron.physnets               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.208 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] neutron.region_name            = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.209 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] neutron.service_metadata_proxy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.209 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] neutron.service_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.209 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] neutron.service_type           = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.209 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] neutron.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.209 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] neutron.status_code_retries    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.209 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] neutron.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.209 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] neutron.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.210 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] neutron.valid_interfaces       = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.210 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] neutron.version                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.210 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] notifications.bdms_in_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.210 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] notifications.default_level    = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.210 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] notifications.notification_format = unversioned log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.210 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] notifications.notify_on_state_change = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.210 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] notifications.versioned_notifications_topics = ['versioned_notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.211 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] pci.alias                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.211 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] pci.device_spec                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.211 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] pci.report_in_placement        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.211 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.211 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] placement.auth_type            = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.211 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] placement.auth_url             = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.211 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.211 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.212 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.212 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] placement.connect_retries      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.212 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] placement.connect_retry_delay  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.212 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] placement.default_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.212 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] placement.default_domain_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.212 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] placement.domain_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.212 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] placement.domain_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.213 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] placement.endpoint_override    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.213 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.213 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.213 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] placement.max_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.213 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] placement.min_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.213 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] placement.password             = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.213 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] placement.project_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.214 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] placement.project_domain_name  = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.214 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] placement.project_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.214 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] placement.project_name         = service log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.214 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] placement.region_name          = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.214 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] placement.service_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.214 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] placement.service_type         = placement log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.214 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.215 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] placement.status_code_retries  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.215 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] placement.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.215 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] placement.system_scope         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.215 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.215 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] placement.trust_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.215 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] placement.user_domain_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.215 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] placement.user_domain_name     = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.215 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] placement.user_id              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.216 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] placement.username             = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.216 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] placement.valid_interfaces     = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.216 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] placement.version              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.216 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] quota.cores                    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.216 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] quota.count_usage_from_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.216 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] quota.driver                   = nova.quota.DbQuotaDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.217 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] quota.injected_file_content_bytes = 10240 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.217 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] quota.injected_file_path_length = 255 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.217 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] quota.injected_files           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.217 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] quota.instances                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.217 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] quota.key_pairs                = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.217 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] quota.metadata_items           = 128 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.217 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] quota.ram                      = 51200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.217 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] quota.recheck_quota            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.218 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] quota.server_group_members     = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.218 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] quota.server_groups            = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.218 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] rdp.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.218 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] rdp.html5_proxy_base_url       = http://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.218 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] scheduler.discover_hosts_in_cells_interval = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.218 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] scheduler.enable_isolated_aggregate_filtering = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.219 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] scheduler.image_metadata_prefilter = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.219 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] scheduler.limit_tenants_to_placement_aggregate = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.219 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] scheduler.max_attempts         = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.219 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] scheduler.max_placement_results = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.219 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] scheduler.placement_aggregate_required_for_tenants = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.219 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] scheduler.query_placement_for_availability_zone = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.219 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] scheduler.query_placement_for_image_type_support = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.220 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] scheduler.query_placement_for_routed_network_aggregates = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.220 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] scheduler.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.220 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] filter_scheduler.aggregate_image_properties_isolation_namespace = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.220 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] filter_scheduler.aggregate_image_properties_isolation_separator = . log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.220 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] filter_scheduler.available_filters = ['nova.scheduler.filters.all_filters'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.220 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] filter_scheduler.build_failure_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.220 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] filter_scheduler.cpu_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.221 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] filter_scheduler.cross_cell_move_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.221 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] filter_scheduler.disk_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.221 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] filter_scheduler.enabled_filters = ['ComputeFilter', 'ComputeCapabilitiesFilter', 'ImagePropertiesFilter', 'ServerGroupAntiAffinityFilter', 'ServerGroupAffinityFilter'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.221 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] filter_scheduler.host_subset_size = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.221 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] filter_scheduler.image_properties_default_architecture = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.221 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] filter_scheduler.io_ops_weight_multiplier = -1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.221 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] filter_scheduler.isolated_hosts = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.222 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] filter_scheduler.isolated_images = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.222 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] filter_scheduler.max_instances_per_host = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.222 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] filter_scheduler.max_io_ops_per_host = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.222 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] filter_scheduler.pci_in_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.222 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] filter_scheduler.pci_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.222 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] filter_scheduler.ram_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.222 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] filter_scheduler.restrict_isolated_hosts_to_isolated_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.223 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] filter_scheduler.shuffle_best_same_weighed_hosts = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.223 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] filter_scheduler.soft_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.223 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] filter_scheduler.soft_anti_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.223 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] filter_scheduler.track_instance_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.223 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] filter_scheduler.weight_classes = ['nova.scheduler.weights.all_weighers'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.223 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] metrics.required               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.223 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] metrics.weight_multiplier      = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.224 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] metrics.weight_of_unavailable  = -10000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.224 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] metrics.weight_setting         = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.224 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] serial_console.base_url        = ws://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.224 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] serial_console.enabled         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.224 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] serial_console.port_range      = 10000:20000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.224 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] serial_console.proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.225 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] serial_console.serialproxy_host = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.225 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] serial_console.serialproxy_port = 6083 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.225 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] service_user.auth_section      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.225 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] service_user.auth_type         = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.225 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] service_user.cafile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.225 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] service_user.certfile          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.225 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] service_user.collect_timing    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.225 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] service_user.insecure          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.226 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] service_user.keyfile           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.226 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] service_user.send_service_user_token = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.226 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] service_user.split_loggers     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.226 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] service_user.timeout           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.226 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] spice.agent_enabled            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.226 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] spice.enabled                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.227 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] spice.html5proxy_base_url      = http://127.0.0.1:6082/spice_auto.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.227 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] spice.html5proxy_host          = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.227 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] spice.html5proxy_port          = 6082 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.227 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] spice.image_compression        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.227 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] spice.jpeg_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.227 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] spice.playback_compression     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.227 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] spice.server_listen            = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.228 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] spice.server_proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.228 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] spice.streaming_mode           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.228 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] spice.zlib_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.228 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] upgrade_levels.baseapi         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.228 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] upgrade_levels.cert            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.228 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] upgrade_levels.compute         = auto log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.228 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] upgrade_levels.conductor       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.228 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] upgrade_levels.scheduler       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.229 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] vendordata_dynamic_auth.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.229 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] vendordata_dynamic_auth.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.229 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] vendordata_dynamic_auth.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.229 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] vendordata_dynamic_auth.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.229 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] vendordata_dynamic_auth.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.229 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] vendordata_dynamic_auth.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.229 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] vendordata_dynamic_auth.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.230 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] vendordata_dynamic_auth.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.230 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] vendordata_dynamic_auth.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.230 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] vmware.api_retry_count         = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.230 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] vmware.ca_file                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.230 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] vmware.cache_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.230 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] vmware.cluster_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.230 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] vmware.connection_pool_size    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.231 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] vmware.console_delay_seconds   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.231 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] vmware.datastore_regex         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.231 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] vmware.host_ip                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.231 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] vmware.host_password           = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.231 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] vmware.host_port               = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.231 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] vmware.host_username           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.231 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] vmware.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.231 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] vmware.integration_bridge      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.232 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] vmware.maximum_objects         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.232 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] vmware.pbm_default_policy      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.232 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] vmware.pbm_enabled             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.232 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] vmware.pbm_wsdl_location       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.232 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] vmware.serial_log_dir          = /opt/vmware/vspc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.232 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] vmware.serial_port_proxy_uri   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.232 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] vmware.serial_port_service_uri = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.233 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] vmware.task_poll_interval      = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.233 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] vmware.use_linked_clone        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.233 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] vmware.vnc_keymap              = en-us log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.233 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] vmware.vnc_port                = 5900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.233 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] vmware.vnc_port_total          = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.233 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] vnc.auth_schemes               = ['none'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.233 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] vnc.enabled                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.234 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] vnc.novncproxy_base_url        = https://nova-novncproxy-cell1-public-openstack.apps-crc.testing/vnc_lite.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.234 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] vnc.novncproxy_host            = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.234 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] vnc.novncproxy_port            = 6080 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.234 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] vnc.server_listen              = ::0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.234 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] vnc.server_proxyclient_address = 192.168.122.101 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.234 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] vnc.vencrypt_ca_certs          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.235 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] vnc.vencrypt_client_cert       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.235 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] vnc.vencrypt_client_key        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.235 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] workarounds.disable_compute_service_check_for_ffu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.235 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] workarounds.disable_deep_image_inspection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.235 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] workarounds.disable_fallback_pcpu_query = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.235 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] workarounds.disable_group_policy_check_upcall = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.235 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] workarounds.disable_libvirt_livesnapshot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.236 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] workarounds.disable_rootwrap   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.236 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] workarounds.enable_numa_live_migration = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.236 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] workarounds.enable_qemu_monitor_announce_self = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.236 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] workarounds.ensure_libvirt_rbd_instance_dir_cleanup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.236 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] workarounds.handle_virt_lifecycle_events = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.236 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] workarounds.libvirt_disable_apic = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.236 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] workarounds.never_download_image_if_on_rbd = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.237 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] workarounds.qemu_monitor_announce_self_count = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.237 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] workarounds.qemu_monitor_announce_self_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.237 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] workarounds.reserve_disk_resource_for_image_cache = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.237 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] workarounds.skip_cpu_compare_at_startup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.237 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] workarounds.skip_cpu_compare_on_dest = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.237 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] workarounds.skip_hypervisor_version_check_on_lm = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.237 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] workarounds.skip_reserve_in_use_ironic_nodes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.238 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] workarounds.unified_limits_count_pcpu_as_vcpu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.238 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] workarounds.wait_for_vif_plugged_event_during_hard_reboot = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.238 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] wsgi.api_paste_config          = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.238 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] wsgi.client_socket_timeout     = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.238 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] wsgi.default_pool_size         = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.238 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] wsgi.keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.238 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] wsgi.max_header_line           = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.238 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] wsgi.secure_proxy_ssl_header   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.239 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] wsgi.ssl_ca_file               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.239 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] wsgi.ssl_cert_file             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.239 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] wsgi.ssl_key_file              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.239 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] wsgi.tcp_keepidle              = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.239 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] wsgi.wsgi_log_format           = %(client_ip)s "%(request_line)s" status: %(status_code)s len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.239 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] zvm.ca_file                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.239 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] zvm.cloud_connector_url        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.240 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] zvm.image_tmp_path             = /var/lib/nova/images log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.240 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] zvm.reachable_timeout          = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.240 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] oslo_policy.enforce_new_defaults = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.240 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] oslo_policy.enforce_scope      = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.240 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.240 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.241 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.241 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.241 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.241 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.241 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.241 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.241 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.242 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.242 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] remote_debug.host              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.242 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] remote_debug.port              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.242 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.242 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] oslo_messaging_rabbit.amqp_durable_queues = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.242 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.242 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.243 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.243 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.243 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.243 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.243 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.243 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.243 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.243 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.244 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.244 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.244 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.244 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.244 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.244 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.244 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.245 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.245 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_queue = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.245 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.245 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.245 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.245 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.245 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.246 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.246 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.246 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.246 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.246 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.246 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] oslo_messaging_notifications.driver = ['noop'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.246 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.247 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.247 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.247 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] oslo_limit.auth_section        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.247 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] oslo_limit.auth_type           = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.247 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] oslo_limit.auth_url            = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.247 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] oslo_limit.cafile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.247 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] oslo_limit.certfile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.248 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] oslo_limit.collect_timing      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.248 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] oslo_limit.connect_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.248 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] oslo_limit.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.248 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] oslo_limit.default_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.248 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] oslo_limit.default_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.248 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] oslo_limit.domain_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.248 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] oslo_limit.domain_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.248 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] oslo_limit.endpoint_id         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.249 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] oslo_limit.endpoint_override   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.249 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] oslo_limit.insecure            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.249 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] oslo_limit.keyfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.249 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] oslo_limit.max_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.249 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] oslo_limit.min_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.249 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] oslo_limit.password            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.249 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] oslo_limit.project_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.250 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] oslo_limit.project_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.250 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] oslo_limit.project_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.250 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] oslo_limit.project_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.250 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] oslo_limit.region_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.250 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] oslo_limit.service_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.250 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] oslo_limit.service_type        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.250 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] oslo_limit.split_loggers       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.251 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] oslo_limit.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.251 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] oslo_limit.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.251 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] oslo_limit.system_scope        = all log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.251 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] oslo_limit.timeout             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.251 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] oslo_limit.trust_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.251 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] oslo_limit.user_domain_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.251 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] oslo_limit.user_domain_name    = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.251 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] oslo_limit.user_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.252 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] oslo_limit.username            = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.252 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] oslo_limit.valid_interfaces    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.252 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] oslo_limit.version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.252 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] oslo_reports.file_event_handler = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.252 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.252 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] oslo_reports.log_dir           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.252 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] vif_plug_linux_bridge_privileged.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.253 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] vif_plug_linux_bridge_privileged.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.253 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] vif_plug_linux_bridge_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.253 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] vif_plug_linux_bridge_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.253 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] vif_plug_linux_bridge_privileged.thread_pool_size = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.253 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] vif_plug_linux_bridge_privileged.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.253 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] vif_plug_ovs_privileged.capabilities = [12, 1] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.253 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] vif_plug_ovs_privileged.group  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.254 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] vif_plug_ovs_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.254 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] vif_plug_ovs_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.254 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] vif_plug_ovs_privileged.thread_pool_size = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.254 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] vif_plug_ovs_privileged.user   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.254 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] os_vif_linux_bridge.flat_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.254 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] os_vif_linux_bridge.forward_bridge_interface = ['all'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.254 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] os_vif_linux_bridge.iptables_bottom_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.255 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] os_vif_linux_bridge.iptables_drop_action = DROP log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.255 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] os_vif_linux_bridge.iptables_top_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.255 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] os_vif_linux_bridge.network_device_mtu = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.255 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] os_vif_linux_bridge.use_ipv6   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.255 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] os_vif_linux_bridge.vlan_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.255 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] os_vif_ovs.isolate_vif         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.255 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] os_vif_ovs.network_device_mtu  = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.256 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] os_vif_ovs.ovs_vsctl_timeout   = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.256 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] os_vif_ovs.ovsdb_connection    = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.256 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] os_vif_ovs.ovsdb_interface     = native log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.256 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] os_vif_ovs.per_port_bridge     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.256 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] os_brick.lock_path             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.256 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] os_brick.wait_mpath_device_attempts = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.256 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] os_brick.wait_mpath_device_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.257 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] privsep_osbrick.capabilities   = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.257 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] privsep_osbrick.group          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.257 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] privsep_osbrick.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.257 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] privsep_osbrick.logger_name    = os_brick.privileged log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.257 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] privsep_osbrick.thread_pool_size = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.257 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] privsep_osbrick.user           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.257 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] nova_sys_admin.capabilities    = [0, 1, 2, 3, 12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.257 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] nova_sys_admin.group           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.258 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] nova_sys_admin.helper_command  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.258 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] nova_sys_admin.logger_name     = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.258 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] nova_sys_admin.thread_pool_size = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:50:42 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:50:42 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:50:42.258 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.258 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] nova_sys_admin.user            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.259 2 DEBUG oslo_service.service [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.259 2 INFO nova.service [-] Starting compute node (version 27.5.2-0.20250829104910.6f8decf.el9)#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.272 2 DEBUG nova.virt.libvirt.host [None req-d433b75c-6fcd-48db-8c1e-371c23c6c144 - - - - - -] Starting native event thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:492#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.273 2 DEBUG nova.virt.libvirt.host [None req-d433b75c-6fcd-48db-8c1e-371c23c6c144 - - - - - -] Starting green dispatch thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:498#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.273 2 DEBUG nova.virt.libvirt.host [None req-d433b75c-6fcd-48db-8c1e-371c23c6c144 - - - - - -] Starting connection event dispatch thread initialize /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:620#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.273 2 DEBUG nova.virt.libvirt.host [None req-d433b75c-6fcd-48db-8c1e-371c23c6c144 - - - - - -] Connecting to libvirt: qemu:///system _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:503#033[00m
Oct  9 09:50:42 compute-1 systemd[1]: Starting libvirt QEMU daemon...
Oct  9 09:50:42 compute-1 systemd[1]: Started libvirt QEMU daemon.
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.335 2 DEBUG nova.virt.libvirt.host [None req-d433b75c-6fcd-48db-8c1e-371c23c6c144 - - - - - -] Registering for lifecycle events <nova.virt.libvirt.host.Host object at 0x7fc7f6eb76d0> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:509#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.337 2 DEBUG nova.virt.libvirt.host [None req-d433b75c-6fcd-48db-8c1e-371c23c6c144 - - - - - -] Registering for connection events: <nova.virt.libvirt.host.Host object at 0x7fc7f6eb76d0> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:530#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.338 2 INFO nova.virt.libvirt.driver [None req-d433b75c-6fcd-48db-8c1e-371c23c6c144 - - - - - -] Connection event '1' reason 'None'#033[00m
Oct  9 09:50:42 compute-1 podman[162525]: 2025-10-09 09:50:42.347901481 +0000 UTC m=+0.041242262 container health_status 5e1150c93314d5b38815fc8130e03b03cc91771cd442ce724d63cd33a7373f75 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2)
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.349 2 WARNING nova.virt.libvirt.driver [None req-d433b75c-6fcd-48db-8c1e-371c23c6c144 - - - - - -] Cannot update service status on host "compute-1.ctlplane.example.com" since it is not registered.: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host compute-1.ctlplane.example.com could not be found.#033[00m
Oct  9 09:50:42 compute-1 nova_compute[161987]: 2025-10-09 09:50:42.349 2 DEBUG nova.virt.libvirt.volume.mount [None req-d433b75c-6fcd-48db-8c1e-371c23c6c144 - - - - - -] Initialising _HostMountState generation 0 host_up /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/mount.py:130#033[00m
Oct  9 09:50:42 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:50:42 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:50:42 compute-1 python3.9[162700]: ansible-containers.podman.podman_container Invoked with name=nova_nvme_cleaner state=absent executable=podman detach=True debug=False force_restart=False force_delete=True generate_systemd={} image_strict=False recreate=False image=None annotation=None arch=None attach=None authfile=None blkio_weight=None blkio_weight_device=None cap_add=None cap_drop=None cgroup_conf=None cgroup_parent=None cgroupns=None cgroups=None chrootdirs=None cidfile=None cmd_args=None conmon_pidfile=None command=None cpu_period=None cpu_quota=None cpu_rt_period=None cpu_rt_runtime=None cpu_shares=None cpus=None cpuset_cpus=None cpuset_mems=None decryption_key=None delete_depend=None delete_time=None delete_volumes=None detach_keys=None device=None device_cgroup_rule=None device_read_bps=None device_read_iops=None device_write_bps=None device_write_iops=None dns=None dns_option=None dns_search=None entrypoint=None env=None env_file=None env_host=None env_merge=None etc_hosts=None expose=None gidmap=None gpus=None group_add=None group_entry=None healthcheck=None healthcheck_interval=None healthcheck_retries=None healthcheck_start_period=None health_startup_cmd=None health_startup_interval=None health_startup_retries=None health_startup_success=None health_startup_timeout=None healthcheck_timeout=None healthcheck_failure_action=None hooks_dir=None hostname=None hostuser=None http_proxy=None image_volume=None init=None init_ctr=None init_path=None interactive=None ip=None ip6=None ipc=None kernel_memory=None label=None label_file=None log_driver=None log_level=None log_opt=None mac_address=None memory=None memory_reservation=None memory_swap=None memory_swappiness=None mount=None network=None network_aliases=None no_healthcheck=None no_hosts=None oom_kill_disable=None oom_score_adj=None os=None passwd=None passwd_entry=None personality=None pid=None pid_file=None pids_limit=None platform=None pod=None pod_id_file=None preserve_fd=None preserve_fds=None privileged=None publish=None publish_all=None pull=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None rdt_class=None read_only=None read_only_tmpfs=None requires=None restart_policy=None restart_time=None retry=None retry_delay=None rm=None rmi=None rootfs=None seccomp_policy=None secrets=NOT_LOGGING_PARAMETER sdnotify=None security_opt=None shm_size=None shm_size_systemd=None sig_proxy=None stop_signal=None stop_timeout=None stop_time=None subgidname=None subuidname=None sysctl=None systemd=None timeout=None timezone=None tls_verify=None tmpfs=None tty=None uidmap=None ulimit=None umask=None unsetenv=None unsetenv_all=None user=None userns=None uts=None variant=None volume=None volumes_from=None workdir=None
Oct  9 09:50:43 compute-1 nova_compute[161987]: 2025-10-09 09:50:43.029 2 INFO nova.virt.libvirt.host [None req-d433b75c-6fcd-48db-8c1e-371c23c6c144 - - - - - -] Libvirt host capabilities <capabilities>
Oct  9 09:50:43 compute-1 nova_compute[161987]: 
Oct  9 09:50:43 compute-1 rsyslogd[1241]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct  9 09:50:43 compute-1 nova_compute[161987]:  <host>
Oct  9 09:50:43 compute-1 nova_compute[161987]:    <uuid>99ca1aa4-a8fe-49f8-8019-77dd20980206</uuid>
Oct  9 09:50:43 compute-1 nova_compute[161987]:    <cpu>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <arch>x86_64</arch>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model>EPYC-Milan-v1</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <vendor>AMD</vendor>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <microcode version='167776725'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <signature family='25' model='1' stepping='1'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <topology sockets='4' dies='1' clusters='1' cores='1' threads='1'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <maxphysaddr mode='emulate' bits='48'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <feature name='x2apic'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <feature name='tsc-deadline'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <feature name='osxsave'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <feature name='hypervisor'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <feature name='tsc_adjust'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <feature name='ospke'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <feature name='vaes'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <feature name='vpclmulqdq'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <feature name='spec-ctrl'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <feature name='stibp'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <feature name='arch-capabilities'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <feature name='ssbd'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <feature name='cmp_legacy'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <feature name='virt-ssbd'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <feature name='lbrv'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <feature name='tsc-scale'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <feature name='vmcb-clean'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <feature name='pause-filter'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <feature name='pfthreshold'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <feature name='v-vmsave-vmload'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <feature name='vgif'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <feature name='rdctl-no'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <feature name='skip-l1dfl-vmentry'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <feature name='mds-no'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <feature name='pschange-mc-no'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <pages unit='KiB' size='4'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <pages unit='KiB' size='2048'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <pages unit='KiB' size='1048576'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:    </cpu>
Oct  9 09:50:43 compute-1 nova_compute[161987]:    <power_management>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <suspend_mem/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:    </power_management>
Oct  9 09:50:43 compute-1 nova_compute[161987]:    <iommu support='no'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:    <migration_features>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <live/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <uri_transports>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <uri_transport>tcp</uri_transport>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <uri_transport>rdma</uri_transport>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      </uri_transports>
Oct  9 09:50:43 compute-1 nova_compute[161987]:    </migration_features>
Oct  9 09:50:43 compute-1 nova_compute[161987]:    <topology>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <cells num='1'>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <cell id='0'>
Oct  9 09:50:43 compute-1 nova_compute[161987]:          <memory unit='KiB'>7865152</memory>
Oct  9 09:50:43 compute-1 nova_compute[161987]:          <pages unit='KiB' size='4'>1966288</pages>
Oct  9 09:50:43 compute-1 nova_compute[161987]:          <pages unit='KiB' size='2048'>0</pages>
Oct  9 09:50:43 compute-1 nova_compute[161987]:          <pages unit='KiB' size='1048576'>0</pages>
Oct  9 09:50:43 compute-1 nova_compute[161987]:          <distances>
Oct  9 09:50:43 compute-1 nova_compute[161987]:            <sibling id='0' value='10'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:          </distances>
Oct  9 09:50:43 compute-1 nova_compute[161987]:          <cpus num='4'>
Oct  9 09:50:43 compute-1 nova_compute[161987]:            <cpu id='0' socket_id='0' die_id='0' cluster_id='65535' core_id='0' siblings='0'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:            <cpu id='1' socket_id='1' die_id='1' cluster_id='65535' core_id='0' siblings='1'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:            <cpu id='2' socket_id='2' die_id='2' cluster_id='65535' core_id='0' siblings='2'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:            <cpu id='3' socket_id='3' die_id='3' cluster_id='65535' core_id='0' siblings='3'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:          </cpus>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        </cell>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      </cells>
Oct  9 09:50:43 compute-1 nova_compute[161987]:    </topology>
Oct  9 09:50:43 compute-1 nova_compute[161987]:    <cache>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <bank id='0' level='2' type='both' size='512' unit='KiB' cpus='0'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <bank id='1' level='2' type='both' size='512' unit='KiB' cpus='1'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <bank id='2' level='2' type='both' size='512' unit='KiB' cpus='2'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <bank id='3' level='2' type='both' size='512' unit='KiB' cpus='3'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <bank id='0' level='3' type='both' size='16' unit='MiB' cpus='0'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <bank id='1' level='3' type='both' size='16' unit='MiB' cpus='1'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <bank id='2' level='3' type='both' size='16' unit='MiB' cpus='2'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <bank id='3' level='3' type='both' size='16' unit='MiB' cpus='3'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:    </cache>
Oct  9 09:50:43 compute-1 nova_compute[161987]:    <secmodel>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model>selinux</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <doi>0</doi>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <baselabel type='kvm'>system_u:system_r:svirt_t:s0</baselabel>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <baselabel type='qemu'>system_u:system_r:svirt_tcg_t:s0</baselabel>
Oct  9 09:50:43 compute-1 nova_compute[161987]:    </secmodel>
Oct  9 09:50:43 compute-1 nova_compute[161987]:    <secmodel>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model>dac</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <doi>0</doi>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <baselabel type='kvm'>+107:+107</baselabel>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <baselabel type='qemu'>+107:+107</baselabel>
Oct  9 09:50:43 compute-1 nova_compute[161987]:    </secmodel>
Oct  9 09:50:43 compute-1 nova_compute[161987]:  </host>
Oct  9 09:50:43 compute-1 nova_compute[161987]: 
Oct  9 09:50:43 compute-1 nova_compute[161987]:  <guest>
Oct  9 09:50:43 compute-1 nova_compute[161987]:    <os_type>hvm</os_type>
Oct  9 09:50:43 compute-1 nova_compute[161987]:    <arch name='i686'>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <wordsize>32</wordsize>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <emulator>/usr/libexec/qemu-kvm</emulator>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <machine canonical='pc-q35-rhel9.6.0' maxCpus='4096'>q35</machine>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <domain type='qemu'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <domain type='kvm'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:    </arch>
Oct  9 09:50:43 compute-1 nova_compute[161987]:    <features>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <pae/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <nonpae/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <acpi default='on' toggle='yes'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <apic default='on' toggle='no'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <cpuselection/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <deviceboot/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <disksnapshot default='on' toggle='no'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <externalSnapshot/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:    </features>
Oct  9 09:50:43 compute-1 nova_compute[161987]:  </guest>
Oct  9 09:50:43 compute-1 nova_compute[161987]: 
Oct  9 09:50:43 compute-1 nova_compute[161987]:  <guest>
Oct  9 09:50:43 compute-1 nova_compute[161987]:    <os_type>hvm</os_type>
Oct  9 09:50:43 compute-1 nova_compute[161987]:    <arch name='x86_64'>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <wordsize>64</wordsize>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <emulator>/usr/libexec/qemu-kvm</emulator>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <machine canonical='pc-q35-rhel9.6.0' maxCpus='4096'>q35</machine>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <domain type='qemu'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <domain type='kvm'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:    </arch>
Oct  9 09:50:43 compute-1 nova_compute[161987]:    <features>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <acpi default='on' toggle='yes'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <apic default='on' toggle='no'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <cpuselection/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <deviceboot/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <disksnapshot default='on' toggle='no'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <externalSnapshot/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:    </features>
Oct  9 09:50:43 compute-1 nova_compute[161987]:  </guest>
Oct  9 09:50:43 compute-1 nova_compute[161987]: 
Oct  9 09:50:43 compute-1 nova_compute[161987]: </capabilities>
Oct  9 09:50:43 compute-1 nova_compute[161987]: 2025-10-09 09:50:43.034 2 DEBUG nova.virt.libvirt.host [None req-d433b75c-6fcd-48db-8c1e-371c23c6c144 - - - - - -] Getting domain capabilities for i686 via machine types: {'q35', 'pc'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952
Oct  9 09:50:43 compute-1 nova_compute[161987]: 2025-10-09 09:50:43.050 2 DEBUG nova.virt.libvirt.host [None req-d433b75c-6fcd-48db-8c1e-371c23c6c144 - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=q35:
Oct  9 09:50:43 compute-1 nova_compute[161987]: <domainCapabilities>
Oct  9 09:50:43 compute-1 nova_compute[161987]:  <path>/usr/libexec/qemu-kvm</path>
Oct  9 09:50:43 compute-1 nova_compute[161987]:  <domain>kvm</domain>
Oct  9 09:50:43 compute-1 nova_compute[161987]:  <machine>pc-q35-rhel9.6.0</machine>
Oct  9 09:50:43 compute-1 nova_compute[161987]:  <arch>i686</arch>
Oct  9 09:50:43 compute-1 nova_compute[161987]:  <vcpu max='4096'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:  <iothreads supported='yes'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:  <os supported='yes'>
Oct  9 09:50:43 compute-1 nova_compute[161987]:    <enum name='firmware'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:    <loader supported='yes'>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <enum name='type'>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <value>rom</value>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <value>pflash</value>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      </enum>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <enum name='readonly'>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <value>yes</value>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <value>no</value>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      </enum>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <enum name='secure'>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <value>no</value>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      </enum>
Oct  9 09:50:43 compute-1 nova_compute[161987]:    </loader>
Oct  9 09:50:43 compute-1 nova_compute[161987]:  </os>
Oct  9 09:50:43 compute-1 nova_compute[161987]:  <cpu>
Oct  9 09:50:43 compute-1 nova_compute[161987]:    <mode name='host-passthrough' supported='yes'>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <enum name='hostPassthroughMigratable'>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <value>on</value>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <value>off</value>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      </enum>
Oct  9 09:50:43 compute-1 nova_compute[161987]:    </mode>
Oct  9 09:50:43 compute-1 nova_compute[161987]:    <mode name='maximum' supported='yes'>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <enum name='maximumMigratable'>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <value>on</value>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <value>off</value>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      </enum>
Oct  9 09:50:43 compute-1 nova_compute[161987]:    </mode>
Oct  9 09:50:43 compute-1 nova_compute[161987]:    <mode name='host-model' supported='yes'>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model fallback='forbid'>EPYC-Milan</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <vendor>AMD</vendor>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <maxphysaddr mode='passthrough' limit='48'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <feature policy='require' name='x2apic'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <feature policy='require' name='tsc-deadline'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <feature policy='require' name='hypervisor'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <feature policy='require' name='tsc_adjust'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <feature policy='require' name='vaes'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <feature policy='require' name='vpclmulqdq'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <feature policy='require' name='spec-ctrl'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <feature policy='require' name='stibp'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <feature policy='require' name='arch-capabilities'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <feature policy='require' name='ssbd'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <feature policy='require' name='cmp_legacy'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <feature policy='require' name='overflow-recov'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <feature policy='require' name='succor'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <feature policy='require' name='virt-ssbd'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <feature policy='require' name='lbrv'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <feature policy='require' name='tsc-scale'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <feature policy='require' name='vmcb-clean'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <feature policy='require' name='flushbyasid'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <feature policy='require' name='pause-filter'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <feature policy='require' name='pfthreshold'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <feature policy='require' name='v-vmsave-vmload'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <feature policy='require' name='vgif'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <feature policy='require' name='lfence-always-serializing'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <feature policy='require' name='rdctl-no'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <feature policy='require' name='skip-l1dfl-vmentry'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <feature policy='require' name='mds-no'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <feature policy='require' name='pschange-mc-no'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <feature policy='require' name='gds-no'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <feature policy='require' name='rfds-no'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:    </mode>
Oct  9 09:50:43 compute-1 nova_compute[161987]:    <mode name='custom' supported='yes'>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <blockers model='Broadwell'>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='hle'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='rtm'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      </blockers>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <blockers model='Broadwell-IBRS'>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='hle'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='rtm'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      </blockers>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='yes' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='yes' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='no' vendor='Intel'>Broadwell-v1</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <blockers model='Broadwell-v1'>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='hle'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='rtm'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      </blockers>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='yes' vendor='Intel'>Broadwell-v2</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='no' vendor='Intel'>Broadwell-v3</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <blockers model='Broadwell-v3'>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='hle'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='rtm'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      </blockers>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='yes' vendor='Intel'>Broadwell-v4</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <blockers model='Cascadelake-Server'>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512bw'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512cd'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512dq'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512f'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512vl'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512vnni'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='hle'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='rtm'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      </blockers>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <blockers model='Cascadelake-Server-noTSX'>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512bw'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512cd'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512dq'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512f'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512vl'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512vnni'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='ibrs-all'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      </blockers>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <blockers model='Cascadelake-Server-v1'>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512bw'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512cd'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512dq'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512f'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512vl'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512vnni'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='hle'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='rtm'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      </blockers>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <blockers model='Cascadelake-Server-v2'>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512bw'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512cd'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512dq'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512f'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512vl'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512vnni'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='hle'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='ibrs-all'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='rtm'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      </blockers>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <blockers model='Cascadelake-Server-v3'>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512bw'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512cd'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512dq'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512f'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512vl'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512vnni'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='ibrs-all'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      </blockers>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <blockers model='Cascadelake-Server-v4'>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512bw'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512cd'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512dq'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512f'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512vl'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512vnni'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='ibrs-all'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      </blockers>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <blockers model='Cascadelake-Server-v5'>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512bw'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512cd'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512dq'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512f'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512vl'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512vnni'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='ibrs-all'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      </blockers>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <blockers model='Cooperlake'>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512-bf16'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512bw'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512cd'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512dq'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512f'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512vl'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512vnni'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='hle'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='ibrs-all'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='rtm'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='taa-no'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      </blockers>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <blockers model='Cooperlake-v1'>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512-bf16'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512bw'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512cd'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512dq'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512f'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512vl'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512vnni'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='hle'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='ibrs-all'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='rtm'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='taa-no'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      </blockers>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <blockers model='Cooperlake-v2'>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512-bf16'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512bw'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512cd'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512dq'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512f'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512vl'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512vnni'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='hle'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='ibrs-all'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='rtm'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='taa-no'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      </blockers>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <blockers model='Denverton'>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='mpx'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      </blockers>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='no' vendor='Intel'>Denverton-v1</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <blockers model='Denverton-v1'>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='mpx'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      </blockers>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='yes' vendor='Intel'>Denverton-v2</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='yes' vendor='Intel'>Denverton-v3</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='yes' vendor='Hygon'>Dhyana-v2</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <blockers model='EPYC-Genoa'>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='amd-psfd'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='auto-ibrs'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512-bf16'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512-vpopcntdq'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512bitalg'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512bw'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512cd'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512dq'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512f'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512ifma'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512vbmi'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512vbmi2'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512vl'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512vnni'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='gfni'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='la57'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='no-nested-data-bp'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='null-sel-clr-base'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='stibp-always-on'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      </blockers>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <blockers model='EPYC-Genoa-v1'>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='amd-psfd'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='auto-ibrs'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512-bf16'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512-vpopcntdq'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512bitalg'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512bw'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512cd'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512dq'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512f'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512ifma'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512vbmi'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512vbmi2'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512vl'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512vnni'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='gfni'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='la57'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='no-nested-data-bp'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='null-sel-clr-base'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='stibp-always-on'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      </blockers>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='yes' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='yes' vendor='AMD'>EPYC-Milan-v1</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <blockers model='EPYC-Milan-v2'>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='amd-psfd'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='no-nested-data-bp'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='null-sel-clr-base'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='stibp-always-on'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      </blockers>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='yes' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v1</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v2</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v3</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='yes' vendor='AMD'>EPYC-v1</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='yes' vendor='AMD'>EPYC-v2</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='yes' vendor='AMD'>EPYC-v3</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='yes' vendor='AMD'>EPYC-v4</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <blockers model='GraniteRapids'>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='amx-bf16'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='amx-fp16'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='amx-int8'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='amx-tile'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx-vnni'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512-bf16'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512-fp16'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512-vpopcntdq'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512bitalg'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512bw'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512cd'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512dq'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512f'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512ifma'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512vbmi'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512vbmi2'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512vl'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512vnni'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='bus-lock-detect'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='fbsdp-no'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='fsrc'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='fsrs'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='fzrm'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='gfni'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='hle'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='ibrs-all'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='la57'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='mcdt-no'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='pbrsb-no'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='prefetchiti'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='psdp-no'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='rtm'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='sbdr-ssdp-no'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='serialize'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='taa-no'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='tsx-ldtrk'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='xfd'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      </blockers>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <blockers model='GraniteRapids-v1'>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='amx-bf16'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='amx-fp16'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='amx-int8'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='amx-tile'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx-vnni'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512-bf16'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512-fp16'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512-vpopcntdq'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512bitalg'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512bw'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512cd'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512dq'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512f'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512ifma'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512vbmi'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512vbmi2'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512vl'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512vnni'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='bus-lock-detect'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='fbsdp-no'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='fsrc'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='fsrs'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='fzrm'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='gfni'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='hle'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='ibrs-all'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='la57'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='mcdt-no'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='pbrsb-no'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='prefetchiti'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='psdp-no'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='rtm'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='sbdr-ssdp-no'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='serialize'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='taa-no'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='tsx-ldtrk'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='xfd'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      </blockers>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <blockers model='GraniteRapids-v2'>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='amx-bf16'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='amx-fp16'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='amx-int8'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='amx-tile'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx-vnni'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx10'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx10-128'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx10-256'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx10-512'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512-bf16'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512-fp16'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512-vpopcntdq'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512bitalg'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512bw'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512cd'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512dq'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512f'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512ifma'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512vbmi'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512vbmi2'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512vl'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512vnni'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='bus-lock-detect'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='cldemote'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='fbsdp-no'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='fsrc'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='fsrs'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='fzrm'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='gfni'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='hle'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='ibrs-all'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='la57'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='mcdt-no'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='movdir64b'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='movdiri'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='pbrsb-no'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='prefetchiti'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='psdp-no'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='rtm'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='sbdr-ssdp-no'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='serialize'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='ss'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='taa-no'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='tsx-ldtrk'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='xfd'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      </blockers>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <blockers model='Haswell'>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='hle'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='rtm'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      </blockers>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <blockers model='Haswell-IBRS'>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='hle'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='rtm'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      </blockers>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='yes' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='yes' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='no' vendor='Intel'>Haswell-v1</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <blockers model='Haswell-v1'>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='hle'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='rtm'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      </blockers>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='yes' vendor='Intel'>Haswell-v2</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='no' vendor='Intel'>Haswell-v3</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <blockers model='Haswell-v3'>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='hle'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='rtm'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      </blockers>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='yes' vendor='Intel'>Haswell-v4</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <blockers model='Icelake-Server'>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512-vpopcntdq'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512bitalg'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512bw'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512cd'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512dq'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512f'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512vbmi'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512vbmi2'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512vl'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512vnni'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='gfni'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='hle'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='la57'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='rtm'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      </blockers>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <blockers model='Icelake-Server-noTSX'>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512-vpopcntdq'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512bitalg'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512bw'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512cd'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512dq'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512f'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512vbmi'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512vbmi2'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512vl'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512vnni'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='gfni'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='la57'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      </blockers>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <blockers model='Icelake-Server-v1'>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512-vpopcntdq'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512bitalg'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512bw'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512cd'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512dq'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512f'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512vbmi'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512vbmi2'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512vl'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512vnni'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='gfni'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='hle'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='la57'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='rtm'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      </blockers>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <blockers model='Icelake-Server-v2'>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512-vpopcntdq'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512bitalg'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512bw'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512cd'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512dq'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512f'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512vbmi'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512vbmi2'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512vl'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512vnni'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='gfni'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='la57'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      </blockers>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <blockers model='Icelake-Server-v3'>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512-vpopcntdq'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512bitalg'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512bw'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512cd'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512dq'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512f'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512vbmi'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512vbmi2'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512vl'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512vnni'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='gfni'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='ibrs-all'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='la57'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='taa-no'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      </blockers>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <blockers model='Icelake-Server-v4'>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512-vpopcntdq'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512bitalg'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512bw'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512cd'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512dq'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512f'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512ifma'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512vbmi'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512vbmi2'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512vl'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512vnni'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='gfni'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='ibrs-all'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='la57'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='taa-no'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      </blockers>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <blockers model='Icelake-Server-v5'>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512-vpopcntdq'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512bitalg'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512bw'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512cd'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512dq'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512f'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512ifma'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512vbmi'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512vbmi2'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512vl'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512vnni'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='gfni'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='ibrs-all'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='la57'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='taa-no'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      </blockers>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <blockers model='Icelake-Server-v6'>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512-vpopcntdq'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512bitalg'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512bw'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512cd'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512dq'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512f'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512ifma'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512vbmi'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512vbmi2'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512vl'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512vnni'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='gfni'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='ibrs-all'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='la57'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='taa-no'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      </blockers>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <blockers model='Icelake-Server-v7'>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512-vpopcntdq'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512bitalg'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512bw'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512cd'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512dq'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512f'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512ifma'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512vbmi'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512vbmi2'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512vl'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512vnni'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='gfni'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='hle'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='ibrs-all'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='la57'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='rtm'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='taa-no'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      </blockers>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='yes' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='yes' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='yes' vendor='Intel'>IvyBridge-v1</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='yes' vendor='Intel'>IvyBridge-v2</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <blockers model='KnightsMill'>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512-4fmaps'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512-4vnniw'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512-vpopcntdq'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512cd'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512er'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512f'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512pf'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='ss'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      </blockers>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <blockers model='KnightsMill-v1'>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512-4fmaps'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512-4vnniw'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512-vpopcntdq'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512cd'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512er'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512f'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512pf'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='ss'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      </blockers>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <blockers model='Opteron_G4'>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='fma4'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='xop'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      </blockers>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <blockers model='Opteron_G4-v1'>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='fma4'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='xop'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      </blockers>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <blockers model='Opteron_G5'>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='fma4'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='tbm'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='xop'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      </blockers>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <blockers model='Opteron_G5-v1'>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='fma4'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='tbm'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='xop'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      </blockers>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <blockers model='SapphireRapids'>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='amx-bf16'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='amx-int8'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='amx-tile'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx-vnni'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512-bf16'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512-fp16'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512-vpopcntdq'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512bitalg'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512bw'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512cd'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512dq'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512f'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512ifma'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512vbmi'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512vbmi2'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512vl'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512vnni'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='bus-lock-detect'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='fsrc'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='fsrs'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='fzrm'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='gfni'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='hle'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='ibrs-all'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='la57'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='rtm'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='serialize'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='taa-no'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='tsx-ldtrk'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='xfd'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      </blockers>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <blockers model='SapphireRapids-v1'>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='amx-bf16'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='amx-int8'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='amx-tile'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx-vnni'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512-bf16'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512-fp16'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512-vpopcntdq'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512bitalg'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512bw'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512cd'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512dq'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512f'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512ifma'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512vbmi'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512vbmi2'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512vl'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512vnni'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='bus-lock-detect'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='fsrc'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='fsrs'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='fzrm'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='gfni'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='hle'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='ibrs-all'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='la57'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='rtm'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='serialize'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='taa-no'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='tsx-ldtrk'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='xfd'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      </blockers>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <blockers model='SapphireRapids-v2'>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='amx-bf16'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='amx-int8'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='amx-tile'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx-vnni'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512-bf16'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512-fp16'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512-vpopcntdq'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512bitalg'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512bw'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512cd'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512dq'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512f'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512ifma'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512vbmi'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512vbmi2'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512vl'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512vnni'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='bus-lock-detect'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='fbsdp-no'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='fsrc'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='fsrs'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='fzrm'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='gfni'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='hle'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='ibrs-all'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='la57'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='psdp-no'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='rtm'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='sbdr-ssdp-no'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='serialize'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='taa-no'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='tsx-ldtrk'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='xfd'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      </blockers>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <blockers model='SapphireRapids-v3'>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='amx-bf16'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='amx-int8'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='amx-tile'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx-vnni'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512-bf16'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512-fp16'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512-vpopcntdq'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512bitalg'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512bw'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512cd'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512dq'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512f'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512ifma'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512vbmi'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512vbmi2'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512vl'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512vnni'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='bus-lock-detect'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='cldemote'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='fbsdp-no'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='fsrc'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='fsrs'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='fzrm'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='gfni'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='hle'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='ibrs-all'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='la57'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='movdir64b'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='movdiri'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='psdp-no'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='rtm'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='sbdr-ssdp-no'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='serialize'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='ss'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='taa-no'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='tsx-ldtrk'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='xfd'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      </blockers>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <blockers model='SierraForest'>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx-ifma'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx-ne-convert'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx-vnni'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx-vnni-int8'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='bus-lock-detect'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='cmpccxadd'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='fbsdp-no'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='fsrs'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='gfni'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='ibrs-all'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='mcdt-no'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='pbrsb-no'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='psdp-no'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='sbdr-ssdp-no'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='serialize'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      </blockers>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='no' vendor='Intel'>SierraForest-v1</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <blockers model='SierraForest-v1'>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx-ifma'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx-ne-convert'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx-vnni'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx-vnni-int8'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='bus-lock-detect'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='cmpccxadd'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='fbsdp-no'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='fsrs'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='gfni'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='ibrs-all'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='mcdt-no'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='pbrsb-no'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='psdp-no'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='sbdr-ssdp-no'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='serialize'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      </blockers>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <blockers model='Skylake-Client'>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='hle'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='rtm'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      </blockers>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <blockers model='Skylake-Client-IBRS'>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='hle'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='rtm'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      </blockers>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='yes' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <blockers model='Skylake-Client-v1'>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='hle'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='rtm'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      </blockers>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <blockers model='Skylake-Client-v2'>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='hle'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='rtm'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      </blockers>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='yes' vendor='Intel'>Skylake-Client-v3</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='yes' vendor='Intel'>Skylake-Client-v4</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <blockers model='Skylake-Server'>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512bw'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512cd'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512dq'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512f'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512vl'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='hle'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='rtm'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      </blockers>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <blockers model='Skylake-Server-IBRS'>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512bw'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512cd'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512dq'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512f'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512vl'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='hle'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='rtm'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      </blockers>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <blockers model='Skylake-Server-noTSX-IBRS'>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512bw'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512cd'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512dq'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512f'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512vl'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      </blockers>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <blockers model='Skylake-Server-v1'>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512bw'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512cd'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512dq'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512f'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512vl'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='hle'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='rtm'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      </blockers>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <blockers model='Skylake-Server-v2'>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512bw'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512cd'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512dq'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512f'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512vl'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='hle'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='rtm'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      </blockers>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <blockers model='Skylake-Server-v3'>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512bw'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512cd'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512dq'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512f'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512vl'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      </blockers>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <blockers model='Skylake-Server-v4'>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512bw'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512cd'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512dq'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512f'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512vl'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      </blockers>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <blockers model='Skylake-Server-v5'>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512bw'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512cd'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512dq'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512f'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512vl'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      </blockers>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <blockers model='Snowridge'>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='cldemote'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='core-capability'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='gfni'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='movdir64b'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='movdiri'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='mpx'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='split-lock-detect'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      </blockers>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='no' vendor='Intel'>Snowridge-v1</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <blockers model='Snowridge-v1'>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='cldemote'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='core-capability'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='gfni'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='movdir64b'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='movdiri'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='mpx'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='split-lock-detect'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      </blockers>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='no' vendor='Intel'>Snowridge-v2</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <blockers model='Snowridge-v2'>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='cldemote'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='core-capability'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='gfni'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='movdir64b'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='movdiri'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='split-lock-detect'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      </blockers>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='no' vendor='Intel'>Snowridge-v3</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <blockers model='Snowridge-v3'>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='cldemote'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='core-capability'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='gfni'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='movdir64b'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='movdiri'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='split-lock-detect'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      </blockers>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='no' vendor='Intel'>Snowridge-v4</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <blockers model='Snowridge-v4'>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='cldemote'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='gfni'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='movdir64b'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='movdiri'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      </blockers>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='yes' vendor='Intel'>Westmere-v1</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='yes' vendor='Intel'>Westmere-v2</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <blockers model='athlon'>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='3dnow'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='3dnowext'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      </blockers>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <blockers model='athlon-v1'>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='3dnow'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='3dnowext'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      </blockers>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <blockers model='core2duo'>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='ss'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      </blockers>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <blockers model='core2duo-v1'>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='ss'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      </blockers>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <blockers model='coreduo'>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='ss'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      </blockers>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <blockers model='coreduo-v1'>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='ss'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      </blockers>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <blockers model='n270'>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='ss'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      </blockers>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <blockers model='n270-v1'>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='ss'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      </blockers>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <blockers model='phenom'>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='3dnow'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='3dnowext'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      </blockers>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <blockers model='phenom-v1'>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='3dnow'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='3dnowext'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      </blockers>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:    </mode>
Oct  9 09:50:43 compute-1 nova_compute[161987]:  </cpu>
Oct  9 09:50:43 compute-1 nova_compute[161987]:  <memoryBacking supported='yes'>
Oct  9 09:50:43 compute-1 nova_compute[161987]:    <enum name='sourceType'>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <value>file</value>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <value>anonymous</value>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <value>memfd</value>
Oct  9 09:50:43 compute-1 nova_compute[161987]:    </enum>
Oct  9 09:50:43 compute-1 nova_compute[161987]:  </memoryBacking>
Oct  9 09:50:43 compute-1 nova_compute[161987]:  <devices>
Oct  9 09:50:43 compute-1 nova_compute[161987]:    <disk supported='yes'>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <enum name='diskDevice'>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <value>disk</value>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <value>cdrom</value>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <value>floppy</value>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <value>lun</value>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      </enum>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <enum name='bus'>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <value>fdc</value>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <value>scsi</value>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <value>virtio</value>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <value>usb</value>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <value>sata</value>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      </enum>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <enum name='model'>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <value>virtio</value>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <value>virtio-transitional</value>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <value>virtio-non-transitional</value>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      </enum>
Oct  9 09:50:43 compute-1 nova_compute[161987]:    </disk>
Oct  9 09:50:43 compute-1 nova_compute[161987]:    <graphics supported='yes'>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <enum name='type'>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <value>vnc</value>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <value>egl-headless</value>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <value>dbus</value>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      </enum>
Oct  9 09:50:43 compute-1 nova_compute[161987]:    </graphics>
Oct  9 09:50:43 compute-1 nova_compute[161987]:    <video supported='yes'>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <enum name='modelType'>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <value>vga</value>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <value>cirrus</value>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <value>virtio</value>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <value>none</value>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <value>bochs</value>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <value>ramfb</value>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      </enum>
Oct  9 09:50:43 compute-1 nova_compute[161987]:    </video>
Oct  9 09:50:43 compute-1 nova_compute[161987]:    <hostdev supported='yes'>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <enum name='mode'>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <value>subsystem</value>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      </enum>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <enum name='startupPolicy'>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <value>default</value>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <value>mandatory</value>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <value>requisite</value>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <value>optional</value>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      </enum>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <enum name='subsysType'>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <value>usb</value>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <value>pci</value>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <value>scsi</value>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      </enum>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <enum name='capsType'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <enum name='pciBackend'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:    </hostdev>
Oct  9 09:50:43 compute-1 nova_compute[161987]:    <rng supported='yes'>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <enum name='model'>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <value>virtio</value>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <value>virtio-transitional</value>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <value>virtio-non-transitional</value>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      </enum>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <enum name='backendModel'>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <value>random</value>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <value>egd</value>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <value>builtin</value>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      </enum>
Oct  9 09:50:43 compute-1 nova_compute[161987]:    </rng>
Oct  9 09:50:43 compute-1 nova_compute[161987]:    <filesystem supported='yes'>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <enum name='driverType'>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <value>path</value>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <value>handle</value>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <value>virtiofs</value>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      </enum>
Oct  9 09:50:43 compute-1 nova_compute[161987]:    </filesystem>
Oct  9 09:50:43 compute-1 nova_compute[161987]:    <tpm supported='yes'>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <enum name='model'>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <value>tpm-tis</value>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <value>tpm-crb</value>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      </enum>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <enum name='backendModel'>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <value>emulator</value>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <value>external</value>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      </enum>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <enum name='backendVersion'>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <value>2.0</value>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      </enum>
Oct  9 09:50:43 compute-1 nova_compute[161987]:    </tpm>
Oct  9 09:50:43 compute-1 nova_compute[161987]:    <redirdev supported='yes'>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <enum name='bus'>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <value>usb</value>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      </enum>
Oct  9 09:50:43 compute-1 nova_compute[161987]:    </redirdev>
Oct  9 09:50:43 compute-1 nova_compute[161987]:    <channel supported='yes'>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <enum name='type'>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <value>pty</value>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <value>unix</value>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      </enum>
Oct  9 09:50:43 compute-1 nova_compute[161987]:    </channel>
Oct  9 09:50:43 compute-1 nova_compute[161987]:    <crypto supported='yes'>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <enum name='model'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <enum name='type'>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <value>qemu</value>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      </enum>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <enum name='backendModel'>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <value>builtin</value>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      </enum>
Oct  9 09:50:43 compute-1 nova_compute[161987]:    </crypto>
Oct  9 09:50:43 compute-1 nova_compute[161987]:    <interface supported='yes'>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <enum name='backendType'>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <value>default</value>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <value>passt</value>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      </enum>
Oct  9 09:50:43 compute-1 nova_compute[161987]:    </interface>
Oct  9 09:50:43 compute-1 nova_compute[161987]:    <panic supported='yes'>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <enum name='model'>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <value>isa</value>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <value>hyperv</value>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      </enum>
Oct  9 09:50:43 compute-1 nova_compute[161987]:    </panic>
Oct  9 09:50:43 compute-1 nova_compute[161987]:  </devices>
Oct  9 09:50:43 compute-1 nova_compute[161987]:  <features>
Oct  9 09:50:43 compute-1 nova_compute[161987]:    <gic supported='no'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:    <vmcoreinfo supported='yes'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:    <genid supported='yes'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:    <backingStoreInput supported='yes'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:    <backup supported='yes'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:    <async-teardown supported='yes'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:    <ps2 supported='yes'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:    <sev supported='no'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:    <sgx supported='no'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:    <hyperv supported='yes'>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <enum name='features'>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <value>relaxed</value>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <value>vapic</value>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <value>spinlocks</value>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <value>vpindex</value>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <value>runtime</value>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <value>synic</value>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <value>stimer</value>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <value>reset</value>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <value>vendor_id</value>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <value>frequencies</value>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <value>reenlightenment</value>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <value>tlbflush</value>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <value>ipi</value>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <value>avic</value>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <value>emsr_bitmap</value>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <value>xmm_input</value>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      </enum>
Oct  9 09:50:43 compute-1 nova_compute[161987]:    </hyperv>
Oct  9 09:50:43 compute-1 nova_compute[161987]:    <launchSecurity supported='no'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:  </features>
Oct  9 09:50:43 compute-1 nova_compute[161987]: </domainCapabilities>
Oct  9 09:50:43 compute-1 nova_compute[161987]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037#033[00m
Oct  9 09:50:43 compute-1 nova_compute[161987]: 2025-10-09 09:50:43.053 2 DEBUG nova.virt.libvirt.host [None req-d433b75c-6fcd-48db-8c1e-371c23c6c144 - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=pc:
Oct  9 09:50:43 compute-1 nova_compute[161987]: <domainCapabilities>
Oct  9 09:50:43 compute-1 nova_compute[161987]:  <path>/usr/libexec/qemu-kvm</path>
Oct  9 09:50:43 compute-1 nova_compute[161987]:  <domain>kvm</domain>
Oct  9 09:50:43 compute-1 nova_compute[161987]:  <machine>pc-i440fx-rhel7.6.0</machine>
Oct  9 09:50:43 compute-1 nova_compute[161987]:  <arch>i686</arch>
Oct  9 09:50:43 compute-1 nova_compute[161987]:  <vcpu max='240'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:  <iothreads supported='yes'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:  <os supported='yes'>
Oct  9 09:50:43 compute-1 nova_compute[161987]:    <enum name='firmware'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:    <loader supported='yes'>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <enum name='type'>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <value>rom</value>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <value>pflash</value>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      </enum>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <enum name='readonly'>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <value>yes</value>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <value>no</value>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      </enum>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <enum name='secure'>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <value>no</value>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      </enum>
Oct  9 09:50:43 compute-1 nova_compute[161987]:    </loader>
Oct  9 09:50:43 compute-1 nova_compute[161987]:  </os>
Oct  9 09:50:43 compute-1 nova_compute[161987]:  <cpu>
Oct  9 09:50:43 compute-1 nova_compute[161987]:    <mode name='host-passthrough' supported='yes'>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <enum name='hostPassthroughMigratable'>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <value>on</value>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <value>off</value>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      </enum>
Oct  9 09:50:43 compute-1 nova_compute[161987]:    </mode>
Oct  9 09:50:43 compute-1 nova_compute[161987]:    <mode name='maximum' supported='yes'>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <enum name='maximumMigratable'>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <value>on</value>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <value>off</value>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      </enum>
Oct  9 09:50:43 compute-1 nova_compute[161987]:    </mode>
Oct  9 09:50:43 compute-1 nova_compute[161987]:    <mode name='host-model' supported='yes'>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model fallback='forbid'>EPYC-Milan</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <vendor>AMD</vendor>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <maxphysaddr mode='passthrough' limit='48'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <feature policy='require' name='x2apic'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <feature policy='require' name='tsc-deadline'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <feature policy='require' name='hypervisor'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <feature policy='require' name='tsc_adjust'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <feature policy='require' name='vaes'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <feature policy='require' name='vpclmulqdq'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <feature policy='require' name='spec-ctrl'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <feature policy='require' name='stibp'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <feature policy='require' name='arch-capabilities'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <feature policy='require' name='ssbd'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <feature policy='require' name='cmp_legacy'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <feature policy='require' name='overflow-recov'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <feature policy='require' name='succor'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <feature policy='require' name='virt-ssbd'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <feature policy='require' name='lbrv'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <feature policy='require' name='tsc-scale'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <feature policy='require' name='vmcb-clean'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <feature policy='require' name='flushbyasid'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <feature policy='require' name='pause-filter'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <feature policy='require' name='pfthreshold'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <feature policy='require' name='v-vmsave-vmload'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <feature policy='require' name='vgif'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <feature policy='require' name='lfence-always-serializing'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <feature policy='require' name='rdctl-no'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <feature policy='require' name='skip-l1dfl-vmentry'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <feature policy='require' name='mds-no'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <feature policy='require' name='pschange-mc-no'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <feature policy='require' name='gds-no'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <feature policy='require' name='rfds-no'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:    </mode>
Oct  9 09:50:43 compute-1 nova_compute[161987]:    <mode name='custom' supported='yes'>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <blockers model='Broadwell'>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='hle'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='rtm'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      </blockers>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <blockers model='Broadwell-IBRS'>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='hle'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='rtm'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      </blockers>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='yes' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='yes' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='no' vendor='Intel'>Broadwell-v1</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <blockers model='Broadwell-v1'>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='hle'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='rtm'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      </blockers>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='yes' vendor='Intel'>Broadwell-v2</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='no' vendor='Intel'>Broadwell-v3</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <blockers model='Broadwell-v3'>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='hle'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='rtm'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      </blockers>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='yes' vendor='Intel'>Broadwell-v4</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <blockers model='Cascadelake-Server'>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512bw'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512cd'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512dq'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512f'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512vl'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512vnni'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='hle'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='rtm'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      </blockers>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <blockers model='Cascadelake-Server-noTSX'>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512bw'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512cd'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512dq'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512f'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512vl'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512vnni'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='ibrs-all'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      </blockers>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <blockers model='Cascadelake-Server-v1'>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512bw'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512cd'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512dq'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512f'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512vl'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512vnni'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='hle'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='rtm'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      </blockers>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <blockers model='Cascadelake-Server-v2'>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512bw'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512cd'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512dq'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512f'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512vl'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512vnni'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='hle'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='ibrs-all'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='rtm'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      </blockers>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <blockers model='Cascadelake-Server-v3'>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512bw'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512cd'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512dq'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512f'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512vl'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512vnni'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='ibrs-all'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      </blockers>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <blockers model='Cascadelake-Server-v4'>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512bw'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512cd'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512dq'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512f'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512vl'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512vnni'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='ibrs-all'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      </blockers>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <blockers model='Cascadelake-Server-v5'>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512bw'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512cd'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512dq'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512f'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512vl'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512vnni'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='ibrs-all'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      </blockers>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <blockers model='Cooperlake'>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512-bf16'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512bw'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512cd'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512dq'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512f'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512vl'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512vnni'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='hle'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='ibrs-all'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='rtm'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='taa-no'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      </blockers>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <blockers model='Cooperlake-v1'>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512-bf16'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512bw'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512cd'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512dq'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512f'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512vl'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512vnni'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='hle'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='ibrs-all'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='rtm'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='taa-no'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      </blockers>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <blockers model='Cooperlake-v2'>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512-bf16'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512bw'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512cd'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512dq'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512f'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512vl'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512vnni'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='hle'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='ibrs-all'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='rtm'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='taa-no'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      </blockers>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <blockers model='Denverton'>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='mpx'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      </blockers>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='no' vendor='Intel'>Denverton-v1</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <blockers model='Denverton-v1'>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='mpx'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      </blockers>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='yes' vendor='Intel'>Denverton-v2</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='yes' vendor='Intel'>Denverton-v3</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='yes' vendor='Hygon'>Dhyana-v2</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <blockers model='EPYC-Genoa'>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='amd-psfd'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='auto-ibrs'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512-bf16'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512-vpopcntdq'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512bitalg'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512bw'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512cd'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512dq'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512f'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512ifma'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512vbmi'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512vbmi2'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512vl'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512vnni'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='gfni'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='la57'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='no-nested-data-bp'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='null-sel-clr-base'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='stibp-always-on'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      </blockers>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <blockers model='EPYC-Genoa-v1'>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='amd-psfd'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='auto-ibrs'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512-bf16'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512-vpopcntdq'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512bitalg'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512bw'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512cd'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512dq'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512f'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512ifma'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512vbmi'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512vbmi2'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512vl'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512vnni'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='gfni'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='la57'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='no-nested-data-bp'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='null-sel-clr-base'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='stibp-always-on'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      </blockers>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='yes' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='yes' vendor='AMD'>EPYC-Milan-v1</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <blockers model='EPYC-Milan-v2'>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='amd-psfd'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='no-nested-data-bp'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='null-sel-clr-base'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='stibp-always-on'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      </blockers>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='yes' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v1</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v2</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v3</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='yes' vendor='AMD'>EPYC-v1</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='yes' vendor='AMD'>EPYC-v2</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='yes' vendor='AMD'>EPYC-v3</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='yes' vendor='AMD'>EPYC-v4</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <blockers model='GraniteRapids'>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='amx-bf16'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='amx-fp16'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='amx-int8'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='amx-tile'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx-vnni'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512-bf16'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512-fp16'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512-vpopcntdq'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512bitalg'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512bw'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512cd'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512dq'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512f'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512ifma'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512vbmi'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512vbmi2'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512vl'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512vnni'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='bus-lock-detect'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='fbsdp-no'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='fsrc'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='fsrs'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='fzrm'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='gfni'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='hle'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='ibrs-all'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='la57'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='mcdt-no'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='pbrsb-no'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='prefetchiti'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='psdp-no'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='rtm'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='sbdr-ssdp-no'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='serialize'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='taa-no'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='tsx-ldtrk'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='xfd'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      </blockers>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <blockers model='GraniteRapids-v1'>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='amx-bf16'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='amx-fp16'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='amx-int8'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='amx-tile'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx-vnni'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512-bf16'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512-fp16'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512-vpopcntdq'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512bitalg'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512bw'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512cd'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512dq'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512f'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512ifma'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512vbmi'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512vbmi2'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512vl'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512vnni'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='bus-lock-detect'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='fbsdp-no'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='fsrc'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='fsrs'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='fzrm'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='gfni'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='hle'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='ibrs-all'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='la57'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='mcdt-no'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='pbrsb-no'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='prefetchiti'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='psdp-no'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='rtm'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='sbdr-ssdp-no'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='serialize'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='taa-no'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='tsx-ldtrk'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='xfd'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      </blockers>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <blockers model='GraniteRapids-v2'>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='amx-bf16'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='amx-fp16'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='amx-int8'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='amx-tile'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx-vnni'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx10'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx10-128'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx10-256'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx10-512'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512-bf16'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512-fp16'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512-vpopcntdq'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512bitalg'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512bw'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512cd'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512dq'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512f'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512ifma'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512vbmi'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512vbmi2'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512vl'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512vnni'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='bus-lock-detect'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='cldemote'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='fbsdp-no'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='fsrc'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='fsrs'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='fzrm'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='gfni'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='hle'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='ibrs-all'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='la57'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='mcdt-no'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='movdir64b'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='movdiri'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='pbrsb-no'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='prefetchiti'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='psdp-no'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='rtm'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='sbdr-ssdp-no'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='serialize'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='ss'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='taa-no'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='tsx-ldtrk'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='xfd'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      </blockers>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <blockers model='Haswell'>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='hle'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='rtm'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      </blockers>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <blockers model='Haswell-IBRS'>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='hle'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='rtm'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      </blockers>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='yes' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='yes' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='no' vendor='Intel'>Haswell-v1</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <blockers model='Haswell-v1'>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='hle'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='rtm'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      </blockers>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='yes' vendor='Intel'>Haswell-v2</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='no' vendor='Intel'>Haswell-v3</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <blockers model='Haswell-v3'>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='hle'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='rtm'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      </blockers>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='yes' vendor='Intel'>Haswell-v4</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <blockers model='Icelake-Server'>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512-vpopcntdq'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512bitalg'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512bw'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512cd'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512dq'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512f'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512vbmi'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512vbmi2'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512vl'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512vnni'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='gfni'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='hle'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='la57'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='rtm'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      </blockers>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <blockers model='Icelake-Server-noTSX'>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512-vpopcntdq'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512bitalg'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512bw'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512cd'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512dq'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512f'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512vbmi'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512vbmi2'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512vl'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512vnni'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='gfni'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='la57'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      </blockers>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <blockers model='Icelake-Server-v1'>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512-vpopcntdq'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512bitalg'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512bw'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512cd'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512dq'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512f'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512vbmi'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512vbmi2'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512vl'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512vnni'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='gfni'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='hle'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='la57'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='rtm'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      </blockers>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <blockers model='Icelake-Server-v2'>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512-vpopcntdq'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512bitalg'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512bw'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512cd'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512dq'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512f'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512vbmi'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512vbmi2'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512vl'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512vnni'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='gfni'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='la57'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      </blockers>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <blockers model='Icelake-Server-v3'>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512-vpopcntdq'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512bitalg'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512bw'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512cd'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512dq'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512f'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512vbmi'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512vbmi2'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512vl'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512vnni'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='gfni'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='ibrs-all'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='la57'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='taa-no'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      </blockers>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <blockers model='Icelake-Server-v4'>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512-vpopcntdq'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512bitalg'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512bw'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512cd'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512dq'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512f'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512ifma'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512vbmi'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512vbmi2'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512vl'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512vnni'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='gfni'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='ibrs-all'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='la57'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='taa-no'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      </blockers>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <blockers model='Icelake-Server-v5'>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512-vpopcntdq'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512bitalg'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512bw'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512cd'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512dq'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512f'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512ifma'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512vbmi'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512vbmi2'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512vl'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512vnni'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='gfni'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='ibrs-all'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='la57'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='taa-no'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      </blockers>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <blockers model='Icelake-Server-v6'>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512-vpopcntdq'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512bitalg'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512bw'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512cd'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512dq'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512f'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512ifma'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512vbmi'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512vbmi2'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512vl'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512vnni'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='gfni'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='ibrs-all'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='la57'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='taa-no'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      </blockers>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <blockers model='Icelake-Server-v7'>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512-vpopcntdq'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512bitalg'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512bw'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512cd'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512dq'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512f'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512ifma'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512vbmi'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512vbmi2'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512vl'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512vnni'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='gfni'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='hle'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='ibrs-all'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='la57'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='rtm'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='taa-no'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      </blockers>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='yes' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='yes' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='yes' vendor='Intel'>IvyBridge-v1</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='yes' vendor='Intel'>IvyBridge-v2</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <blockers model='KnightsMill'>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512-4fmaps'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512-4vnniw'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512-vpopcntdq'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512cd'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512er'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512f'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512pf'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='ss'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      </blockers>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <blockers model='KnightsMill-v1'>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512-4fmaps'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512-4vnniw'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512-vpopcntdq'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512cd'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512er'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512f'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512pf'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='ss'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      </blockers>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <blockers model='Opteron_G4'>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='fma4'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='xop'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      </blockers>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <blockers model='Opteron_G4-v1'>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='fma4'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='xop'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      </blockers>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <blockers model='Opteron_G5'>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='fma4'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='tbm'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='xop'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      </blockers>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <blockers model='Opteron_G5-v1'>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='fma4'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='tbm'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='xop'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      </blockers>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <blockers model='SapphireRapids'>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='amx-bf16'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='amx-int8'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='amx-tile'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx-vnni'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512-bf16'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512-fp16'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512-vpopcntdq'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512bitalg'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512bw'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512cd'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512dq'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512f'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512ifma'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512vbmi'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512vbmi2'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512vl'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512vnni'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='bus-lock-detect'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='fsrc'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='fsrs'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='fzrm'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='gfni'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='hle'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='ibrs-all'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='la57'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='rtm'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='serialize'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='taa-no'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='tsx-ldtrk'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='xfd'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      </blockers>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <blockers model='SapphireRapids-v1'>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='amx-bf16'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='amx-int8'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='amx-tile'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx-vnni'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512-bf16'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512-fp16'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512-vpopcntdq'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512bitalg'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512bw'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512cd'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512dq'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512f'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512ifma'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512vbmi'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512vbmi2'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512vl'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512vnni'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='bus-lock-detect'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='fsrc'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='fsrs'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='fzrm'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='gfni'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='hle'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='ibrs-all'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='la57'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='rtm'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='serialize'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='taa-no'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='tsx-ldtrk'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='xfd'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      </blockers>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <blockers model='SapphireRapids-v2'>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='amx-bf16'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='amx-int8'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='amx-tile'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx-vnni'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512-bf16'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512-fp16'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512-vpopcntdq'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512bitalg'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512bw'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512cd'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512dq'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512f'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512ifma'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512vbmi'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512vbmi2'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512vl'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512vnni'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='bus-lock-detect'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='fbsdp-no'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='fsrc'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='fsrs'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='fzrm'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='gfni'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='hle'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='ibrs-all'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='la57'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='psdp-no'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='rtm'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='sbdr-ssdp-no'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='serialize'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='taa-no'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='tsx-ldtrk'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='xfd'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      </blockers>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <blockers model='SapphireRapids-v3'>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='amx-bf16'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='amx-int8'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='amx-tile'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx-vnni'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512-bf16'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512-fp16'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512-vpopcntdq'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512bitalg'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512bw'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512cd'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512dq'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512f'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512ifma'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512vbmi'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512vbmi2'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512vl'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512vnni'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='bus-lock-detect'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='cldemote'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='fbsdp-no'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='fsrc'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='fsrs'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='fzrm'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='gfni'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='hle'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='ibrs-all'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='la57'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='movdir64b'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='movdiri'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='psdp-no'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='rtm'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='sbdr-ssdp-no'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='serialize'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='ss'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='taa-no'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='tsx-ldtrk'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='xfd'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      </blockers>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <blockers model='SierraForest'>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx-ifma'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx-ne-convert'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx-vnni'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx-vnni-int8'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='bus-lock-detect'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='cmpccxadd'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='fbsdp-no'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='fsrs'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='gfni'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='ibrs-all'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='mcdt-no'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='pbrsb-no'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='psdp-no'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='sbdr-ssdp-no'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='serialize'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      </blockers>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='no' vendor='Intel'>SierraForest-v1</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <blockers model='SierraForest-v1'>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx-ifma'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx-ne-convert'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx-vnni'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx-vnni-int8'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='bus-lock-detect'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='cmpccxadd'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='fbsdp-no'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='fsrs'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='gfni'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='ibrs-all'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='mcdt-no'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='pbrsb-no'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='psdp-no'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='sbdr-ssdp-no'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='serialize'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      </blockers>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <blockers model='Skylake-Client'>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='hle'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='rtm'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      </blockers>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <blockers model='Skylake-Client-IBRS'>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='hle'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='rtm'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      </blockers>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='yes' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <blockers model='Skylake-Client-v1'>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='hle'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='rtm'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      </blockers>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <blockers model='Skylake-Client-v2'>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='hle'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='rtm'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      </blockers>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='yes' vendor='Intel'>Skylake-Client-v3</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='yes' vendor='Intel'>Skylake-Client-v4</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <blockers model='Skylake-Server'>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512bw'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512cd'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512dq'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512f'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512vl'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='hle'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='rtm'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      </blockers>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <blockers model='Skylake-Server-IBRS'>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512bw'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512cd'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512dq'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512f'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512vl'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='hle'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='rtm'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      </blockers>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <blockers model='Skylake-Server-noTSX-IBRS'>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512bw'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512cd'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512dq'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512f'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512vl'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      </blockers>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <blockers model='Skylake-Server-v1'>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512bw'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512cd'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512dq'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512f'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512vl'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='hle'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='rtm'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      </blockers>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <blockers model='Skylake-Server-v2'>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512bw'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512cd'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512dq'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512f'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512vl'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='hle'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='rtm'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      </blockers>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <blockers model='Skylake-Server-v3'>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512bw'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512cd'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512dq'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512f'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512vl'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      </blockers>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <blockers model='Skylake-Server-v4'>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512bw'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512cd'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512dq'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512f'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512vl'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      </blockers>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <blockers model='Skylake-Server-v5'>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512bw'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512cd'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512dq'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512f'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512vl'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      </blockers>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <blockers model='Snowridge'>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='cldemote'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='core-capability'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='gfni'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='movdir64b'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='movdiri'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='mpx'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='split-lock-detect'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      </blockers>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='no' vendor='Intel'>Snowridge-v1</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <blockers model='Snowridge-v1'>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='cldemote'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='core-capability'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='gfni'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='movdir64b'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='movdiri'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='mpx'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='split-lock-detect'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      </blockers>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='no' vendor='Intel'>Snowridge-v2</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <blockers model='Snowridge-v2'>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='cldemote'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='core-capability'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='gfni'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='movdir64b'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='movdiri'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='split-lock-detect'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      </blockers>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='no' vendor='Intel'>Snowridge-v3</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <blockers model='Snowridge-v3'>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='cldemote'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='core-capability'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='gfni'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='movdir64b'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='movdiri'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='split-lock-detect'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      </blockers>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='no' vendor='Intel'>Snowridge-v4</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <blockers model='Snowridge-v4'>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='cldemote'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='gfni'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='movdir64b'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='movdiri'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      </blockers>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='yes' vendor='Intel'>Westmere-v1</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='yes' vendor='Intel'>Westmere-v2</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <blockers model='athlon'>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='3dnow'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='3dnowext'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      </blockers>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <blockers model='athlon-v1'>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='3dnow'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='3dnowext'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      </blockers>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <blockers model='core2duo'>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='ss'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      </blockers>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <blockers model='core2duo-v1'>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='ss'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      </blockers>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <blockers model='coreduo'>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='ss'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      </blockers>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <blockers model='coreduo-v1'>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='ss'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      </blockers>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <blockers model='n270'>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='ss'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      </blockers>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <blockers model='n270-v1'>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='ss'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      </blockers>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <blockers model='phenom'>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='3dnow'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='3dnowext'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      </blockers>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <blockers model='phenom-v1'>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='3dnow'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='3dnowext'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      </blockers>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:    </mode>
Oct  9 09:50:43 compute-1 nova_compute[161987]:  </cpu>
Oct  9 09:50:43 compute-1 nova_compute[161987]:  <memoryBacking supported='yes'>
Oct  9 09:50:43 compute-1 nova_compute[161987]:    <enum name='sourceType'>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <value>file</value>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <value>anonymous</value>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <value>memfd</value>
Oct  9 09:50:43 compute-1 nova_compute[161987]:    </enum>
Oct  9 09:50:43 compute-1 nova_compute[161987]:  </memoryBacking>
Oct  9 09:50:43 compute-1 nova_compute[161987]:  <devices>
Oct  9 09:50:43 compute-1 nova_compute[161987]:    <disk supported='yes'>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <enum name='diskDevice'>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <value>disk</value>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <value>cdrom</value>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <value>floppy</value>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <value>lun</value>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      </enum>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <enum name='bus'>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <value>ide</value>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <value>fdc</value>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <value>scsi</value>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <value>virtio</value>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <value>usb</value>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <value>sata</value>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      </enum>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <enum name='model'>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <value>virtio</value>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <value>virtio-transitional</value>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <value>virtio-non-transitional</value>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      </enum>
Oct  9 09:50:43 compute-1 nova_compute[161987]:    </disk>
Oct  9 09:50:43 compute-1 nova_compute[161987]:    <graphics supported='yes'>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <enum name='type'>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <value>vnc</value>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <value>egl-headless</value>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <value>dbus</value>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      </enum>
Oct  9 09:50:43 compute-1 nova_compute[161987]:    </graphics>
Oct  9 09:50:43 compute-1 nova_compute[161987]:    <video supported='yes'>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <enum name='modelType'>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <value>vga</value>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <value>cirrus</value>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <value>virtio</value>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <value>none</value>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <value>bochs</value>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <value>ramfb</value>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      </enum>
Oct  9 09:50:43 compute-1 nova_compute[161987]:    </video>
Oct  9 09:50:43 compute-1 nova_compute[161987]:    <hostdev supported='yes'>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <enum name='mode'>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <value>subsystem</value>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      </enum>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <enum name='startupPolicy'>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <value>default</value>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <value>mandatory</value>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <value>requisite</value>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <value>optional</value>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      </enum>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <enum name='subsysType'>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <value>usb</value>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <value>pci</value>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <value>scsi</value>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      </enum>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <enum name='capsType'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <enum name='pciBackend'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:    </hostdev>
Oct  9 09:50:43 compute-1 nova_compute[161987]:    <rng supported='yes'>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <enum name='model'>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <value>virtio</value>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <value>virtio-transitional</value>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <value>virtio-non-transitional</value>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      </enum>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <enum name='backendModel'>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <value>random</value>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <value>egd</value>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <value>builtin</value>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      </enum>
Oct  9 09:50:43 compute-1 nova_compute[161987]:    </rng>
Oct  9 09:50:43 compute-1 nova_compute[161987]:    <filesystem supported='yes'>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <enum name='driverType'>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <value>path</value>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <value>handle</value>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <value>virtiofs</value>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      </enum>
Oct  9 09:50:43 compute-1 nova_compute[161987]:    </filesystem>
Oct  9 09:50:43 compute-1 nova_compute[161987]:    <tpm supported='yes'>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <enum name='model'>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <value>tpm-tis</value>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <value>tpm-crb</value>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      </enum>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <enum name='backendModel'>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <value>emulator</value>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <value>external</value>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      </enum>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <enum name='backendVersion'>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <value>2.0</value>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      </enum>
Oct  9 09:50:43 compute-1 nova_compute[161987]:    </tpm>
Oct  9 09:50:43 compute-1 nova_compute[161987]:    <redirdev supported='yes'>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <enum name='bus'>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <value>usb</value>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      </enum>
Oct  9 09:50:43 compute-1 nova_compute[161987]:    </redirdev>
Oct  9 09:50:43 compute-1 nova_compute[161987]:    <channel supported='yes'>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <enum name='type'>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <value>pty</value>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <value>unix</value>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      </enum>
Oct  9 09:50:43 compute-1 nova_compute[161987]:    </channel>
Oct  9 09:50:43 compute-1 nova_compute[161987]:    <crypto supported='yes'>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <enum name='model'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <enum name='type'>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <value>qemu</value>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      </enum>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <enum name='backendModel'>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <value>builtin</value>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      </enum>
Oct  9 09:50:43 compute-1 nova_compute[161987]:    </crypto>
Oct  9 09:50:43 compute-1 nova_compute[161987]:    <interface supported='yes'>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <enum name='backendType'>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <value>default</value>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <value>passt</value>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      </enum>
Oct  9 09:50:43 compute-1 nova_compute[161987]:    </interface>
Oct  9 09:50:43 compute-1 nova_compute[161987]:    <panic supported='yes'>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <enum name='model'>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <value>isa</value>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <value>hyperv</value>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      </enum>
Oct  9 09:50:43 compute-1 nova_compute[161987]:    </panic>
Oct  9 09:50:43 compute-1 nova_compute[161987]:  </devices>
Oct  9 09:50:43 compute-1 nova_compute[161987]:  <features>
Oct  9 09:50:43 compute-1 nova_compute[161987]:    <gic supported='no'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:    <vmcoreinfo supported='yes'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:    <genid supported='yes'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:    <backingStoreInput supported='yes'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:    <backup supported='yes'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:    <async-teardown supported='yes'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:    <ps2 supported='yes'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:    <sev supported='no'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:    <sgx supported='no'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:    <hyperv supported='yes'>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <enum name='features'>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <value>relaxed</value>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <value>vapic</value>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <value>spinlocks</value>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <value>vpindex</value>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <value>runtime</value>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <value>synic</value>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <value>stimer</value>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <value>reset</value>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <value>vendor_id</value>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <value>frequencies</value>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <value>reenlightenment</value>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <value>tlbflush</value>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <value>ipi</value>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <value>avic</value>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <value>emsr_bitmap</value>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <value>xmm_input</value>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      </enum>
Oct  9 09:50:43 compute-1 nova_compute[161987]:    </hyperv>
Oct  9 09:50:43 compute-1 nova_compute[161987]:    <launchSecurity supported='no'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:  </features>
Oct  9 09:50:43 compute-1 nova_compute[161987]: </domainCapabilities>
Oct  9 09:50:43 compute-1 nova_compute[161987]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037#033[00m
Oct  9 09:50:43 compute-1 nova_compute[161987]: 2025-10-09 09:50:43.078 2 DEBUG nova.virt.libvirt.host [None req-d433b75c-6fcd-48db-8c1e-371c23c6c144 - - - - - -] Getting domain capabilities for x86_64 via machine types: {'q35', 'pc'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952#033[00m
Oct  9 09:50:43 compute-1 nova_compute[161987]: 2025-10-09 09:50:43.080 2 DEBUG nova.virt.libvirt.host [None req-d433b75c-6fcd-48db-8c1e-371c23c6c144 - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=q35:
Oct  9 09:50:43 compute-1 nova_compute[161987]: <domainCapabilities>
Oct  9 09:50:43 compute-1 nova_compute[161987]:  <path>/usr/libexec/qemu-kvm</path>
Oct  9 09:50:43 compute-1 nova_compute[161987]:  <domain>kvm</domain>
Oct  9 09:50:43 compute-1 nova_compute[161987]:  <machine>pc-q35-rhel9.6.0</machine>
Oct  9 09:50:43 compute-1 nova_compute[161987]:  <arch>x86_64</arch>
Oct  9 09:50:43 compute-1 nova_compute[161987]:  <vcpu max='4096'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:  <iothreads supported='yes'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:  <os supported='yes'>
Oct  9 09:50:43 compute-1 nova_compute[161987]:    <enum name='firmware'>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <value>efi</value>
Oct  9 09:50:43 compute-1 nova_compute[161987]:    </enum>
Oct  9 09:50:43 compute-1 nova_compute[161987]:    <loader supported='yes'>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <value>/usr/share/edk2/ovmf/OVMF_CODE.secboot.fd</value>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <value>/usr/share/edk2/ovmf/OVMF_CODE.fd</value>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <value>/usr/share/edk2/ovmf/OVMF.amdsev.fd</value>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <value>/usr/share/edk2/ovmf/OVMF.inteltdx.secboot.fd</value>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <enum name='type'>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <value>rom</value>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <value>pflash</value>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      </enum>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <enum name='readonly'>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <value>yes</value>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <value>no</value>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      </enum>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <enum name='secure'>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <value>yes</value>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <value>no</value>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      </enum>
Oct  9 09:50:43 compute-1 nova_compute[161987]:    </loader>
Oct  9 09:50:43 compute-1 nova_compute[161987]:  </os>
Oct  9 09:50:43 compute-1 nova_compute[161987]:  <cpu>
Oct  9 09:50:43 compute-1 nova_compute[161987]:    <mode name='host-passthrough' supported='yes'>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <enum name='hostPassthroughMigratable'>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <value>on</value>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <value>off</value>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      </enum>
Oct  9 09:50:43 compute-1 nova_compute[161987]:    </mode>
Oct  9 09:50:43 compute-1 nova_compute[161987]:    <mode name='maximum' supported='yes'>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <enum name='maximumMigratable'>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <value>on</value>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <value>off</value>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      </enum>
Oct  9 09:50:43 compute-1 nova_compute[161987]:    </mode>
Oct  9 09:50:43 compute-1 nova_compute[161987]:    <mode name='host-model' supported='yes'>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model fallback='forbid'>EPYC-Milan</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <vendor>AMD</vendor>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <maxphysaddr mode='passthrough' limit='48'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <feature policy='require' name='x2apic'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <feature policy='require' name='tsc-deadline'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <feature policy='require' name='hypervisor'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <feature policy='require' name='tsc_adjust'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <feature policy='require' name='vaes'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <feature policy='require' name='vpclmulqdq'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <feature policy='require' name='spec-ctrl'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <feature policy='require' name='stibp'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <feature policy='require' name='arch-capabilities'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <feature policy='require' name='ssbd'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <feature policy='require' name='cmp_legacy'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <feature policy='require' name='overflow-recov'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <feature policy='require' name='succor'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <feature policy='require' name='virt-ssbd'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <feature policy='require' name='lbrv'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <feature policy='require' name='tsc-scale'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <feature policy='require' name='vmcb-clean'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <feature policy='require' name='flushbyasid'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <feature policy='require' name='pause-filter'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <feature policy='require' name='pfthreshold'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <feature policy='require' name='v-vmsave-vmload'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <feature policy='require' name='vgif'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <feature policy='require' name='lfence-always-serializing'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <feature policy='require' name='rdctl-no'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <feature policy='require' name='skip-l1dfl-vmentry'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <feature policy='require' name='mds-no'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <feature policy='require' name='pschange-mc-no'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <feature policy='require' name='gds-no'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <feature policy='require' name='rfds-no'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:    </mode>
Oct  9 09:50:43 compute-1 nova_compute[161987]:    <mode name='custom' supported='yes'>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <blockers model='Broadwell'>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='hle'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='rtm'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      </blockers>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <blockers model='Broadwell-IBRS'>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='hle'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='rtm'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      </blockers>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='yes' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='yes' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='no' vendor='Intel'>Broadwell-v1</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <blockers model='Broadwell-v1'>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='hle'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='rtm'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      </blockers>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='yes' vendor='Intel'>Broadwell-v2</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='no' vendor='Intel'>Broadwell-v3</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <blockers model='Broadwell-v3'>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='hle'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='rtm'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      </blockers>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='yes' vendor='Intel'>Broadwell-v4</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <blockers model='Cascadelake-Server'>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512bw'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512cd'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512dq'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512f'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512vl'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512vnni'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='hle'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='rtm'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      </blockers>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <blockers model='Cascadelake-Server-noTSX'>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512bw'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512cd'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512dq'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512f'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512vl'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512vnni'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='ibrs-all'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      </blockers>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <blockers model='Cascadelake-Server-v1'>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512bw'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512cd'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512dq'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512f'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512vl'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512vnni'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='hle'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='rtm'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      </blockers>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <blockers model='Cascadelake-Server-v2'>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512bw'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512cd'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512dq'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512f'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512vl'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512vnni'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='hle'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='ibrs-all'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='rtm'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      </blockers>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <blockers model='Cascadelake-Server-v3'>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512bw'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512cd'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512dq'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512f'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512vl'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512vnni'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='ibrs-all'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      </blockers>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <blockers model='Cascadelake-Server-v4'>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512bw'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512cd'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512dq'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512f'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512vl'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512vnni'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='ibrs-all'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      </blockers>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <blockers model='Cascadelake-Server-v5'>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512bw'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512cd'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512dq'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512f'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512vl'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512vnni'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='ibrs-all'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      </blockers>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <blockers model='Cooperlake'>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512-bf16'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512bw'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512cd'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512dq'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512f'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512vl'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512vnni'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='hle'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='ibrs-all'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='rtm'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='taa-no'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      </blockers>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <blockers model='Cooperlake-v1'>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512-bf16'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512bw'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512cd'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512dq'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512f'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512vl'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512vnni'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='hle'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='ibrs-all'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='rtm'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='taa-no'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      </blockers>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <blockers model='Cooperlake-v2'>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512-bf16'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512bw'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512cd'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512dq'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512f'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512vl'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512vnni'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='hle'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='ibrs-all'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='rtm'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='taa-no'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      </blockers>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <blockers model='Denverton'>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='mpx'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      </blockers>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='no' vendor='Intel'>Denverton-v1</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <blockers model='Denverton-v1'>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='mpx'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      </blockers>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='yes' vendor='Intel'>Denverton-v2</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='yes' vendor='Intel'>Denverton-v3</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='yes' vendor='Hygon'>Dhyana-v2</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <blockers model='EPYC-Genoa'>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='amd-psfd'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='auto-ibrs'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512-bf16'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512-vpopcntdq'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512bitalg'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512bw'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512cd'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512dq'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512f'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512ifma'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512vbmi'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512vbmi2'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512vl'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512vnni'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='gfni'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='la57'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='no-nested-data-bp'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='null-sel-clr-base'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='stibp-always-on'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      </blockers>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <blockers model='EPYC-Genoa-v1'>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='amd-psfd'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='auto-ibrs'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512-bf16'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512-vpopcntdq'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512bitalg'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512bw'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512cd'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512dq'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512f'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512ifma'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512vbmi'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512vbmi2'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512vl'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512vnni'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='gfni'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='la57'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='no-nested-data-bp'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='null-sel-clr-base'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='stibp-always-on'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      </blockers>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='yes' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='yes' vendor='AMD'>EPYC-Milan-v1</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <blockers model='EPYC-Milan-v2'>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='amd-psfd'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='no-nested-data-bp'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='null-sel-clr-base'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='stibp-always-on'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      </blockers>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='yes' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v1</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v2</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v3</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='yes' vendor='AMD'>EPYC-v1</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='yes' vendor='AMD'>EPYC-v2</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='yes' vendor='AMD'>EPYC-v3</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='yes' vendor='AMD'>EPYC-v4</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <blockers model='GraniteRapids'>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='amx-bf16'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='amx-fp16'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='amx-int8'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='amx-tile'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx-vnni'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512-bf16'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512-fp16'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512-vpopcntdq'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512bitalg'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512bw'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512cd'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512dq'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512f'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512ifma'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512vbmi'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512vbmi2'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512vl'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512vnni'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='bus-lock-detect'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='fbsdp-no'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='fsrc'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='fsrs'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='fzrm'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='gfni'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='hle'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='ibrs-all'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='la57'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='mcdt-no'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='pbrsb-no'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='prefetchiti'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='psdp-no'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='rtm'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='sbdr-ssdp-no'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='serialize'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='taa-no'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='tsx-ldtrk'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='xfd'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      </blockers>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <blockers model='GraniteRapids-v1'>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='amx-bf16'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='amx-fp16'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='amx-int8'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='amx-tile'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx-vnni'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512-bf16'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512-fp16'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512-vpopcntdq'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512bitalg'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512bw'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512cd'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512dq'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512f'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512ifma'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512vbmi'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512vbmi2'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512vl'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512vnni'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='bus-lock-detect'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='fbsdp-no'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='fsrc'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='fsrs'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='fzrm'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='gfni'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='hle'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='ibrs-all'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='la57'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='mcdt-no'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='pbrsb-no'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='prefetchiti'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='psdp-no'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='rtm'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='sbdr-ssdp-no'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='serialize'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='taa-no'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='tsx-ldtrk'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='xfd'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      </blockers>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <blockers model='GraniteRapids-v2'>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='amx-bf16'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='amx-fp16'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='amx-int8'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='amx-tile'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx-vnni'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx10'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx10-128'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx10-256'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx10-512'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512-bf16'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512-fp16'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512-vpopcntdq'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512bitalg'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512bw'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512cd'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512dq'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512f'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512ifma'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512vbmi'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512vbmi2'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512vl'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512vnni'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='bus-lock-detect'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='cldemote'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='fbsdp-no'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='fsrc'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='fsrs'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='fzrm'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='gfni'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='hle'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='ibrs-all'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='la57'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='mcdt-no'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='movdir64b'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='movdiri'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='pbrsb-no'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='prefetchiti'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='psdp-no'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='rtm'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='sbdr-ssdp-no'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='serialize'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='ss'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='taa-no'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='tsx-ldtrk'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='xfd'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      </blockers>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <blockers model='Haswell'>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='hle'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='rtm'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      </blockers>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <blockers model='Haswell-IBRS'>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='hle'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='rtm'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      </blockers>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='yes' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='yes' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='no' vendor='Intel'>Haswell-v1</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <blockers model='Haswell-v1'>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='hle'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='rtm'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      </blockers>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='yes' vendor='Intel'>Haswell-v2</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='no' vendor='Intel'>Haswell-v3</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <blockers model='Haswell-v3'>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='hle'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='rtm'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      </blockers>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='yes' vendor='Intel'>Haswell-v4</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <blockers model='Icelake-Server'>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512-vpopcntdq'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512bitalg'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512bw'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512cd'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512dq'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512f'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512vbmi'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512vbmi2'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512vl'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512vnni'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='gfni'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='hle'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='la57'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='rtm'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      </blockers>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <blockers model='Icelake-Server-noTSX'>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512-vpopcntdq'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512bitalg'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512bw'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512cd'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512dq'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512f'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512vbmi'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512vbmi2'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512vl'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512vnni'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='gfni'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='la57'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      </blockers>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <blockers model='Icelake-Server-v1'>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512-vpopcntdq'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512bitalg'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512bw'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512cd'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512dq'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512f'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512vbmi'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512vbmi2'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512vl'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512vnni'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='gfni'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='hle'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='la57'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='rtm'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      </blockers>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <blockers model='Icelake-Server-v2'>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512-vpopcntdq'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512bitalg'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512bw'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512cd'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512dq'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512f'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512vbmi'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512vbmi2'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512vl'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512vnni'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='gfni'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='la57'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      </blockers>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <blockers model='Icelake-Server-v3'>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512-vpopcntdq'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512bitalg'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512bw'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512cd'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512dq'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512f'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512vbmi'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512vbmi2'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512vl'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512vnni'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='gfni'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='ibrs-all'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='la57'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='taa-no'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      </blockers>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <blockers model='Icelake-Server-v4'>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512-vpopcntdq'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512bitalg'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512bw'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512cd'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512dq'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512f'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512ifma'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512vbmi'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512vbmi2'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512vl'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512vnni'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='gfni'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='ibrs-all'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='la57'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='taa-no'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      </blockers>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <blockers model='Icelake-Server-v5'>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512-vpopcntdq'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512bitalg'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512bw'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512cd'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512dq'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512f'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512ifma'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512vbmi'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512vbmi2'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512vl'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512vnni'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='gfni'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='ibrs-all'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='la57'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='taa-no'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      </blockers>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <blockers model='Icelake-Server-v6'>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512-vpopcntdq'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512bitalg'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512bw'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512cd'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512dq'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512f'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512ifma'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512vbmi'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512vbmi2'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512vl'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512vnni'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='gfni'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='ibrs-all'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='la57'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='taa-no'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      </blockers>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <blockers model='Icelake-Server-v7'>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512-vpopcntdq'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512bitalg'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512bw'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512cd'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512dq'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512f'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512ifma'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512vbmi'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512vbmi2'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512vl'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512vnni'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='gfni'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='hle'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='ibrs-all'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='la57'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='rtm'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='taa-no'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      </blockers>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='yes' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='yes' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='yes' vendor='Intel'>IvyBridge-v1</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='yes' vendor='Intel'>IvyBridge-v2</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <blockers model='KnightsMill'>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512-4fmaps'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512-4vnniw'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512-vpopcntdq'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512cd'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512er'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512f'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512pf'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='ss'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      </blockers>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <blockers model='KnightsMill-v1'>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512-4fmaps'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512-4vnniw'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512-vpopcntdq'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512cd'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512er'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512f'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512pf'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='ss'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      </blockers>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <blockers model='Opteron_G4'>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='fma4'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='xop'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      </blockers>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <blockers model='Opteron_G4-v1'>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='fma4'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='xop'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      </blockers>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <blockers model='Opteron_G5'>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='fma4'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='tbm'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='xop'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      </blockers>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <blockers model='Opteron_G5-v1'>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='fma4'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='tbm'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='xop'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      </blockers>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <blockers model='SapphireRapids'>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='amx-bf16'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='amx-int8'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='amx-tile'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx-vnni'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512-bf16'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512-fp16'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512-vpopcntdq'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512bitalg'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512bw'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512cd'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512dq'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512f'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512ifma'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512vbmi'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512vbmi2'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512vl'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512vnni'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='bus-lock-detect'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='fsrc'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='fsrs'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='fzrm'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='gfni'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='hle'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='ibrs-all'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='la57'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='rtm'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='serialize'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='taa-no'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='tsx-ldtrk'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='xfd'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      </blockers>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <blockers model='SapphireRapids-v1'>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='amx-bf16'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='amx-int8'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='amx-tile'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx-vnni'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512-bf16'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512-fp16'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512-vpopcntdq'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512bitalg'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512bw'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512cd'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512dq'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512f'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512ifma'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512vbmi'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512vbmi2'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512vl'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512vnni'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='bus-lock-detect'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='fsrc'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='fsrs'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='fzrm'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='gfni'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='hle'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='ibrs-all'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='la57'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='rtm'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='serialize'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='taa-no'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='tsx-ldtrk'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='xfd'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      </blockers>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <blockers model='SapphireRapids-v2'>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='amx-bf16'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='amx-int8'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='amx-tile'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx-vnni'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512-bf16'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512-fp16'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512-vpopcntdq'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512bitalg'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512bw'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512cd'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512dq'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512f'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512ifma'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512vbmi'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512vbmi2'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512vl'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512vnni'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='bus-lock-detect'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='fbsdp-no'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='fsrc'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='fsrs'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='fzrm'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='gfni'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='hle'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='ibrs-all'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='la57'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='psdp-no'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='rtm'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='sbdr-ssdp-no'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='serialize'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='taa-no'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='tsx-ldtrk'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='xfd'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      </blockers>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <blockers model='SapphireRapids-v3'>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='amx-bf16'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='amx-int8'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='amx-tile'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx-vnni'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512-bf16'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512-fp16'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512-vpopcntdq'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512bitalg'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512bw'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512cd'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512dq'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512f'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512ifma'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512vbmi'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512vbmi2'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512vl'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512vnni'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='bus-lock-detect'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='cldemote'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='fbsdp-no'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='fsrc'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='fsrs'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='fzrm'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='gfni'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='hle'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='ibrs-all'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='la57'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='movdir64b'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='movdiri'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='psdp-no'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='rtm'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='sbdr-ssdp-no'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='serialize'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='ss'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='taa-no'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='tsx-ldtrk'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='xfd'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      </blockers>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <blockers model='SierraForest'>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx-ifma'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx-ne-convert'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx-vnni'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx-vnni-int8'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='bus-lock-detect'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='cmpccxadd'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='fbsdp-no'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='fsrs'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='gfni'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='ibrs-all'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='mcdt-no'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='pbrsb-no'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='psdp-no'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='sbdr-ssdp-no'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='serialize'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      </blockers>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='no' vendor='Intel'>SierraForest-v1</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <blockers model='SierraForest-v1'>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx-ifma'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx-ne-convert'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx-vnni'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx-vnni-int8'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='bus-lock-detect'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='cmpccxadd'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='fbsdp-no'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='fsrs'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='gfni'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='ibrs-all'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='mcdt-no'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='pbrsb-no'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='psdp-no'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='sbdr-ssdp-no'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='serialize'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      </blockers>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <blockers model='Skylake-Client'>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='hle'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='rtm'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      </blockers>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <blockers model='Skylake-Client-IBRS'>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='hle'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='rtm'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      </blockers>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='yes' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <blockers model='Skylake-Client-v1'>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='hle'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='rtm'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      </blockers>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <blockers model='Skylake-Client-v2'>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='hle'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='rtm'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      </blockers>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='yes' vendor='Intel'>Skylake-Client-v3</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='yes' vendor='Intel'>Skylake-Client-v4</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <blockers model='Skylake-Server'>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512bw'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512cd'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512dq'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512f'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512vl'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='hle'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='rtm'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      </blockers>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <blockers model='Skylake-Server-IBRS'>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512bw'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512cd'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512dq'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512f'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512vl'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='hle'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='rtm'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      </blockers>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <blockers model='Skylake-Server-noTSX-IBRS'>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512bw'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512cd'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512dq'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512f'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512vl'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      </blockers>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <blockers model='Skylake-Server-v1'>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512bw'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512cd'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512dq'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512f'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512vl'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='hle'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='rtm'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      </blockers>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <blockers model='Skylake-Server-v2'>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512bw'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512cd'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512dq'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512f'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512vl'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='hle'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='rtm'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      </blockers>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <blockers model='Skylake-Server-v3'>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512bw'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512cd'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512dq'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512f'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512vl'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      </blockers>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <blockers model='Skylake-Server-v4'>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512bw'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512cd'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512dq'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512f'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512vl'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      </blockers>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <blockers model='Skylake-Server-v5'>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512bw'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512cd'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512dq'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512f'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512vl'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      </blockers>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <blockers model='Snowridge'>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='cldemote'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='core-capability'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='gfni'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='movdir64b'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='movdiri'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='mpx'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='split-lock-detect'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      </blockers>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='no' vendor='Intel'>Snowridge-v1</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <blockers model='Snowridge-v1'>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='cldemote'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='core-capability'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='gfni'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='movdir64b'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='movdiri'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='mpx'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='split-lock-detect'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      </blockers>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='no' vendor='Intel'>Snowridge-v2</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <blockers model='Snowridge-v2'>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='cldemote'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='core-capability'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='gfni'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='movdir64b'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='movdiri'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='split-lock-detect'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      </blockers>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='no' vendor='Intel'>Snowridge-v3</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <blockers model='Snowridge-v3'>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='cldemote'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='core-capability'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='gfni'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='movdir64b'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='movdiri'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='split-lock-detect'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      </blockers>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='no' vendor='Intel'>Snowridge-v4</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <blockers model='Snowridge-v4'>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='cldemote'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='gfni'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='movdir64b'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='movdiri'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      </blockers>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='yes' vendor='Intel'>Westmere-v1</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='yes' vendor='Intel'>Westmere-v2</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <blockers model='athlon'>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='3dnow'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='3dnowext'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      </blockers>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <blockers model='athlon-v1'>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='3dnow'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='3dnowext'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      </blockers>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <blockers model='core2duo'>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='ss'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      </blockers>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <blockers model='core2duo-v1'>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='ss'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      </blockers>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <blockers model='coreduo'>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='ss'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      </blockers>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <blockers model='coreduo-v1'>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='ss'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      </blockers>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <blockers model='n270'>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='ss'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      </blockers>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <blockers model='n270-v1'>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='ss'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      </blockers>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <blockers model='phenom'>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='3dnow'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='3dnowext'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      </blockers>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <blockers model='phenom-v1'>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='3dnow'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='3dnowext'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      </blockers>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:    </mode>
Oct  9 09:50:43 compute-1 nova_compute[161987]:  </cpu>
Oct  9 09:50:43 compute-1 nova_compute[161987]:  <memoryBacking supported='yes'>
Oct  9 09:50:43 compute-1 nova_compute[161987]:    <enum name='sourceType'>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <value>file</value>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <value>anonymous</value>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <value>memfd</value>
Oct  9 09:50:43 compute-1 nova_compute[161987]:    </enum>
Oct  9 09:50:43 compute-1 nova_compute[161987]:  </memoryBacking>
Oct  9 09:50:43 compute-1 nova_compute[161987]:  <devices>
Oct  9 09:50:43 compute-1 nova_compute[161987]:    <disk supported='yes'>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <enum name='diskDevice'>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <value>disk</value>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <value>cdrom</value>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <value>floppy</value>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <value>lun</value>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      </enum>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <enum name='bus'>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <value>fdc</value>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <value>scsi</value>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <value>virtio</value>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <value>usb</value>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <value>sata</value>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      </enum>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <enum name='model'>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <value>virtio</value>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <value>virtio-transitional</value>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <value>virtio-non-transitional</value>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      </enum>
Oct  9 09:50:43 compute-1 nova_compute[161987]:    </disk>
Oct  9 09:50:43 compute-1 nova_compute[161987]:    <graphics supported='yes'>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <enum name='type'>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <value>vnc</value>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <value>egl-headless</value>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <value>dbus</value>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      </enum>
Oct  9 09:50:43 compute-1 nova_compute[161987]:    </graphics>
Oct  9 09:50:43 compute-1 nova_compute[161987]:    <video supported='yes'>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <enum name='modelType'>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <value>vga</value>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <value>cirrus</value>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <value>virtio</value>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <value>none</value>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <value>bochs</value>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <value>ramfb</value>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      </enum>
Oct  9 09:50:43 compute-1 nova_compute[161987]:    </video>
Oct  9 09:50:43 compute-1 nova_compute[161987]:    <hostdev supported='yes'>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <enum name='mode'>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <value>subsystem</value>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      </enum>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <enum name='startupPolicy'>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <value>default</value>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <value>mandatory</value>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <value>requisite</value>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <value>optional</value>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      </enum>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <enum name='subsysType'>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <value>usb</value>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <value>pci</value>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <value>scsi</value>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      </enum>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <enum name='capsType'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <enum name='pciBackend'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:    </hostdev>
Oct  9 09:50:43 compute-1 nova_compute[161987]:    <rng supported='yes'>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <enum name='model'>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <value>virtio</value>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <value>virtio-transitional</value>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <value>virtio-non-transitional</value>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      </enum>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <enum name='backendModel'>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <value>random</value>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <value>egd</value>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <value>builtin</value>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      </enum>
Oct  9 09:50:43 compute-1 nova_compute[161987]:    </rng>
Oct  9 09:50:43 compute-1 nova_compute[161987]:    <filesystem supported='yes'>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <enum name='driverType'>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <value>path</value>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <value>handle</value>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <value>virtiofs</value>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      </enum>
Oct  9 09:50:43 compute-1 nova_compute[161987]:    </filesystem>
Oct  9 09:50:43 compute-1 nova_compute[161987]:    <tpm supported='yes'>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <enum name='model'>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <value>tpm-tis</value>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <value>tpm-crb</value>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      </enum>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <enum name='backendModel'>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <value>emulator</value>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <value>external</value>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      </enum>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <enum name='backendVersion'>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <value>2.0</value>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      </enum>
Oct  9 09:50:43 compute-1 nova_compute[161987]:    </tpm>
Oct  9 09:50:43 compute-1 nova_compute[161987]:    <redirdev supported='yes'>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <enum name='bus'>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <value>usb</value>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      </enum>
Oct  9 09:50:43 compute-1 nova_compute[161987]:    </redirdev>
Oct  9 09:50:43 compute-1 nova_compute[161987]:    <channel supported='yes'>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <enum name='type'>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <value>pty</value>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <value>unix</value>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      </enum>
Oct  9 09:50:43 compute-1 nova_compute[161987]:    </channel>
Oct  9 09:50:43 compute-1 nova_compute[161987]:    <crypto supported='yes'>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <enum name='model'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <enum name='type'>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <value>qemu</value>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      </enum>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <enum name='backendModel'>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <value>builtin</value>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      </enum>
Oct  9 09:50:43 compute-1 nova_compute[161987]:    </crypto>
Oct  9 09:50:43 compute-1 nova_compute[161987]:    <interface supported='yes'>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <enum name='backendType'>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <value>default</value>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <value>passt</value>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      </enum>
Oct  9 09:50:43 compute-1 nova_compute[161987]:    </interface>
Oct  9 09:50:43 compute-1 nova_compute[161987]:    <panic supported='yes'>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <enum name='model'>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <value>isa</value>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <value>hyperv</value>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      </enum>
Oct  9 09:50:43 compute-1 nova_compute[161987]:    </panic>
Oct  9 09:50:43 compute-1 nova_compute[161987]:  </devices>
Oct  9 09:50:43 compute-1 nova_compute[161987]:  <features>
Oct  9 09:50:43 compute-1 nova_compute[161987]:    <gic supported='no'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:    <vmcoreinfo supported='yes'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:    <genid supported='yes'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:    <backingStoreInput supported='yes'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:    <backup supported='yes'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:    <async-teardown supported='yes'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:    <ps2 supported='yes'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:    <sev supported='no'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:    <sgx supported='no'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:    <hyperv supported='yes'>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <enum name='features'>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <value>relaxed</value>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <value>vapic</value>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <value>spinlocks</value>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <value>vpindex</value>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <value>runtime</value>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <value>synic</value>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <value>stimer</value>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <value>reset</value>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <value>vendor_id</value>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <value>frequencies</value>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <value>reenlightenment</value>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <value>tlbflush</value>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <value>ipi</value>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <value>avic</value>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <value>emsr_bitmap</value>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <value>xmm_input</value>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      </enum>
Oct  9 09:50:43 compute-1 nova_compute[161987]:    </hyperv>
Oct  9 09:50:43 compute-1 nova_compute[161987]:    <launchSecurity supported='no'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:  </features>
Oct  9 09:50:43 compute-1 nova_compute[161987]: </domainCapabilities>
Oct  9 09:50:43 compute-1 nova_compute[161987]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Oct  9 09:50:43 compute-1 nova_compute[161987]: 2025-10-09 09:50:43.128 2 DEBUG nova.virt.libvirt.host [None req-d433b75c-6fcd-48db-8c1e-371c23c6c144 - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=pc:
Oct  9 09:50:43 compute-1 nova_compute[161987]: <domainCapabilities>
Oct  9 09:50:43 compute-1 nova_compute[161987]:  <path>/usr/libexec/qemu-kvm</path>
Oct  9 09:50:43 compute-1 nova_compute[161987]:  <domain>kvm</domain>
Oct  9 09:50:43 compute-1 nova_compute[161987]:  <machine>pc-i440fx-rhel7.6.0</machine>
Oct  9 09:50:43 compute-1 nova_compute[161987]:  <arch>x86_64</arch>
Oct  9 09:50:43 compute-1 nova_compute[161987]:  <vcpu max='240'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:  <iothreads supported='yes'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:  <os supported='yes'>
Oct  9 09:50:43 compute-1 nova_compute[161987]:    <enum name='firmware'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:    <loader supported='yes'>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <enum name='type'>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <value>rom</value>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <value>pflash</value>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      </enum>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <enum name='readonly'>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <value>yes</value>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <value>no</value>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      </enum>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <enum name='secure'>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <value>no</value>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      </enum>
Oct  9 09:50:43 compute-1 nova_compute[161987]:    </loader>
Oct  9 09:50:43 compute-1 nova_compute[161987]:  </os>
Oct  9 09:50:43 compute-1 nova_compute[161987]:  <cpu>
Oct  9 09:50:43 compute-1 nova_compute[161987]:    <mode name='host-passthrough' supported='yes'>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <enum name='hostPassthroughMigratable'>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <value>on</value>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <value>off</value>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      </enum>
Oct  9 09:50:43 compute-1 nova_compute[161987]:    </mode>
Oct  9 09:50:43 compute-1 nova_compute[161987]:    <mode name='maximum' supported='yes'>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <enum name='maximumMigratable'>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <value>on</value>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <value>off</value>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      </enum>
Oct  9 09:50:43 compute-1 nova_compute[161987]:    </mode>
Oct  9 09:50:43 compute-1 nova_compute[161987]:    <mode name='host-model' supported='yes'>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model fallback='forbid'>EPYC-Milan</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <vendor>AMD</vendor>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <maxphysaddr mode='passthrough' limit='48'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <feature policy='require' name='x2apic'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <feature policy='require' name='tsc-deadline'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <feature policy='require' name='hypervisor'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <feature policy='require' name='tsc_adjust'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <feature policy='require' name='vaes'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <feature policy='require' name='vpclmulqdq'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <feature policy='require' name='spec-ctrl'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <feature policy='require' name='stibp'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <feature policy='require' name='arch-capabilities'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <feature policy='require' name='ssbd'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <feature policy='require' name='cmp_legacy'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <feature policy='require' name='overflow-recov'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <feature policy='require' name='succor'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <feature policy='require' name='virt-ssbd'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <feature policy='require' name='lbrv'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <feature policy='require' name='tsc-scale'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <feature policy='require' name='vmcb-clean'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <feature policy='require' name='flushbyasid'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <feature policy='require' name='pause-filter'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <feature policy='require' name='pfthreshold'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <feature policy='require' name='v-vmsave-vmload'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <feature policy='require' name='vgif'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <feature policy='require' name='lfence-always-serializing'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <feature policy='require' name='rdctl-no'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <feature policy='require' name='skip-l1dfl-vmentry'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <feature policy='require' name='mds-no'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <feature policy='require' name='pschange-mc-no'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <feature policy='require' name='gds-no'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <feature policy='require' name='rfds-no'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:    </mode>
Oct  9 09:50:43 compute-1 nova_compute[161987]:    <mode name='custom' supported='yes'>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <blockers model='Broadwell'>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='hle'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='rtm'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      </blockers>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <blockers model='Broadwell-IBRS'>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='hle'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='rtm'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      </blockers>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='yes' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='yes' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='no' vendor='Intel'>Broadwell-v1</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <blockers model='Broadwell-v1'>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='hle'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='rtm'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      </blockers>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='yes' vendor='Intel'>Broadwell-v2</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='no' vendor='Intel'>Broadwell-v3</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <blockers model='Broadwell-v3'>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='hle'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='rtm'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      </blockers>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='yes' vendor='Intel'>Broadwell-v4</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <blockers model='Cascadelake-Server'>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512bw'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512cd'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512dq'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512f'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512vl'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512vnni'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='hle'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='rtm'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      </blockers>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <blockers model='Cascadelake-Server-noTSX'>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512bw'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512cd'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512dq'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512f'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512vl'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512vnni'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='ibrs-all'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      </blockers>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <blockers model='Cascadelake-Server-v1'>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512bw'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512cd'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512dq'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512f'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512vl'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512vnni'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='hle'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='rtm'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      </blockers>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <blockers model='Cascadelake-Server-v2'>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512bw'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512cd'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512dq'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512f'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512vl'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512vnni'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='hle'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='ibrs-all'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='rtm'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      </blockers>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <blockers model='Cascadelake-Server-v3'>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512bw'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512cd'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512dq'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512f'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512vl'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512vnni'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='ibrs-all'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      </blockers>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <blockers model='Cascadelake-Server-v4'>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512bw'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512cd'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512dq'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512f'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512vl'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512vnni'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='ibrs-all'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      </blockers>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <blockers model='Cascadelake-Server-v5'>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512bw'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512cd'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512dq'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512f'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512vl'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512vnni'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='ibrs-all'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      </blockers>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <blockers model='Cooperlake'>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512-bf16'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512bw'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512cd'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512dq'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512f'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512vl'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512vnni'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='hle'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='ibrs-all'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='rtm'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='taa-no'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      </blockers>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <blockers model='Cooperlake-v1'>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512-bf16'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512bw'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512cd'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512dq'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512f'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512vl'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512vnni'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='hle'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='ibrs-all'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='rtm'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='taa-no'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      </blockers>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <blockers model='Cooperlake-v2'>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512-bf16'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512bw'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512cd'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512dq'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512f'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512vl'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512vnni'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='hle'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='ibrs-all'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='rtm'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='taa-no'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      </blockers>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <blockers model='Denverton'>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='mpx'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      </blockers>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='no' vendor='Intel'>Denverton-v1</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <blockers model='Denverton-v1'>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='mpx'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      </blockers>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='yes' vendor='Intel'>Denverton-v2</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='yes' vendor='Intel'>Denverton-v3</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='yes' vendor='Hygon'>Dhyana-v2</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <blockers model='EPYC-Genoa'>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='amd-psfd'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='auto-ibrs'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512-bf16'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512-vpopcntdq'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512bitalg'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512bw'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512cd'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512dq'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512f'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512ifma'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512vbmi'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512vbmi2'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512vl'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512vnni'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='gfni'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='la57'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='no-nested-data-bp'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='null-sel-clr-base'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='stibp-always-on'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      </blockers>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <blockers model='EPYC-Genoa-v1'>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='amd-psfd'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='auto-ibrs'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512-bf16'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512-vpopcntdq'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512bitalg'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512bw'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512cd'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512dq'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512f'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512ifma'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512vbmi'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512vbmi2'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512vl'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512vnni'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='gfni'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='la57'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='no-nested-data-bp'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='null-sel-clr-base'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='stibp-always-on'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      </blockers>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='yes' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='yes' vendor='AMD'>EPYC-Milan-v1</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <blockers model='EPYC-Milan-v2'>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='amd-psfd'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='no-nested-data-bp'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='null-sel-clr-base'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='stibp-always-on'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      </blockers>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='yes' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v1</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v2</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v3</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='yes' vendor='AMD'>EPYC-v1</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='yes' vendor='AMD'>EPYC-v2</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='yes' vendor='AMD'>EPYC-v3</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='yes' vendor='AMD'>EPYC-v4</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <blockers model='GraniteRapids'>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='amx-bf16'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='amx-fp16'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='amx-int8'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='amx-tile'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx-vnni'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512-bf16'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512-fp16'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512-vpopcntdq'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512bitalg'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512bw'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512cd'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512dq'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512f'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512ifma'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512vbmi'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512vbmi2'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512vl'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512vnni'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='bus-lock-detect'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='fbsdp-no'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='fsrc'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='fsrs'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='fzrm'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='gfni'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='hle'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='ibrs-all'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='la57'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='mcdt-no'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='pbrsb-no'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='prefetchiti'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='psdp-no'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='rtm'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='sbdr-ssdp-no'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='serialize'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='taa-no'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='tsx-ldtrk'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='xfd'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      </blockers>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <blockers model='GraniteRapids-v1'>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='amx-bf16'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='amx-fp16'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='amx-int8'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='amx-tile'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx-vnni'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512-bf16'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512-fp16'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512-vpopcntdq'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512bitalg'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512bw'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512cd'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512dq'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512f'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512ifma'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512vbmi'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512vbmi2'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512vl'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512vnni'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='bus-lock-detect'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='fbsdp-no'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='fsrc'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='fsrs'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='fzrm'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='gfni'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='hle'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='ibrs-all'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='la57'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='mcdt-no'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='pbrsb-no'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='prefetchiti'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='psdp-no'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='rtm'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='sbdr-ssdp-no'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='serialize'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='taa-no'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='tsx-ldtrk'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='xfd'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      </blockers>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <blockers model='GraniteRapids-v2'>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='amx-bf16'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='amx-fp16'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='amx-int8'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='amx-tile'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx-vnni'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx10'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx10-128'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx10-256'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx10-512'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512-bf16'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512-fp16'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512-vpopcntdq'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512bitalg'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512bw'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512cd'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512dq'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512f'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512ifma'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512vbmi'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512vbmi2'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512vl'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512vnni'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='bus-lock-detect'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='cldemote'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='fbsdp-no'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='fsrc'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='fsrs'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='fzrm'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='gfni'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='hle'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='ibrs-all'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='la57'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='mcdt-no'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='movdir64b'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='movdiri'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='pbrsb-no'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='prefetchiti'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='psdp-no'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='rtm'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='sbdr-ssdp-no'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='serialize'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='ss'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='taa-no'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='tsx-ldtrk'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='xfd'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      </blockers>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <blockers model='Haswell'>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='hle'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='rtm'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      </blockers>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <blockers model='Haswell-IBRS'>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='hle'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='rtm'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      </blockers>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='yes' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='yes' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='no' vendor='Intel'>Haswell-v1</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <blockers model='Haswell-v1'>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='hle'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='rtm'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      </blockers>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='yes' vendor='Intel'>Haswell-v2</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='no' vendor='Intel'>Haswell-v3</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <blockers model='Haswell-v3'>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='hle'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='rtm'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      </blockers>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='yes' vendor='Intel'>Haswell-v4</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <blockers model='Icelake-Server'>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512-vpopcntdq'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512bitalg'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512bw'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512cd'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512dq'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512f'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512vbmi'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512vbmi2'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512vl'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512vnni'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='gfni'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='hle'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='la57'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='rtm'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      </blockers>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <blockers model='Icelake-Server-noTSX'>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512-vpopcntdq'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512bitalg'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512bw'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512cd'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512dq'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512f'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512vbmi'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512vbmi2'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512vl'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512vnni'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='gfni'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='la57'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      </blockers>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <blockers model='Icelake-Server-v1'>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512-vpopcntdq'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512bitalg'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512bw'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512cd'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512dq'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512f'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512vbmi'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512vbmi2'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512vl'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512vnni'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='gfni'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='hle'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='la57'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='rtm'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      </blockers>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <blockers model='Icelake-Server-v2'>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512-vpopcntdq'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512bitalg'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512bw'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512cd'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512dq'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512f'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512vbmi'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512vbmi2'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512vl'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512vnni'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='gfni'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='la57'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      </blockers>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <blockers model='Icelake-Server-v3'>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512-vpopcntdq'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512bitalg'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512bw'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512cd'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512dq'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512f'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512vbmi'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512vbmi2'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512vl'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512vnni'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='gfni'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='ibrs-all'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='la57'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='taa-no'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      </blockers>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <blockers model='Icelake-Server-v4'>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512-vpopcntdq'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512bitalg'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512bw'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512cd'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512dq'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512f'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512ifma'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512vbmi'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512vbmi2'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512vl'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512vnni'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='gfni'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='ibrs-all'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='la57'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='taa-no'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      </blockers>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <blockers model='Icelake-Server-v5'>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512-vpopcntdq'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512bitalg'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512bw'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512cd'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512dq'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512f'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512ifma'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512vbmi'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512vbmi2'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512vl'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512vnni'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='gfni'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='ibrs-all'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='la57'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='taa-no'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      </blockers>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <blockers model='Icelake-Server-v6'>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512-vpopcntdq'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512bitalg'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512bw'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512cd'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512dq'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512f'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512ifma'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512vbmi'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512vbmi2'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512vl'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512vnni'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='gfni'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='ibrs-all'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='la57'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='taa-no'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      </blockers>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <blockers model='Icelake-Server-v7'>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512-vpopcntdq'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512bitalg'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512bw'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512cd'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512dq'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512f'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512ifma'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512vbmi'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512vbmi2'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512vl'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512vnni'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='gfni'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='hle'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='ibrs-all'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='la57'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='rtm'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='taa-no'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      </blockers>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='yes' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='yes' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='yes' vendor='Intel'>IvyBridge-v1</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='yes' vendor='Intel'>IvyBridge-v2</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <blockers model='KnightsMill'>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512-4fmaps'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512-4vnniw'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512-vpopcntdq'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512cd'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512er'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512f'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512pf'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='ss'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      </blockers>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <blockers model='KnightsMill-v1'>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512-4fmaps'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512-4vnniw'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512-vpopcntdq'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512cd'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512er'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512f'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512pf'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='ss'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      </blockers>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <blockers model='Opteron_G4'>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='fma4'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='xop'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      </blockers>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <blockers model='Opteron_G4-v1'>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='fma4'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='xop'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      </blockers>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <blockers model='Opteron_G5'>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='fma4'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='tbm'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='xop'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      </blockers>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <blockers model='Opteron_G5-v1'>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='fma4'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='tbm'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='xop'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      </blockers>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <blockers model='SapphireRapids'>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='amx-bf16'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='amx-int8'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='amx-tile'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx-vnni'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512-bf16'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512-fp16'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512-vpopcntdq'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512bitalg'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512bw'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512cd'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512dq'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512f'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512ifma'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512vbmi'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512vbmi2'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512vl'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512vnni'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='bus-lock-detect'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='fsrc'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='fsrs'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='fzrm'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='gfni'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='hle'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='ibrs-all'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='la57'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='rtm'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='serialize'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='taa-no'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='tsx-ldtrk'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='xfd'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      </blockers>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <blockers model='SapphireRapids-v1'>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='amx-bf16'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='amx-int8'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='amx-tile'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx-vnni'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512-bf16'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512-fp16'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512-vpopcntdq'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512bitalg'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512bw'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512cd'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512dq'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512f'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512ifma'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512vbmi'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512vbmi2'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512vl'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512vnni'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='bus-lock-detect'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='fsrc'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='fsrs'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='fzrm'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='gfni'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='hle'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='ibrs-all'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='la57'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='rtm'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='serialize'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='taa-no'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='tsx-ldtrk'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='xfd'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      </blockers>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <blockers model='SapphireRapids-v2'>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='amx-bf16'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='amx-int8'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='amx-tile'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx-vnni'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512-bf16'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512-fp16'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512-vpopcntdq'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512bitalg'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512bw'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512cd'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512dq'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512f'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512ifma'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512vbmi'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512vbmi2'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512vl'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512vnni'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='bus-lock-detect'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='fbsdp-no'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='fsrc'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='fsrs'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='fzrm'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='gfni'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='hle'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='ibrs-all'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='la57'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='psdp-no'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='rtm'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='sbdr-ssdp-no'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='serialize'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='taa-no'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='tsx-ldtrk'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='xfd'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      </blockers>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <blockers model='SapphireRapids-v3'>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='amx-bf16'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='amx-int8'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='amx-tile'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx-vnni'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512-bf16'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512-fp16'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512-vpopcntdq'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512bitalg'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512bw'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512cd'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512dq'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512f'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512ifma'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512vbmi'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512vbmi2'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512vl'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512vnni'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='bus-lock-detect'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='cldemote'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='fbsdp-no'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='fsrc'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='fsrs'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='fzrm'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='gfni'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='hle'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='ibrs-all'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='la57'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='movdir64b'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='movdiri'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='psdp-no'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='rtm'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='sbdr-ssdp-no'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='serialize'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='ss'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='taa-no'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='tsx-ldtrk'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='xfd'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      </blockers>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <blockers model='SierraForest'>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx-ifma'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx-ne-convert'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx-vnni'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx-vnni-int8'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='bus-lock-detect'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='cmpccxadd'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='fbsdp-no'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='fsrs'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='gfni'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='ibrs-all'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='mcdt-no'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='pbrsb-no'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='psdp-no'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='sbdr-ssdp-no'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='serialize'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      </blockers>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='no' vendor='Intel'>SierraForest-v1</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <blockers model='SierraForest-v1'>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx-ifma'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx-ne-convert'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx-vnni'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx-vnni-int8'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='bus-lock-detect'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='cmpccxadd'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='fbsdp-no'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='fsrs'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='gfni'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='ibrs-all'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='mcdt-no'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='pbrsb-no'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='psdp-no'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='sbdr-ssdp-no'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='serialize'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      </blockers>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <blockers model='Skylake-Client'>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='hle'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='rtm'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      </blockers>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <blockers model='Skylake-Client-IBRS'>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='hle'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='rtm'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      </blockers>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='yes' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <blockers model='Skylake-Client-v1'>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='hle'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='rtm'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      </blockers>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <blockers model='Skylake-Client-v2'>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='hle'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='rtm'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      </blockers>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='yes' vendor='Intel'>Skylake-Client-v3</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='yes' vendor='Intel'>Skylake-Client-v4</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <blockers model='Skylake-Server'>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512bw'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512cd'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512dq'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512f'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512vl'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='hle'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='rtm'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      </blockers>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <blockers model='Skylake-Server-IBRS'>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512bw'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512cd'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512dq'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512f'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512vl'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='hle'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='rtm'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      </blockers>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <blockers model='Skylake-Server-noTSX-IBRS'>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512bw'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512cd'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512dq'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512f'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512vl'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      </blockers>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <blockers model='Skylake-Server-v1'>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512bw'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512cd'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512dq'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512f'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512vl'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='hle'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='rtm'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      </blockers>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <blockers model='Skylake-Server-v2'>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512bw'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512cd'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512dq'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512f'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512vl'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='hle'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='rtm'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      </blockers>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <blockers model='Skylake-Server-v3'>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512bw'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512cd'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512dq'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512f'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512vl'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      </blockers>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <blockers model='Skylake-Server-v4'>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512bw'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512cd'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512dq'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512f'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512vl'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      </blockers>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <blockers model='Skylake-Server-v5'>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512bw'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512cd'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512dq'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512f'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='avx512vl'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      </blockers>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <blockers model='Snowridge'>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='cldemote'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='core-capability'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='gfni'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='movdir64b'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='movdiri'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='mpx'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='split-lock-detect'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      </blockers>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='no' vendor='Intel'>Snowridge-v1</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <blockers model='Snowridge-v1'>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='cldemote'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='core-capability'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='gfni'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='movdir64b'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='movdiri'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='mpx'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='split-lock-detect'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      </blockers>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='no' vendor='Intel'>Snowridge-v2</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <blockers model='Snowridge-v2'>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='cldemote'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='core-capability'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='gfni'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='movdir64b'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='movdiri'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='split-lock-detect'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      </blockers>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='no' vendor='Intel'>Snowridge-v3</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <blockers model='Snowridge-v3'>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='cldemote'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='core-capability'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='gfni'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='movdir64b'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='movdiri'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='split-lock-detect'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      </blockers>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='no' vendor='Intel'>Snowridge-v4</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <blockers model='Snowridge-v4'>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='cldemote'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='gfni'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='movdir64b'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='movdiri'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      </blockers>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='yes' vendor='Intel'>Westmere-v1</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='yes' vendor='Intel'>Westmere-v2</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <blockers model='athlon'>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='3dnow'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='3dnowext'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      </blockers>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <blockers model='athlon-v1'>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='3dnow'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='3dnowext'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      </blockers>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <blockers model='core2duo'>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='ss'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      </blockers>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <blockers model='core2duo-v1'>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='ss'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      </blockers>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <blockers model='coreduo'>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='ss'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      </blockers>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <blockers model='coreduo-v1'>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='ss'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      </blockers>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <blockers model='n270'>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='ss'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      </blockers>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <blockers model='n270-v1'>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='ss'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      </blockers>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <blockers model='phenom'>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='3dnow'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='3dnowext'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      </blockers>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <blockers model='phenom-v1'>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='3dnow'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <feature name='3dnowext'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      </blockers>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Oct  9 09:50:43 compute-1 nova_compute[161987]:    </mode>
Oct  9 09:50:43 compute-1 nova_compute[161987]:  </cpu>
Oct  9 09:50:43 compute-1 nova_compute[161987]:  <memoryBacking supported='yes'>
Oct  9 09:50:43 compute-1 nova_compute[161987]:    <enum name='sourceType'>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <value>file</value>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <value>anonymous</value>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <value>memfd</value>
Oct  9 09:50:43 compute-1 nova_compute[161987]:    </enum>
Oct  9 09:50:43 compute-1 nova_compute[161987]:  </memoryBacking>
Oct  9 09:50:43 compute-1 nova_compute[161987]:  <devices>
Oct  9 09:50:43 compute-1 nova_compute[161987]:    <disk supported='yes'>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <enum name='diskDevice'>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <value>disk</value>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <value>cdrom</value>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <value>floppy</value>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <value>lun</value>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      </enum>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <enum name='bus'>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <value>ide</value>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <value>fdc</value>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <value>scsi</value>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <value>virtio</value>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <value>usb</value>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <value>sata</value>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      </enum>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <enum name='model'>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <value>virtio</value>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <value>virtio-transitional</value>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <value>virtio-non-transitional</value>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      </enum>
Oct  9 09:50:43 compute-1 nova_compute[161987]:    </disk>
Oct  9 09:50:43 compute-1 nova_compute[161987]:    <graphics supported='yes'>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <enum name='type'>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <value>vnc</value>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <value>egl-headless</value>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <value>dbus</value>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      </enum>
Oct  9 09:50:43 compute-1 nova_compute[161987]:    </graphics>
Oct  9 09:50:43 compute-1 nova_compute[161987]:    <video supported='yes'>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <enum name='modelType'>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <value>vga</value>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <value>cirrus</value>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <value>virtio</value>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <value>none</value>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <value>bochs</value>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <value>ramfb</value>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      </enum>
Oct  9 09:50:43 compute-1 nova_compute[161987]:    </video>
Oct  9 09:50:43 compute-1 nova_compute[161987]:    <hostdev supported='yes'>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <enum name='mode'>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <value>subsystem</value>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      </enum>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <enum name='startupPolicy'>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <value>default</value>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <value>mandatory</value>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <value>requisite</value>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <value>optional</value>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      </enum>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <enum name='subsysType'>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <value>usb</value>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <value>pci</value>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <value>scsi</value>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      </enum>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <enum name='capsType'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <enum name='pciBackend'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:    </hostdev>
Oct  9 09:50:43 compute-1 nova_compute[161987]:    <rng supported='yes'>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <enum name='model'>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <value>virtio</value>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <value>virtio-transitional</value>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <value>virtio-non-transitional</value>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      </enum>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <enum name='backendModel'>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <value>random</value>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <value>egd</value>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <value>builtin</value>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      </enum>
Oct  9 09:50:43 compute-1 nova_compute[161987]:    </rng>
Oct  9 09:50:43 compute-1 nova_compute[161987]:    <filesystem supported='yes'>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <enum name='driverType'>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <value>path</value>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <value>handle</value>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <value>virtiofs</value>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      </enum>
Oct  9 09:50:43 compute-1 nova_compute[161987]:    </filesystem>
Oct  9 09:50:43 compute-1 nova_compute[161987]:    <tpm supported='yes'>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <enum name='model'>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <value>tpm-tis</value>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <value>tpm-crb</value>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      </enum>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <enum name='backendModel'>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <value>emulator</value>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <value>external</value>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      </enum>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <enum name='backendVersion'>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <value>2.0</value>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      </enum>
Oct  9 09:50:43 compute-1 nova_compute[161987]:    </tpm>
Oct  9 09:50:43 compute-1 nova_compute[161987]:    <redirdev supported='yes'>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <enum name='bus'>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <value>usb</value>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      </enum>
Oct  9 09:50:43 compute-1 nova_compute[161987]:    </redirdev>
Oct  9 09:50:43 compute-1 nova_compute[161987]:    <channel supported='yes'>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <enum name='type'>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <value>pty</value>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <value>unix</value>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      </enum>
Oct  9 09:50:43 compute-1 nova_compute[161987]:    </channel>
Oct  9 09:50:43 compute-1 nova_compute[161987]:    <crypto supported='yes'>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <enum name='model'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <enum name='type'>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <value>qemu</value>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      </enum>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <enum name='backendModel'>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <value>builtin</value>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      </enum>
Oct  9 09:50:43 compute-1 nova_compute[161987]:    </crypto>
Oct  9 09:50:43 compute-1 nova_compute[161987]:    <interface supported='yes'>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <enum name='backendType'>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <value>default</value>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <value>passt</value>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      </enum>
Oct  9 09:50:43 compute-1 nova_compute[161987]:    </interface>
Oct  9 09:50:43 compute-1 nova_compute[161987]:    <panic supported='yes'>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <enum name='model'>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <value>isa</value>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <value>hyperv</value>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      </enum>
Oct  9 09:50:43 compute-1 nova_compute[161987]:    </panic>
Oct  9 09:50:43 compute-1 nova_compute[161987]:  </devices>
Oct  9 09:50:43 compute-1 nova_compute[161987]:  <features>
Oct  9 09:50:43 compute-1 nova_compute[161987]:    <gic supported='no'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:    <vmcoreinfo supported='yes'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:    <genid supported='yes'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:    <backingStoreInput supported='yes'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:    <backup supported='yes'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:    <async-teardown supported='yes'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:    <ps2 supported='yes'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:    <sev supported='no'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:    <sgx supported='no'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:    <hyperv supported='yes'>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      <enum name='features'>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <value>relaxed</value>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <value>vapic</value>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <value>spinlocks</value>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <value>vpindex</value>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <value>runtime</value>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <value>synic</value>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <value>stimer</value>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <value>reset</value>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <value>vendor_id</value>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <value>frequencies</value>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <value>reenlightenment</value>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <value>tlbflush</value>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <value>ipi</value>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <value>avic</value>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <value>emsr_bitmap</value>
Oct  9 09:50:43 compute-1 nova_compute[161987]:        <value>xmm_input</value>
Oct  9 09:50:43 compute-1 nova_compute[161987]:      </enum>
Oct  9 09:50:43 compute-1 nova_compute[161987]:    </hyperv>
Oct  9 09:50:43 compute-1 nova_compute[161987]:    <launchSecurity supported='no'/>
Oct  9 09:50:43 compute-1 nova_compute[161987]:  </features>
Oct  9 09:50:43 compute-1 nova_compute[161987]: </domainCapabilities>
Oct  9 09:50:43 compute-1 nova_compute[161987]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Oct  9 09:50:43 compute-1 nova_compute[161987]: 2025-10-09 09:50:43.164 2 DEBUG nova.virt.libvirt.host [None req-d433b75c-6fcd-48db-8c1e-371c23c6c144 - - - - - -] Checking secure boot support for host arch (x86_64) supports_secure_boot /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1782
Oct  9 09:50:43 compute-1 nova_compute[161987]: 2025-10-09 09:50:43.165 2 INFO nova.virt.libvirt.host [None req-d433b75c-6fcd-48db-8c1e-371c23c6c144 - - - - - -] Secure Boot support detected
Oct  9 09:50:43 compute-1 nova_compute[161987]: 2025-10-09 09:50:43.166 2 INFO nova.virt.libvirt.driver [None req-d433b75c-6fcd-48db-8c1e-371c23c6c144 - - - - - -] The live_migration_permit_post_copy is set to True and post copy live migration is available so auto-converge will not be in use.
Oct  9 09:50:43 compute-1 nova_compute[161987]: 2025-10-09 09:50:43.174 2 DEBUG nova.virt.libvirt.driver [None req-d433b75c-6fcd-48db-8c1e-371c23c6c144 - - - - - -] Enabling emulated TPM support _check_vtpm_support /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:1097
Oct  9 09:50:43 compute-1 nova_compute[161987]: 2025-10-09 09:50:43.202 2 INFO nova.virt.node [None req-d433b75c-6fcd-48db-8c1e-371c23c6c144 - - - - - -] Determined node identity 79aa81b0-5a5d-4643-a355-ec5461cb321a from /var/lib/nova/compute_id
Oct  9 09:50:43 compute-1 nova_compute[161987]: 2025-10-09 09:50:43.221 2 WARNING nova.compute.manager [None req-d433b75c-6fcd-48db-8c1e-371c23c6c144 - - - - - -] Compute nodes ['79aa81b0-5a5d-4643-a355-ec5461cb321a'] for host compute-1.ctlplane.example.com were not found in the database. If this is the first time this service is starting on this host, then you can ignore this warning.
Oct  9 09:50:43 compute-1 nova_compute[161987]: 2025-10-09 09:50:43.255 2 INFO nova.compute.manager [None req-d433b75c-6fcd-48db-8c1e-371c23c6c144 - - - - - -] Looking for unclaimed instances stuck in BUILDING status for nodes managed by this host
Oct  9 09:50:43 compute-1 nova_compute[161987]: 2025-10-09 09:50:43.289 2 WARNING nova.compute.manager [None req-d433b75c-6fcd-48db-8c1e-371c23c6c144 - - - - - -] No compute node record found for host compute-1.ctlplane.example.com. If this is the first time this service is starting on this host, then you can ignore this warning.: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host compute-1.ctlplane.example.com could not be found.
Oct  9 09:50:43 compute-1 nova_compute[161987]: 2025-10-09 09:50:43.290 2 DEBUG oslo_concurrency.lockutils [None req-d433b75c-6fcd-48db-8c1e-371c23c6c144 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  9 09:50:43 compute-1 nova_compute[161987]: 2025-10-09 09:50:43.290 2 DEBUG oslo_concurrency.lockutils [None req-d433b75c-6fcd-48db-8c1e-371c23c6c144 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  9 09:50:43 compute-1 nova_compute[161987]: 2025-10-09 09:50:43.290 2 DEBUG oslo_concurrency.lockutils [None req-d433b75c-6fcd-48db-8c1e-371c23c6c144 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  9 09:50:43 compute-1 nova_compute[161987]: 2025-10-09 09:50:43.290 2 DEBUG nova.compute.resource_tracker [None req-d433b75c-6fcd-48db-8c1e-371c23c6c144 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct  9 09:50:43 compute-1 nova_compute[161987]: 2025-10-09 09:50:43.290 2 DEBUG oslo_concurrency.processutils [None req-d433b75c-6fcd-48db-8c1e-371c23c6c144 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  9 09:50:43 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:50:43 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:50:43 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:50:43.399 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:50:43 compute-1 ceph-mon[9795]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct  9 09:50:43 compute-1 ceph-mon[9795]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/500541388' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  9 09:50:43 compute-1 python3.9[162889]: ansible-ansible.builtin.systemd Invoked with name=edpm_nova_compute.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct  9 09:50:43 compute-1 nova_compute[161987]: 2025-10-09 09:50:43.653 2 DEBUG oslo_concurrency.processutils [None req-d433b75c-6fcd-48db-8c1e-371c23c6c144 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.363s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  9 09:50:43 compute-1 systemd[1]: Stopping nova_compute container...
Oct  9 09:50:43 compute-1 systemd[1]: Starting libvirt nodedev daemon...
Oct  9 09:50:43 compute-1 systemd[1]: Started libvirt nodedev daemon.
Oct  9 09:50:43 compute-1 nova_compute[161987]: 2025-10-09 09:50:43.704 2 DEBUG oslo_concurrency.lockutils [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct  9 09:50:43 compute-1 nova_compute[161987]: 2025-10-09 09:50:43.704 2 DEBUG oslo_concurrency.lockutils [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct  9 09:50:43 compute-1 nova_compute[161987]: 2025-10-09 09:50:43.704 2 DEBUG oslo_concurrency.lockutils [None req-fbb7ded9-2457-4730-864d-5d096bff90a3 - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct  9 09:50:44 compute-1 virtqemud[162526]: libvirt version: 10.10.0, package: 15.el9 (builder@centos.org, 2025-08-18-13:22:20, )
Oct  9 09:50:44 compute-1 virtqemud[162526]: hostname: compute-1
Oct  9 09:50:44 compute-1 virtqemud[162526]: End of file while reading data: Input/output error
Oct  9 09:50:44 compute-1 systemd[1]: libpod-a9dd31848225f8fe6eed007d5a6504d226f8ec66f0a516f1516ecb5e7e6e18b2.scope: Deactivated successfully.
Oct  9 09:50:44 compute-1 systemd[1]: libpod-a9dd31848225f8fe6eed007d5a6504d226f8ec66f0a516f1516ecb5e7e6e18b2.scope: Consumed 2.739s CPU time.
Oct  9 09:50:44 compute-1 podman[162909]: 2025-10-09 09:50:44.058988758 +0000 UTC m=+0.382134737 container died a9dd31848225f8fe6eed007d5a6504d226f8ec66f0a516f1516ecb5e7e6e18b2 (image=quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:5f179b847f2dc32d9110b8f2be9fe65f1aeada1e18105dffdaf052981215d844, name=nova_compute, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=edpm, container_name=nova_compute, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:5f179b847f2dc32d9110b8f2be9fe65f1aeada1e18105dffdaf052981215d844', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, io.buildah.version=1.41.3)
Oct  9 09:50:44 compute-1 systemd[1]: var-lib-containers-storage-overlay-fd554087b96bac4aba4050d44e17fa5d9c4a47a8203f9794e9d219f5f40fa59d-merged.mount: Deactivated successfully.
Oct  9 09:50:44 compute-1 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-a9dd31848225f8fe6eed007d5a6504d226f8ec66f0a516f1516ecb5e7e6e18b2-userdata-shm.mount: Deactivated successfully.
Oct  9 09:50:44 compute-1 podman[162909]: 2025-10-09 09:50:44.115746175 +0000 UTC m=+0.438892154 container cleanup a9dd31848225f8fe6eed007d5a6504d226f8ec66f0a516f1516ecb5e7e6e18b2 (image=quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:5f179b847f2dc32d9110b8f2be9fe65f1aeada1e18105dffdaf052981215d844, name=nova_compute, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:5f179b847f2dc32d9110b8f2be9fe65f1aeada1e18105dffdaf052981215d844', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=edpm, container_name=nova_compute)
Oct  9 09:50:44 compute-1 podman[162909]: nova_compute
Oct  9 09:50:44 compute-1 podman[162952]: nova_compute
Oct  9 09:50:44 compute-1 systemd[1]: edpm_nova_compute.service: Deactivated successfully.
Oct  9 09:50:44 compute-1 systemd[1]: Stopped nova_compute container.
Oct  9 09:50:44 compute-1 systemd[1]: Starting nova_compute container...
Oct  9 09:50:44 compute-1 systemd[1]: Started libcrun container.
Oct  9 09:50:44 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fd554087b96bac4aba4050d44e17fa5d9c4a47a8203f9794e9d219f5f40fa59d/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Oct  9 09:50:44 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fd554087b96bac4aba4050d44e17fa5d9c4a47a8203f9794e9d219f5f40fa59d/merged/etc/nvme supports timestamps until 2038 (0x7fffffff)
Oct  9 09:50:44 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fd554087b96bac4aba4050d44e17fa5d9c4a47a8203f9794e9d219f5f40fa59d/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Oct  9 09:50:44 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fd554087b96bac4aba4050d44e17fa5d9c4a47a8203f9794e9d219f5f40fa59d/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Oct  9 09:50:44 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fd554087b96bac4aba4050d44e17fa5d9c4a47a8203f9794e9d219f5f40fa59d/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff)
Oct  9 09:50:44 compute-1 podman[162962]: 2025-10-09 09:50:44.233956234 +0000 UTC m=+0.060029979 container init a9dd31848225f8fe6eed007d5a6504d226f8ec66f0a516f1516ecb5e7e6e18b2 (image=quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:5f179b847f2dc32d9110b8f2be9fe65f1aeada1e18105dffdaf052981215d844, name=nova_compute, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=nova_compute, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:5f179b847f2dc32d9110b8f2be9fe65f1aeada1e18105dffdaf052981215d844', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct  9 09:50:44 compute-1 podman[162962]: 2025-10-09 09:50:44.238872467 +0000 UTC m=+0.064946212 container start a9dd31848225f8fe6eed007d5a6504d226f8ec66f0a516f1516ecb5e7e6e18b2 (image=quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:5f179b847f2dc32d9110b8f2be9fe65f1aeada1e18105dffdaf052981215d844, name=nova_compute, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:5f179b847f2dc32d9110b8f2be9fe65f1aeada1e18105dffdaf052981215d844', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, container_name=nova_compute, io.buildah.version=1.41.3)
Oct  9 09:50:44 compute-1 podman[162962]: nova_compute
Oct  9 09:50:44 compute-1 nova_compute[162974]: + sudo -E kolla_set_configs
Oct  9 09:50:44 compute-1 systemd[1]: Started nova_compute container.
Oct  9 09:50:44 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:50:44 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:50:44 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:50:44.260 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:50:44 compute-1 nova_compute[162974]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Oct  9 09:50:44 compute-1 nova_compute[162974]: INFO:__main__:Validating config file
Oct  9 09:50:44 compute-1 nova_compute[162974]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Oct  9 09:50:44 compute-1 nova_compute[162974]: INFO:__main__:Copying service configuration files
Oct  9 09:50:44 compute-1 nova_compute[162974]: INFO:__main__:Deleting /etc/nova/nova.conf
Oct  9 09:50:44 compute-1 nova_compute[162974]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf
Oct  9 09:50:44 compute-1 nova_compute[162974]: INFO:__main__:Setting permission for /etc/nova/nova.conf
Oct  9 09:50:44 compute-1 nova_compute[162974]: INFO:__main__:Deleting /etc/nova/nova.conf.d/01-nova.conf
Oct  9 09:50:44 compute-1 nova_compute[162974]: INFO:__main__:Copying /var/lib/kolla/config_files/01-nova.conf to /etc/nova/nova.conf.d/01-nova.conf
Oct  9 09:50:44 compute-1 nova_compute[162974]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/01-nova.conf
Oct  9 09:50:44 compute-1 nova_compute[162974]: INFO:__main__:Deleting /etc/nova/nova.conf.d/03-ceph-nova.conf
Oct  9 09:50:44 compute-1 nova_compute[162974]: INFO:__main__:Copying /var/lib/kolla/config_files/03-ceph-nova.conf to /etc/nova/nova.conf.d/03-ceph-nova.conf
Oct  9 09:50:44 compute-1 nova_compute[162974]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/03-ceph-nova.conf
Oct  9 09:50:44 compute-1 nova_compute[162974]: INFO:__main__:Deleting /etc/nova/nova.conf.d/25-nova-extra.conf
Oct  9 09:50:44 compute-1 nova_compute[162974]: INFO:__main__:Copying /var/lib/kolla/config_files/25-nova-extra.conf to /etc/nova/nova.conf.d/25-nova-extra.conf
Oct  9 09:50:44 compute-1 nova_compute[162974]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/25-nova-extra.conf
Oct  9 09:50:44 compute-1 nova_compute[162974]: INFO:__main__:Deleting /etc/nova/nova.conf.d/nova-blank.conf
Oct  9 09:50:44 compute-1 nova_compute[162974]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf.d/nova-blank.conf
Oct  9 09:50:44 compute-1 nova_compute[162974]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/nova-blank.conf
Oct  9 09:50:44 compute-1 nova_compute[162974]: INFO:__main__:Deleting /etc/nova/nova.conf.d/02-nova-host-specific.conf
Oct  9 09:50:44 compute-1 nova_compute[162974]: INFO:__main__:Copying /var/lib/kolla/config_files/02-nova-host-specific.conf to /etc/nova/nova.conf.d/02-nova-host-specific.conf
Oct  9 09:50:44 compute-1 nova_compute[162974]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/02-nova-host-specific.conf
Oct  9 09:50:44 compute-1 nova_compute[162974]: INFO:__main__:Deleting /etc/ceph
Oct  9 09:50:44 compute-1 nova_compute[162974]: INFO:__main__:Creating directory /etc/ceph
Oct  9 09:50:44 compute-1 nova_compute[162974]: INFO:__main__:Setting permission for /etc/ceph
Oct  9 09:50:44 compute-1 nova_compute[162974]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.conf to /etc/ceph/ceph.conf
Oct  9 09:50:44 compute-1 nova_compute[162974]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Oct  9 09:50:44 compute-1 nova_compute[162974]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.client.openstack.keyring to /etc/ceph/ceph.client.openstack.keyring
Oct  9 09:50:44 compute-1 nova_compute[162974]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Oct  9 09:50:44 compute-1 nova_compute[162974]: INFO:__main__:Deleting /var/lib/nova/.ssh/ssh-privatekey
Oct  9 09:50:44 compute-1 nova_compute[162974]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-privatekey to /var/lib/nova/.ssh/ssh-privatekey
Oct  9 09:50:44 compute-1 nova_compute[162974]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Oct  9 09:50:44 compute-1 nova_compute[162974]: INFO:__main__:Deleting /var/lib/nova/.ssh/config
Oct  9 09:50:44 compute-1 nova_compute[162974]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-config to /var/lib/nova/.ssh/config
Oct  9 09:50:44 compute-1 nova_compute[162974]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Oct  9 09:50:44 compute-1 nova_compute[162974]: INFO:__main__:Writing out command to execute
Oct  9 09:50:44 compute-1 nova_compute[162974]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Oct  9 09:50:44 compute-1 nova_compute[162974]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Oct  9 09:50:44 compute-1 nova_compute[162974]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/
Oct  9 09:50:44 compute-1 nova_compute[162974]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Oct  9 09:50:44 compute-1 nova_compute[162974]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Oct  9 09:50:44 compute-1 nova_compute[162974]: ++ cat /run_command
Oct  9 09:50:44 compute-1 nova_compute[162974]: + CMD=nova-compute
Oct  9 09:50:44 compute-1 nova_compute[162974]: + ARGS=
Oct  9 09:50:44 compute-1 nova_compute[162974]: + sudo kolla_copy_cacerts
Oct  9 09:50:44 compute-1 nova_compute[162974]: + [[ ! -n '' ]]
Oct  9 09:50:44 compute-1 nova_compute[162974]: + . kolla_extend_start
Oct  9 09:50:44 compute-1 nova_compute[162974]: Running command: 'nova-compute'
Oct  9 09:50:44 compute-1 nova_compute[162974]: + echo 'Running command: '\''nova-compute'\'''
Oct  9 09:50:44 compute-1 nova_compute[162974]: + umask 0022
Oct  9 09:50:44 compute-1 nova_compute[162974]: + exec nova-compute
Oct  9 09:50:44 compute-1 python3.9[163137]: ansible-containers.podman.podman_container Invoked with name=nova_compute_init state=started executable=podman detach=True debug=False force_restart=False force_delete=True generate_systemd={} image_strict=False recreate=False image=None annotation=None arch=None attach=None authfile=None blkio_weight=None blkio_weight_device=None cap_add=None cap_drop=None cgroup_conf=None cgroup_parent=None cgroupns=None cgroups=None chrootdirs=None cidfile=None cmd_args=None conmon_pidfile=None command=None cpu_period=None cpu_quota=None cpu_rt_period=None cpu_rt_runtime=None cpu_shares=None cpus=None cpuset_cpus=None cpuset_mems=None decryption_key=None delete_depend=None delete_time=None delete_volumes=None detach_keys=None device=None device_cgroup_rule=None device_read_bps=None device_read_iops=None device_write_bps=None device_write_iops=None dns=None dns_option=None dns_search=None entrypoint=None env=None env_file=None env_host=None env_merge=None etc_hosts=None expose=None gidmap=None gpus=None group_add=None group_entry=None healthcheck=None healthcheck_interval=None healthcheck_retries=None healthcheck_start_period=None health_startup_cmd=None health_startup_interval=None health_startup_retries=None health_startup_success=None health_startup_timeout=None healthcheck_timeout=None healthcheck_failure_action=None hooks_dir=None hostname=None hostuser=None http_proxy=None image_volume=None init=None init_ctr=None init_path=None interactive=None ip=None ip6=None ipc=None kernel_memory=None label=None label_file=None log_driver=None log_level=None log_opt=None mac_address=None memory=None memory_reservation=None memory_swap=None memory_swappiness=None mount=None network=None network_aliases=None no_healthcheck=None no_hosts=None oom_kill_disable=None oom_score_adj=None os=None passwd=None passwd_entry=None personality=None pid=None pid_file=None pids_limit=None platform=None pod=None pod_id_file=None preserve_fd=None preserve_fds=None privileged=None publish=None publish_all=None pull=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None rdt_class=None read_only=None read_only_tmpfs=None requires=None restart_policy=None restart_time=None retry=None retry_delay=None rm=None rmi=None rootfs=None seccomp_policy=None secrets=NOT_LOGGING_PARAMETER sdnotify=None security_opt=None shm_size=None shm_size_systemd=None sig_proxy=None stop_signal=None stop_timeout=None stop_time=None subgidname=None subuidname=None sysctl=None systemd=None timeout=None timezone=None tls_verify=None tmpfs=None tty=None uidmap=None ulimit=None umask=None unsetenv=None unsetenv_all=None user=None userns=None uts=None variant=None volume=None volumes_from=None workdir=None
Oct  9 09:50:45 compute-1 systemd[1]: Started libpod-conmon-8aaa249df0d42f42b8a33d528efd9d6552b336ba0a73b24911cedbbe8e26ac7a.scope.
Oct  9 09:50:45 compute-1 systemd[1]: Started libcrun container.
Oct  9 09:50:45 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c8f9119592c662942c6340251e4b10b313c1c11314b53de3faa1a0bee718f28f/merged/usr/sbin/nova_statedir_ownership.py supports timestamps until 2038 (0x7fffffff)
Oct  9 09:50:45 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c8f9119592c662942c6340251e4b10b313c1c11314b53de3faa1a0bee718f28f/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Oct  9 09:50:45 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c8f9119592c662942c6340251e4b10b313c1c11314b53de3faa1a0bee718f28f/merged/var/lib/_nova_secontext supports timestamps until 2038 (0x7fffffff)
Oct  9 09:50:45 compute-1 podman[163158]: 2025-10-09 09:50:45.037307732 +0000 UTC m=+0.072093094 container init 8aaa249df0d42f42b8a33d528efd9d6552b336ba0a73b24911cedbbe8e26ac7a (image=quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:5f179b847f2dc32d9110b8f2be9fe65f1aeada1e18105dffdaf052981215d844, name=nova_compute_init, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:5f179b847f2dc32d9110b8f2be9fe65f1aeada1e18105dffdaf052981215d844', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, container_name=nova_compute_init, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=edpm, managed_by=edpm_ansible)
Oct  9 09:50:45 compute-1 podman[163158]: 2025-10-09 09:50:45.043237186 +0000 UTC m=+0.078022537 container start 8aaa249df0d42f42b8a33d528efd9d6552b336ba0a73b24911cedbbe8e26ac7a (image=quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:5f179b847f2dc32d9110b8f2be9fe65f1aeada1e18105dffdaf052981215d844, name=nova_compute_init, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:5f179b847f2dc32d9110b8f2be9fe65f1aeada1e18105dffdaf052981215d844', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, container_name=nova_compute_init, managed_by=edpm_ansible)
Oct  9 09:50:45 compute-1 python3.9[163137]: ansible-containers.podman.podman_container PODMAN-CONTAINER-DEBUG: podman start nova_compute_init
Oct  9 09:50:45 compute-1 nova_compute_init[163176]: INFO:nova_statedir:Applying nova statedir ownership
Oct  9 09:50:45 compute-1 nova_compute_init[163176]: INFO:nova_statedir:Target ownership for /var/lib/nova: 42436:42436
Oct  9 09:50:45 compute-1 nova_compute_init[163176]: INFO:nova_statedir:Checking uid: 1000 gid: 1000 path: /var/lib/nova/
Oct  9 09:50:45 compute-1 nova_compute_init[163176]: INFO:nova_statedir:Changing ownership of /var/lib/nova from 1000:1000 to 42436:42436
Oct  9 09:50:45 compute-1 nova_compute_init[163176]: INFO:nova_statedir:Setting selinux context of /var/lib/nova to system_u:object_r:container_file_t:s0
Oct  9 09:50:45 compute-1 nova_compute_init[163176]: INFO:nova_statedir:Checking uid: 1000 gid: 1000 path: /var/lib/nova/instances/
Oct  9 09:50:45 compute-1 nova_compute_init[163176]: INFO:nova_statedir:Changing ownership of /var/lib/nova/instances from 1000:1000 to 42436:42436
Oct  9 09:50:45 compute-1 nova_compute_init[163176]: INFO:nova_statedir:Setting selinux context of /var/lib/nova/instances to system_u:object_r:container_file_t:s0
Oct  9 09:50:45 compute-1 nova_compute_init[163176]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/
Oct  9 09:50:45 compute-1 nova_compute_init[163176]: INFO:nova_statedir:Ownership of /var/lib/nova/.ssh already 42436:42436
Oct  9 09:50:45 compute-1 nova_compute_init[163176]: INFO:nova_statedir:Setting selinux context of /var/lib/nova/.ssh to system_u:object_r:container_file_t:s0
Oct  9 09:50:45 compute-1 nova_compute_init[163176]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/ssh-privatekey
Oct  9 09:50:45 compute-1 nova_compute_init[163176]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/config
Oct  9 09:50:45 compute-1 nova_compute_init[163176]: INFO:nova_statedir:Nova statedir ownership complete
Oct  9 09:50:45 compute-1 systemd[1]: libpod-8aaa249df0d42f42b8a33d528efd9d6552b336ba0a73b24911cedbbe8e26ac7a.scope: Deactivated successfully.
Oct  9 09:50:45 compute-1 conmon[163170]: conmon 8aaa249df0d42f42b8a3 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-8aaa249df0d42f42b8a33d528efd9d6552b336ba0a73b24911cedbbe8e26ac7a.scope/container/memory.events
Oct  9 09:50:45 compute-1 podman[163190]: 2025-10-09 09:50:45.130843169 +0000 UTC m=+0.024810729 container died 8aaa249df0d42f42b8a33d528efd9d6552b336ba0a73b24911cedbbe8e26ac7a (image=quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:5f179b847f2dc32d9110b8f2be9fe65f1aeada1e18105dffdaf052981215d844, name=nova_compute_init, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:5f179b847f2dc32d9110b8f2be9fe65f1aeada1e18105dffdaf052981215d844', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, container_name=nova_compute_init, org.label-schema.build-date=20251001)
Oct  9 09:50:45 compute-1 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-8aaa249df0d42f42b8a33d528efd9d6552b336ba0a73b24911cedbbe8e26ac7a-userdata-shm.mount: Deactivated successfully.
Oct  9 09:50:45 compute-1 systemd[1]: var-lib-containers-storage-overlay-c8f9119592c662942c6340251e4b10b313c1c11314b53de3faa1a0bee718f28f-merged.mount: Deactivated successfully.
Oct  9 09:50:45 compute-1 podman[163190]: 2025-10-09 09:50:45.159659223 +0000 UTC m=+0.053626762 container cleanup 8aaa249df0d42f42b8a33d528efd9d6552b336ba0a73b24911cedbbe8e26ac7a (image=quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:5f179b847f2dc32d9110b8f2be9fe65f1aeada1e18105dffdaf052981215d844, name=nova_compute_init, config_id=edpm, container_name=nova_compute_init, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:5f179b847f2dc32d9110b8f2be9fe65f1aeada1e18105dffdaf052981215d844', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']})
Oct  9 09:50:45 compute-1 podman[163189]: 2025-10-09 09:50:45.160588535 +0000 UTC m=+0.048939340 container health_status a9976f6a2312783f72012e1ec9b07f14297aa62297711f47c0d257996be3427a (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=multipathd, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd)
Oct  9 09:50:45 compute-1 systemd[1]: libpod-conmon-8aaa249df0d42f42b8a33d528efd9d6552b336ba0a73b24911cedbbe8e26ac7a.scope: Deactivated successfully.
Oct  9 09:50:45 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:50:45 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:50:45 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:50:45.401 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:50:45 compute-1 systemd[1]: session-37.scope: Deactivated successfully.
Oct  9 09:50:45 compute-1 systemd[1]: session-37.scope: Consumed 1min 59.368s CPU time.
Oct  9 09:50:45 compute-1 systemd-logind[798]: Session 37 logged out. Waiting for processes to exit.
Oct  9 09:50:45 compute-1 systemd-logind[798]: Removed session 37.
Oct  9 09:50:45 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:50:45 compute-1 nova_compute[162974]: 2025-10-09 09:50:45.990 2 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_linux_bridge.linux_bridge.LinuxBridgePlugin'>' with name 'linux_bridge' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m
Oct  9 09:50:45 compute-1 nova_compute[162974]: 2025-10-09 09:50:45.990 2 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_noop.noop.NoOpPlugin'>' with name 'noop' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m
Oct  9 09:50:45 compute-1 nova_compute[162974]: 2025-10-09 09:50:45.991 2 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_ovs.ovs.OvsPlugin'>' with name 'ovs' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m
Oct  9 09:50:45 compute-1 nova_compute[162974]: 2025-10-09 09:50:45.991 2 INFO os_vif [-] Loaded VIF plugins: linux_bridge, noop, ovs#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.095 2 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): grep -F node.session.scan /sbin/iscsiadm execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.105 2 DEBUG oslo_concurrency.processutils [-] CMD "grep -F node.session.scan /sbin/iscsiadm" returned: 0 in 0.010s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  9 09:50:46 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:50:46 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:50:46 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:50:46.262 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.513 2 INFO nova.virt.driver [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] Loading compute driver 'libvirt.LibvirtDriver'#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.592 2 INFO nova.compute.provider_config [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] No provider configs found in /etc/nova/provider_config/. If files are present, ensure the Nova process has access.#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.600 2 DEBUG oslo_concurrency.lockutils [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.600 2 DEBUG oslo_concurrency.lockutils [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.600 2 DEBUG oslo_concurrency.lockutils [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.601 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] Full set of CONF: _wait_for_exit_or_signal /usr/lib/python3.9/site-packages/oslo_service/service.py:362#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.601 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.601 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.601 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.601 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] config files: ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.601 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.602 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] allow_resize_to_same_host      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.602 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] arq_binding_timeout            = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.602 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] backdoor_port                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.602 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] backdoor_socket                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.602 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] block_device_allocate_retries  = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.602 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] block_device_allocate_retries_interval = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.603 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] cert                           = self.pem log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.603 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] compute_driver                 = libvirt.LibvirtDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.603 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] compute_monitors               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.603 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] config_dir                     = ['/etc/nova/nova.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.603 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] config_drive_format            = iso9660 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.604 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] config_file                    = ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.604 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.604 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] console_host                   = compute-1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.604 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] control_exchange               = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.604 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] cpu_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.604 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] daemon                         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.605 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.605 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] default_access_ip_network_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.605 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] default_availability_zone      = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.605 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] default_ephemeral_format       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.605 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'glanceclient=WARN', 'oslo.privsep.daemon=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.606 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] default_schedule_zone          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.606 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] disk_allocation_ratio          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.606 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] enable_new_services            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.606 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] enabled_apis                   = ['osapi_compute', 'metadata'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.606 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] enabled_ssl_apis               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.606 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] flat_injected                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.607 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] force_config_drive             = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.607 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] force_raw_images               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.607 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.607 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] heal_instance_info_cache_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.607 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] host                           = compute-1.ctlplane.example.com log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.607 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] initial_cpu_allocation_ratio   = 4.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.608 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] initial_disk_allocation_ratio  = 0.9 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.608 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] initial_ram_allocation_ratio   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.608 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] injected_network_template      = /usr/lib/python3.9/site-packages/nova/virt/interfaces.template log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.608 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] instance_build_timeout         = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.608 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] instance_delete_interval       = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.609 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.609 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] instance_name_template         = instance-%08x log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.609 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] instance_usage_audit           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.609 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] instance_usage_audit_period    = month log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.609 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.609 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] instances_path                 = /var/lib/nova/instances log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.610 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] internal_service_availability_zone = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.610 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] key                            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.610 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] live_migration_retry_count     = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.610 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.610 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.611 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.611 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.611 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.611 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.611 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.611 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] log_rotation_type              = size log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.611 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.612 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.612 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.612 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.612 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.612 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] long_rpc_timeout               = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.612 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] max_concurrent_builds          = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.613 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] max_concurrent_live_migrations = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.613 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] max_concurrent_snapshots       = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.613 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] max_local_block_devices        = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.613 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] max_logfile_count              = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.613 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] max_logfile_size_mb            = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.614 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] maximum_instance_delete_attempts = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.614 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] metadata_listen                = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.614 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] metadata_listen_port           = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.614 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] metadata_workers               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.614 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] migrate_max_retries            = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.614 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] mkisofs_cmd                    = /usr/bin/mkisofs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.615 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] my_block_storage_ip            = 192.168.122.101 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.615 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] my_ip                          = 192.168.122.101 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.615 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] network_allocate_retries       = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.615 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] non_inheritable_image_properties = ['cache_in_nova', 'bittorrent'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.615 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] osapi_compute_listen           = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.616 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] osapi_compute_listen_port      = 8774 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.616 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] osapi_compute_unique_server_name_scope =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.616 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] osapi_compute_workers          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.616 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] password_length                = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.616 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] periodic_enable                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.616 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] periodic_fuzzy_delay           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.617 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] pointer_model                  = usbtablet log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.617 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] preallocate_images             = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.617 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.617 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] pybasedir                      = /usr/lib/python3.9/site-packages log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.617 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] ram_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.617 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.618 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.618 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.618 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] reboot_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.618 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] reclaim_instance_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.618 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] record                         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.618 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] reimage_timeout_per_gb         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.619 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] report_interval                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.619 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] rescue_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.619 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] reserved_host_cpus             = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.619 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] reserved_host_disk_mb          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.619 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] reserved_host_memory_mb        = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.620 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] reserved_huge_pages            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.620 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] resize_confirm_window          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.620 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] resize_fs_using_block_device   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.620 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] resume_guests_state_on_host_boot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.620 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] rootwrap_config                = /etc/nova/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.620 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] rpc_response_timeout           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.621 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] run_external_periodic_tasks    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.621 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] running_deleted_instance_action = reap log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.621 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] running_deleted_instance_poll_interval = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.621 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] running_deleted_instance_timeout = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.621 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] scheduler_instance_sync_interval = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.622 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] service_down_time              = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.622 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] servicegroup_driver            = db log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.622 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] shelved_offload_time           = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.622 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] shelved_poll_interval          = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.622 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] shutdown_timeout               = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.622 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] source_is_ipv6                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.623 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] ssl_only                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.623 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] state_path                     = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.623 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] sync_power_state_interval      = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.623 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] sync_power_state_pool_size     = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.623 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.623 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] tempdir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.624 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] timeout_nbd                    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.624 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.624 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] update_resources_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.624 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] use_cow_images                 = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.624 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.625 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.625 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.625 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] use_rootwrap_daemon            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.625 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.625 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.625 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] vcpu_pin_set                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.626 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] vif_plugging_is_fatal          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.626 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] vif_plugging_timeout           = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.626 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] virt_mkfs                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.626 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] volume_usage_poll_interval     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.626 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.627 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] web                            = /usr/share/spice-html5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.627 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.627 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] oslo_concurrency.lock_path     = /var/lib/nova/tmp log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.627 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.627 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.627 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.628 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.628 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.628 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] api.auth_strategy              = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.628 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] api.compute_link_prefix        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.628 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] api.config_drive_skip_versions = 1.0 2007-01-19 2007-03-01 2007-08-29 2007-10-10 2007-12-15 2008-02-01 2008-09-01 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.629 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] api.dhcp_domain                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.629 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] api.enable_instance_password   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.629 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] api.glance_link_prefix         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.629 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] api.instance_list_cells_batch_fixed_size = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.629 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] api.instance_list_cells_batch_strategy = distributed log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.629 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] api.instance_list_per_project_cells = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.630 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] api.list_records_by_skipping_down_cells = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.630 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] api.local_metadata_per_cell    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.630 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] api.max_limit                  = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.630 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] api.metadata_cache_expiration  = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.630 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] api.neutron_default_tenant_id  = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.631 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] api.use_forwarded_for          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.631 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] api.use_neutron_default_nets   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.631 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] api.vendordata_dynamic_connect_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.631 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] api.vendordata_dynamic_failure_fatal = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.631 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] api.vendordata_dynamic_read_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.631 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] api.vendordata_dynamic_ssl_certfile =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.632 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] api.vendordata_dynamic_targets = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.632 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] api.vendordata_jsonfile_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.632 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] api.vendordata_providers       = ['StaticJSON'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.632 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] cache.backend                  = oslo_cache.dict log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.632 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] cache.backend_argument         = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.633 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] cache.config_prefix            = cache.oslo log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.633 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] cache.dead_timeout             = 60.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.633 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] cache.debug_cache_backend      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.633 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] cache.enable_retry_client      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.633 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] cache.enable_socket_keepalive  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.633 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] cache.enabled                  = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.634 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] cache.expiration_time          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.634 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] cache.hashclient_retry_attempts = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.634 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] cache.hashclient_retry_delay   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.634 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] cache.memcache_dead_retry      = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.634 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] cache.memcache_password        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.634 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] cache.memcache_pool_connection_get_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.635 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] cache.memcache_pool_flush_on_reconnect = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.635 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] cache.memcache_pool_maxsize    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.635 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] cache.memcache_pool_unused_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.635 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] cache.memcache_sasl_enabled    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.635 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] cache.memcache_servers         = ['localhost:11211'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.636 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] cache.memcache_socket_timeout  = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.636 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] cache.memcache_username        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.636 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] cache.proxies                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.636 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] cache.retry_attempts           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.636 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] cache.retry_delay              = 0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.636 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] cache.socket_keepalive_count   = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.637 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] cache.socket_keepalive_idle    = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.637 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] cache.socket_keepalive_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.637 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] cache.tls_allowed_ciphers      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.637 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] cache.tls_cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.637 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] cache.tls_certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.637 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] cache.tls_enabled              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.638 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] cache.tls_keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.638 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] cinder.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.638 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] cinder.auth_type               = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.638 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] cinder.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.638 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] cinder.catalog_info            = volumev3:cinderv3:internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.639 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] cinder.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.639 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] cinder.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.639 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] cinder.cross_az_attach         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.639 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] cinder.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.639 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] cinder.endpoint_template       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.639 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] cinder.http_retries            = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.640 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] cinder.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.640 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] cinder.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.640 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] cinder.os_region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.640 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] cinder.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.640 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] cinder.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.640 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] compute.consecutive_build_service_disable_threshold = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.641 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] compute.cpu_dedicated_set      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.641 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] compute.cpu_shared_set         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.641 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] compute.image_type_exclude_list = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.641 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] compute.live_migration_wait_for_vif_plug = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.641 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] compute.max_concurrent_disk_ops = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.641 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] compute.max_disk_devices_to_attach = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.642 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] compute.packing_host_numa_cells_allocation_strategy = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.642 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] compute.provider_config_location = /etc/nova/provider_config/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.642 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] compute.resource_provider_association_refresh = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.642 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] compute.shutdown_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.642 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] compute.vmdk_allowed_types     = ['streamOptimized', 'monolithicSparse'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.643 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] conductor.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.643 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] console.allowed_origins        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.643 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] console.ssl_ciphers            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.643 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] console.ssl_minimum_version    = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.643 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] consoleauth.token_ttl          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.643 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] cyborg.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.644 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] cyborg.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.644 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] cyborg.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.644 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] cyborg.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.644 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] cyborg.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.644 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] cyborg.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.644 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] cyborg.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.645 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] cyborg.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.645 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] cyborg.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.645 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] cyborg.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.645 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] cyborg.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.645 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] cyborg.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.645 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] cyborg.service_type            = accelerator log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.646 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] cyborg.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.646 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] cyborg.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.646 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] cyborg.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.646 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] cyborg.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.646 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] cyborg.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.646 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] cyborg.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.647 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] database.backend               = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.647 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] database.connection            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.647 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] database.connection_debug      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.647 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.647 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.648 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] database.connection_trace      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.648 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.648 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] database.db_max_retries        = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.648 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.648 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] database.db_retry_interval     = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.648 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] database.max_overflow          = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.649 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] database.max_pool_size         = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.649 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] database.max_retries           = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.649 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] database.mysql_enable_ndb      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.649 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] database.mysql_sql_mode        = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.649 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.650 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] database.pool_timeout          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.650 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] database.retry_interval        = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.650 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] database.slave_connection      = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.650 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] database.sqlite_synchronous    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.650 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] api_database.backend           = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.650 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] api_database.connection        = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.651 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] api_database.connection_debug  = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.651 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] api_database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.651 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] api_database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.651 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] api_database.connection_trace  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.651 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] api_database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.651 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] api_database.db_max_retries    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.652 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] api_database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.652 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] api_database.db_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.652 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] api_database.max_overflow      = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.652 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] api_database.max_pool_size     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.652 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] api_database.max_retries       = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.653 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] api_database.mysql_enable_ndb  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.653 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] api_database.mysql_sql_mode    = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.653 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] api_database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.653 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] api_database.pool_timeout      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.653 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] api_database.retry_interval    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.653 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] api_database.slave_connection  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.654 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] api_database.sqlite_synchronous = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.654 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] devices.enabled_mdev_types     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.654 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] ephemeral_storage_encryption.cipher = aes-xts-plain64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.654 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] ephemeral_storage_encryption.enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.654 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] ephemeral_storage_encryption.key_size = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.654 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] glance.api_servers             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.655 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] glance.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.655 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] glance.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.655 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] glance.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.655 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] glance.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.655 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] glance.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.655 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] glance.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.656 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] glance.default_trusted_certificate_ids = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.656 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] glance.enable_certificate_validation = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.656 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] glance.enable_rbd_download     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.656 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] glance.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.656 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] glance.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.657 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] glance.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.657 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] glance.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.657 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] glance.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.657 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] glance.num_retries             = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.657 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] glance.rbd_ceph_conf           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.657 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] glance.rbd_connect_timeout     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.658 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] glance.rbd_pool                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.658 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] glance.rbd_user                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.658 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] glance.region_name             = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.658 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] glance.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.658 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] glance.service_type            = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.659 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] glance.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.659 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] glance.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.659 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] glance.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.659 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] glance.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.659 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] glance.valid_interfaces        = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.659 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] glance.verify_glance_signatures = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.660 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] glance.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.660 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] guestfs.debug                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.660 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] hyperv.config_drive_cdrom      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.660 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] hyperv.config_drive_inject_password = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.660 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] hyperv.dynamic_memory_ratio    = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.660 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] hyperv.enable_instance_metrics_collection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.661 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] hyperv.enable_remotefx         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.661 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] hyperv.instances_path_share    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.661 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] hyperv.iscsi_initiator_list    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.661 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] hyperv.limit_cpu_features      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.661 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] hyperv.mounted_disk_query_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.662 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] hyperv.mounted_disk_query_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.662 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] hyperv.power_state_check_timeframe = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.662 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] hyperv.power_state_event_polling_interval = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.662 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] hyperv.qemu_img_cmd            = qemu-img.exe log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.662 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] hyperv.use_multipath_io        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.662 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] hyperv.volume_attach_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.663 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] hyperv.volume_attach_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.663 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] hyperv.vswitch_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.663 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] hyperv.wait_soft_reboot_seconds = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.663 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] mks.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.663 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] mks.mksproxy_base_url          = http://127.0.0.1:6090/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.664 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] image_cache.manager_interval   = 2400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.664 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] image_cache.precache_concurrency = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.664 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] image_cache.remove_unused_base_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.664 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] image_cache.remove_unused_original_minimum_age_seconds = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.664 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] image_cache.remove_unused_resized_minimum_age_seconds = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.665 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] image_cache.subdirectory_name  = _base log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.665 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] ironic.api_max_retries         = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.665 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] ironic.api_retry_interval      = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.665 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.665 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.665 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.666 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.666 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.666 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.666 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.666 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.666 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.667 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.667 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.667 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.667 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] ironic.partition_key           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.667 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] ironic.peer_list               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.667 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.668 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] ironic.serial_console_state_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.668 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.668 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] ironic.service_type            = baremetal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.668 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.668 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.668 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.669 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.669 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] ironic.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.669 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.669 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] key_manager.backend            = barbican log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.669 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] key_manager.fixed_key          = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.669 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] barbican.auth_endpoint         = http://localhost/identity/v3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.670 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] barbican.barbican_api_version  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.670 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] barbican.barbican_endpoint     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.670 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] barbican.barbican_endpoint_type = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.670 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] barbican.barbican_region_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.670 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] barbican.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.671 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] barbican.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.671 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] barbican.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.671 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] barbican.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.671 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] barbican.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.671 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] barbican.number_of_retries     = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.671 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] barbican.retry_delay           = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.672 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] barbican.send_service_user_token = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.672 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] barbican.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.672 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] barbican.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.672 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] barbican.verify_ssl            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.672 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] barbican.verify_ssl_path       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.672 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] barbican_service_user.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.673 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] barbican_service_user.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.673 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] barbican_service_user.cafile   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.673 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] barbican_service_user.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.673 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] barbican_service_user.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.673 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] barbican_service_user.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.673 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] barbican_service_user.keyfile  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.674 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] barbican_service_user.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.674 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] barbican_service_user.timeout  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.674 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] vault.approle_role_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.674 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] vault.approle_secret_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.674 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] vault.cafile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.674 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] vault.certfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.675 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] vault.collect_timing           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.675 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] vault.insecure                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.675 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] vault.keyfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.675 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] vault.kv_mountpoint            = secret log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.675 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] vault.kv_version               = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.675 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] vault.namespace                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.676 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] vault.root_token_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.676 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] vault.split_loggers            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.676 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] vault.ssl_ca_crt_file          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.676 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] vault.timeout                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.676 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] vault.use_ssl                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.677 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] vault.vault_url                = http://127.0.0.1:8200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.677 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] keystone.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.677 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] keystone.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.677 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] keystone.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.677 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] keystone.connect_retries       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.677 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] keystone.connect_retry_delay   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.678 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] keystone.endpoint_override     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.678 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] keystone.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.678 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] keystone.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.678 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] keystone.max_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.678 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] keystone.min_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.678 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] keystone.region_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.679 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] keystone.service_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.679 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] keystone.service_type          = identity log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.679 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] keystone.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.679 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] keystone.status_code_retries   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.679 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] keystone.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.679 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] keystone.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.680 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] keystone.valid_interfaces      = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.680 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] keystone.version               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.680 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] libvirt.connection_uri         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.680 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] libvirt.cpu_mode               = host-model log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.680 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] libvirt.cpu_model_extra_flags  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.680 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] libvirt.cpu_models             = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.681 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] libvirt.cpu_power_governor_high = performance log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.681 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] libvirt.cpu_power_governor_low = powersave log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.681 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] libvirt.cpu_power_management   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.681 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] libvirt.cpu_power_management_strategy = cpu_state log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.681 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] libvirt.device_detach_attempts = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.682 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] libvirt.device_detach_timeout  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.682 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] libvirt.disk_cachemodes        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.682 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] libvirt.disk_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.682 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] libvirt.enabled_perf_events    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.682 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] libvirt.file_backed_memory     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.682 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] libvirt.gid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.683 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] libvirt.hw_disk_discard        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.683 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] libvirt.hw_machine_type        = ['x86_64=q35'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.683 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] libvirt.images_rbd_ceph_conf   = /etc/ceph/ceph.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.683 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] libvirt.images_rbd_glance_copy_poll_interval = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.683 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] libvirt.images_rbd_glance_copy_timeout = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.684 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] libvirt.images_rbd_glance_store_name = default_backend log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.684 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] libvirt.images_rbd_pool        = vms log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.684 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] libvirt.images_type            = rbd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.684 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] libvirt.images_volume_group    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.684 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] libvirt.inject_key             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.684 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] libvirt.inject_partition       = -2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.685 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] libvirt.inject_password        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.685 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] libvirt.iscsi_iface            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.685 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] libvirt.iser_use_multipath     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.685 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] libvirt.live_migration_bandwidth = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.685 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] libvirt.live_migration_completion_timeout = 800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.685 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] libvirt.live_migration_downtime = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.686 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] libvirt.live_migration_downtime_delay = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.686 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] libvirt.live_migration_downtime_steps = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.686 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] libvirt.live_migration_inbound_addr = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.686 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] libvirt.live_migration_permit_auto_converge = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.686 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] libvirt.live_migration_permit_post_copy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.686 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] libvirt.live_migration_scheme  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.687 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] libvirt.live_migration_timeout_action = force_complete log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.687 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] libvirt.live_migration_tunnelled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.687 2 WARNING oslo_config.cfg [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] Deprecated: Option "live_migration_uri" from group "libvirt" is deprecated for removal (
Oct  9 09:50:46 compute-1 nova_compute[162974]: live_migration_uri is deprecated for removal in favor of two other options that
Oct  9 09:50:46 compute-1 nova_compute[162974]: allow to change live migration scheme and target URI: ``live_migration_scheme``
Oct  9 09:50:46 compute-1 nova_compute[162974]: and ``live_migration_inbound_addr`` respectively.
Oct  9 09:50:46 compute-1 nova_compute[162974]: ).  Its value may be silently ignored in the future.
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.687 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] libvirt.live_migration_uri     = qemu+tls://%s/system log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.687 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] libvirt.live_migration_with_native_tls = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.688 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] libvirt.max_queues             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.688 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] libvirt.mem_stats_period_seconds = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.688 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] libvirt.nfs_mount_options      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.688 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] libvirt.nfs_mount_point_base   = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.688 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] libvirt.num_aoe_discover_tries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.689 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] libvirt.num_iser_scan_tries    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.689 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] libvirt.num_memory_encrypted_guests = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.689 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] libvirt.num_nvme_discover_tries = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.689 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] libvirt.num_pcie_ports         = 24 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.689 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] libvirt.num_volume_scan_tries  = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.689 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] libvirt.pmem_namespaces        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.690 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] libvirt.quobyte_client_cfg     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.690 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] libvirt.quobyte_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.690 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] libvirt.rbd_connect_timeout    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.690 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] libvirt.rbd_destroy_volume_retries = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.690 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] libvirt.rbd_destroy_volume_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.690 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] libvirt.rbd_secret_uuid        = 286f8bf0-da72-5823-9a4e-ac4457d9e609 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.691 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] libvirt.rbd_user               = openstack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.691 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] libvirt.realtime_scheduler_priority = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.691 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] libvirt.remote_filesystem_transport = ssh log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.691 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] libvirt.rescue_image_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.691 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] libvirt.rescue_kernel_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.692 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] libvirt.rescue_ramdisk_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.692 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] libvirt.rng_dev_path           = /dev/urandom log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.692 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] libvirt.rx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.692 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] libvirt.smbfs_mount_options    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.692 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] libvirt.smbfs_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.692 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] libvirt.snapshot_compression   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.693 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] libvirt.snapshot_image_format  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.693 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] libvirt.snapshots_directory    = /var/lib/nova/instances/snapshots log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.693 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] libvirt.sparse_logical_volumes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.693 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] libvirt.swtpm_enabled          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.693 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] libvirt.swtpm_group            = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.694 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] libvirt.swtpm_user             = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.694 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] libvirt.sysinfo_serial         = unique log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.694 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] libvirt.tx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.694 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] libvirt.uid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.694 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] libvirt.use_virtio_for_bridges = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.694 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] libvirt.virt_type              = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.695 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] libvirt.volume_clear           = zero log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.695 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] libvirt.volume_clear_size      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.695 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] libvirt.volume_use_multipath   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.695 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] libvirt.vzstorage_cache_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.695 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] libvirt.vzstorage_log_path     = /var/log/vstorage/%(cluster_name)s/nova.log.gz log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.695 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] libvirt.vzstorage_mount_group  = qemu log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.696 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] libvirt.vzstorage_mount_opts   = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.696 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] libvirt.vzstorage_mount_perms  = 0770 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.696 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] libvirt.vzstorage_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.696 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] libvirt.vzstorage_mount_user   = stack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.696 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] libvirt.wait_soft_reboot_seconds = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.697 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] neutron.auth_section           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.697 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] neutron.auth_type              = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.697 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] neutron.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.697 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] neutron.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.697 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] neutron.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.697 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] neutron.connect_retries        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.698 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] neutron.connect_retry_delay    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.698 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] neutron.default_floating_pool  = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.698 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] neutron.endpoint_override      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.698 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] neutron.extension_sync_interval = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.698 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] neutron.http_retries           = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.698 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] neutron.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.699 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] neutron.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.699 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] neutron.max_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.699 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] neutron.metadata_proxy_shared_secret = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.699 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] neutron.min_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.699 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] neutron.ovs_bridge             = br-int log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.699 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] neutron.physnets               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.700 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] neutron.region_name            = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.700 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] neutron.service_metadata_proxy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.700 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] neutron.service_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.700 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] neutron.service_type           = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.700 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] neutron.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.701 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] neutron.status_code_retries    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.701 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] neutron.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.701 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] neutron.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.701 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] neutron.valid_interfaces       = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.701 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] neutron.version                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.701 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] notifications.bdms_in_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.702 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] notifications.default_level    = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.702 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] notifications.notification_format = unversioned log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.702 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] notifications.notify_on_state_change = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.702 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] notifications.versioned_notifications_topics = ['versioned_notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.702 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] pci.alias                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.702 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] pci.device_spec                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.703 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] pci.report_in_placement        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.703 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.703 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] placement.auth_type            = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.703 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] placement.auth_url             = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.703 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.703 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.704 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.704 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] placement.connect_retries      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.704 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] placement.connect_retry_delay  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.704 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] placement.default_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.704 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] placement.default_domain_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.705 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] placement.domain_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.705 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] placement.domain_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.705 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] placement.endpoint_override    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.705 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.705 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.705 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] placement.max_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.706 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] placement.min_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.706 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] placement.password             = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.706 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] placement.project_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.706 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] placement.project_domain_name  = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.706 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] placement.project_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.706 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] placement.project_name         = service log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.707 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] placement.region_name          = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.707 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] placement.service_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.707 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] placement.service_type         = placement log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.707 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.707 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] placement.status_code_retries  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.707 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] placement.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.708 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] placement.system_scope         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.708 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.708 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] placement.trust_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.708 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] placement.user_domain_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.708 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] placement.user_domain_name     = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.708 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] placement.user_id              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.709 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] placement.username             = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.709 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] placement.valid_interfaces     = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.709 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] placement.version              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.709 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] quota.cores                    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.709 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] quota.count_usage_from_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.709 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] quota.driver                   = nova.quota.DbQuotaDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.710 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] quota.injected_file_content_bytes = 10240 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.710 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] quota.injected_file_path_length = 255 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.710 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] quota.injected_files           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.710 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] quota.instances                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.710 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] quota.key_pairs                = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.710 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] quota.metadata_items           = 128 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.711 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] quota.ram                      = 51200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.711 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] quota.recheck_quota            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.711 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] quota.server_group_members     = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.711 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] quota.server_groups            = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.711 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] rdp.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.712 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] rdp.html5_proxy_base_url       = http://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.712 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] scheduler.discover_hosts_in_cells_interval = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.712 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] scheduler.enable_isolated_aggregate_filtering = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.712 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] scheduler.image_metadata_prefilter = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.712 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] scheduler.limit_tenants_to_placement_aggregate = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.713 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] scheduler.max_attempts         = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.713 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] scheduler.max_placement_results = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.713 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] scheduler.placement_aggregate_required_for_tenants = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.713 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] scheduler.query_placement_for_availability_zone = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.713 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] scheduler.query_placement_for_image_type_support = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.713 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] scheduler.query_placement_for_routed_network_aggregates = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.714 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] scheduler.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.714 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] filter_scheduler.aggregate_image_properties_isolation_namespace = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.714 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] filter_scheduler.aggregate_image_properties_isolation_separator = . log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.714 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] filter_scheduler.available_filters = ['nova.scheduler.filters.all_filters'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.714 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] filter_scheduler.build_failure_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.714 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] filter_scheduler.cpu_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.715 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] filter_scheduler.cross_cell_move_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.715 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] filter_scheduler.disk_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.715 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] filter_scheduler.enabled_filters = ['ComputeFilter', 'ComputeCapabilitiesFilter', 'ImagePropertiesFilter', 'ServerGroupAntiAffinityFilter', 'ServerGroupAffinityFilter'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.715 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] filter_scheduler.host_subset_size = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.715 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] filter_scheduler.image_properties_default_architecture = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.715 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] filter_scheduler.io_ops_weight_multiplier = -1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.716 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] filter_scheduler.isolated_hosts = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.716 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] filter_scheduler.isolated_images = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.716 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] filter_scheduler.max_instances_per_host = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.716 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] filter_scheduler.max_io_ops_per_host = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.716 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] filter_scheduler.pci_in_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.717 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] filter_scheduler.pci_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.717 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] filter_scheduler.ram_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.717 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] filter_scheduler.restrict_isolated_hosts_to_isolated_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.717 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] filter_scheduler.shuffle_best_same_weighed_hosts = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.717 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] filter_scheduler.soft_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.717 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] filter_scheduler.soft_anti_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.718 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] filter_scheduler.track_instance_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.718 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] filter_scheduler.weight_classes = ['nova.scheduler.weights.all_weighers'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.718 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] metrics.required               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.718 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] metrics.weight_multiplier      = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.718 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] metrics.weight_of_unavailable  = -10000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.718 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] metrics.weight_setting         = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.719 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] serial_console.base_url        = ws://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.719 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] serial_console.enabled         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.719 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] serial_console.port_range      = 10000:20000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.719 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] serial_console.proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.719 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] serial_console.serialproxy_host = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.720 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] serial_console.serialproxy_port = 6083 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.720 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] service_user.auth_section      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.720 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] service_user.auth_type         = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.720 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] service_user.cafile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.720 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] service_user.certfile          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.720 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] service_user.collect_timing    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.721 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] service_user.insecure          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.721 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] service_user.keyfile           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.721 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] service_user.send_service_user_token = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.721 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] service_user.split_loggers     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.721 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] service_user.timeout           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.721 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] spice.agent_enabled            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.722 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] spice.enabled                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.722 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] spice.html5proxy_base_url      = http://127.0.0.1:6082/spice_auto.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.722 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] spice.html5proxy_host          = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.722 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] spice.html5proxy_port          = 6082 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.722 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] spice.image_compression        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.723 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] spice.jpeg_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.723 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] spice.playback_compression     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.723 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] spice.server_listen            = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.723 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] spice.server_proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.723 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] spice.streaming_mode           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.723 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] spice.zlib_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.724 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] upgrade_levels.baseapi         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.724 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] upgrade_levels.cert            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.724 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] upgrade_levels.compute         = auto log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.724 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] upgrade_levels.conductor       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.724 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] upgrade_levels.scheduler       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.725 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] vendordata_dynamic_auth.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.725 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] vendordata_dynamic_auth.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.725 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] vendordata_dynamic_auth.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.725 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] vendordata_dynamic_auth.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.725 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] vendordata_dynamic_auth.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.725 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] vendordata_dynamic_auth.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.726 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] vendordata_dynamic_auth.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.726 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] vendordata_dynamic_auth.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.726 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] vendordata_dynamic_auth.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.726 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] vmware.api_retry_count         = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.726 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] vmware.ca_file                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.726 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] vmware.cache_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.727 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] vmware.cluster_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.727 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] vmware.connection_pool_size    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.727 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] vmware.console_delay_seconds   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.727 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] vmware.datastore_regex         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.727 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] vmware.host_ip                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.727 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] vmware.host_password           = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.728 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] vmware.host_port               = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.728 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] vmware.host_username           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.728 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] vmware.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.728 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] vmware.integration_bridge      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.728 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] vmware.maximum_objects         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.728 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] vmware.pbm_default_policy      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.729 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] vmware.pbm_enabled             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.729 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] vmware.pbm_wsdl_location       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.729 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] vmware.serial_log_dir          = /opt/vmware/vspc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.729 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] vmware.serial_port_proxy_uri   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.729 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] vmware.serial_port_service_uri = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.729 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] vmware.task_poll_interval      = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.730 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] vmware.use_linked_clone        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.730 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] vmware.vnc_keymap              = en-us log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.730 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] vmware.vnc_port                = 5900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.730 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] vmware.vnc_port_total          = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.730 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] vnc.auth_schemes               = ['none'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.731 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] vnc.enabled                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.731 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] vnc.novncproxy_base_url        = https://nova-novncproxy-cell1-public-openstack.apps-crc.testing/vnc_lite.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.731 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] vnc.novncproxy_host            = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.731 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] vnc.novncproxy_port            = 6080 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.731 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] vnc.server_listen              = ::0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.732 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] vnc.server_proxyclient_address = 192.168.122.101 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.732 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] vnc.vencrypt_ca_certs          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.732 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] vnc.vencrypt_client_cert       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.732 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] vnc.vencrypt_client_key        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.732 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] workarounds.disable_compute_service_check_for_ffu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.732 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] workarounds.disable_deep_image_inspection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.733 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] workarounds.disable_fallback_pcpu_query = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.733 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] workarounds.disable_group_policy_check_upcall = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.733 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] workarounds.disable_libvirt_livesnapshot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.733 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] workarounds.disable_rootwrap   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.733 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] workarounds.enable_numa_live_migration = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.734 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] workarounds.enable_qemu_monitor_announce_self = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.734 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] workarounds.ensure_libvirt_rbd_instance_dir_cleanup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.734 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] workarounds.handle_virt_lifecycle_events = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.734 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] workarounds.libvirt_disable_apic = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.734 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] workarounds.never_download_image_if_on_rbd = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.734 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] workarounds.qemu_monitor_announce_self_count = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.735 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] workarounds.qemu_monitor_announce_self_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.735 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] workarounds.reserve_disk_resource_for_image_cache = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.735 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] workarounds.skip_cpu_compare_at_startup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.735 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] workarounds.skip_cpu_compare_on_dest = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.735 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] workarounds.skip_hypervisor_version_check_on_lm = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.735 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] workarounds.skip_reserve_in_use_ironic_nodes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.736 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] workarounds.unified_limits_count_pcpu_as_vcpu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.736 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] workarounds.wait_for_vif_plugged_event_during_hard_reboot = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.736 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] wsgi.api_paste_config          = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.736 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] wsgi.client_socket_timeout     = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.736 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] wsgi.default_pool_size         = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.736 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] wsgi.keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.737 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] wsgi.max_header_line           = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.737 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] wsgi.secure_proxy_ssl_header   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.737 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] wsgi.ssl_ca_file               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.737 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] wsgi.ssl_cert_file             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.737 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] wsgi.ssl_key_file              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.737 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] wsgi.tcp_keepidle              = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.738 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] wsgi.wsgi_log_format           = %(client_ip)s "%(request_line)s" status: %(status_code)s len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.738 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] zvm.ca_file                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.738 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] zvm.cloud_connector_url        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.738 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] zvm.image_tmp_path             = /var/lib/nova/images log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.738 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] zvm.reachable_timeout          = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.739 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] oslo_policy.enforce_new_defaults = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.739 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] oslo_policy.enforce_scope      = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.739 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.739 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.739 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.739 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.740 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.740 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.740 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.740 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.740 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.741 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.741 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] remote_debug.host              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.741 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] remote_debug.port              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.741 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.741 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] oslo_messaging_rabbit.amqp_durable_queues = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.741 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.742 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.742 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.742 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.742 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.742 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.742 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.743 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.743 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.743 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.743 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.743 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.743 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.744 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.744 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.744 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.744 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.744 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.745 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_queue = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.745 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.745 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.745 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.745 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.745 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.746 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.746 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.746 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.746 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.746 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.746 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] oslo_messaging_notifications.driver = ['noop'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.747 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.747 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.747 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.747 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] oslo_limit.auth_section        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.747 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] oslo_limit.auth_type           = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.748 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] oslo_limit.auth_url            = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.748 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] oslo_limit.cafile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.748 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] oslo_limit.certfile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.748 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] oslo_limit.collect_timing      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.748 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] oslo_limit.connect_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.748 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] oslo_limit.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.749 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] oslo_limit.default_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.749 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] oslo_limit.default_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.749 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] oslo_limit.domain_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.749 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] oslo_limit.domain_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.749 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] oslo_limit.endpoint_id         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.749 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] oslo_limit.endpoint_override   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.750 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] oslo_limit.insecure            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.750 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] oslo_limit.keyfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.750 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] oslo_limit.max_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.750 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] oslo_limit.min_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.750 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] oslo_limit.password            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.750 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] oslo_limit.project_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.751 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] oslo_limit.project_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.751 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] oslo_limit.project_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.751 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] oslo_limit.project_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.751 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] oslo_limit.region_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.751 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] oslo_limit.service_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.751 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] oslo_limit.service_type        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.752 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] oslo_limit.split_loggers       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.752 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] oslo_limit.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.752 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] oslo_limit.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.752 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] oslo_limit.system_scope        = all log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.752 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] oslo_limit.timeout             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.752 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] oslo_limit.trust_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.753 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] oslo_limit.user_domain_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.753 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] oslo_limit.user_domain_name    = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.753 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] oslo_limit.user_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.753 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] oslo_limit.username            = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.753 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] oslo_limit.valid_interfaces    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.753 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] oslo_limit.version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.754 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] oslo_reports.file_event_handler = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.754 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.754 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] oslo_reports.log_dir           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.754 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] vif_plug_linux_bridge_privileged.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.755 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] vif_plug_linux_bridge_privileged.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.755 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] vif_plug_linux_bridge_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.755 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] vif_plug_linux_bridge_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.755 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] vif_plug_linux_bridge_privileged.thread_pool_size = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.755 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] vif_plug_linux_bridge_privileged.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.755 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] vif_plug_ovs_privileged.capabilities = [12, 1] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.756 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] vif_plug_ovs_privileged.group  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.756 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] vif_plug_ovs_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.756 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] vif_plug_ovs_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.756 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] vif_plug_ovs_privileged.thread_pool_size = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.756 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] vif_plug_ovs_privileged.user   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.756 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] os_vif_linux_bridge.flat_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.757 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] os_vif_linux_bridge.forward_bridge_interface = ['all'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.757 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] os_vif_linux_bridge.iptables_bottom_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.757 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] os_vif_linux_bridge.iptables_drop_action = DROP log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.757 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] os_vif_linux_bridge.iptables_top_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.757 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] os_vif_linux_bridge.network_device_mtu = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.758 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] os_vif_linux_bridge.use_ipv6   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.758 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] os_vif_linux_bridge.vlan_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.758 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] os_vif_ovs.isolate_vif         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.758 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] os_vif_ovs.network_device_mtu  = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.758 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] os_vif_ovs.ovs_vsctl_timeout   = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.758 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] os_vif_ovs.ovsdb_connection    = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.759 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] os_vif_ovs.ovsdb_interface     = native log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.759 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] os_vif_ovs.per_port_bridge     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.759 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] os_brick.lock_path             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.759 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] os_brick.wait_mpath_device_attempts = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.759 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] os_brick.wait_mpath_device_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.759 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] privsep_osbrick.capabilities   = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.760 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] privsep_osbrick.group          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.760 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] privsep_osbrick.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.760 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] privsep_osbrick.logger_name    = os_brick.privileged log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.760 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] privsep_osbrick.thread_pool_size = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.760 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] privsep_osbrick.user           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.761 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] nova_sys_admin.capabilities    = [0, 1, 2, 3, 12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.761 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] nova_sys_admin.group           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.761 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] nova_sys_admin.helper_command  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.761 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] nova_sys_admin.logger_name     = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.761 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] nova_sys_admin.thread_pool_size = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.761 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] nova_sys_admin.user            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.762 2 DEBUG oslo_service.service [None req-31cc657a-5f29-40ad-81d6-8fa81443a0d9 - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.762 2 INFO nova.service [-] Starting compute node (version 27.5.2-0.20250829104910.6f8decf.el9)#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.774 2 INFO nova.virt.node [None req-433ad811-a058-4508-a429-c5f40f506b5b - - - - - -] Determined node identity 79aa81b0-5a5d-4643-a355-ec5461cb321a from /var/lib/nova/compute_id#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.774 2 DEBUG nova.virt.libvirt.host [None req-433ad811-a058-4508-a429-c5f40f506b5b - - - - - -] Starting native event thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:492#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.775 2 DEBUG nova.virt.libvirt.host [None req-433ad811-a058-4508-a429-c5f40f506b5b - - - - - -] Starting green dispatch thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:498#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.775 2 DEBUG nova.virt.libvirt.host [None req-433ad811-a058-4508-a429-c5f40f506b5b - - - - - -] Starting connection event dispatch thread initialize /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:620#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.775 2 DEBUG nova.virt.libvirt.host [None req-433ad811-a058-4508-a429-c5f40f506b5b - - - - - -] Connecting to libvirt: qemu:///system _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:503#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.784 2 DEBUG nova.virt.libvirt.host [None req-433ad811-a058-4508-a429-c5f40f506b5b - - - - - -] Registering for lifecycle events <nova.virt.libvirt.host.Host object at 0x7f4d35e92f70> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:509#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.787 2 DEBUG nova.virt.libvirt.host [None req-433ad811-a058-4508-a429-c5f40f506b5b - - - - - -] Registering for connection events: <nova.virt.libvirt.host.Host object at 0x7f4d35e92f70> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:530#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.788 2 INFO nova.virt.libvirt.driver [None req-433ad811-a058-4508-a429-c5f40f506b5b - - - - - -] Connection event '1' reason 'None'#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.791 2 INFO nova.virt.libvirt.host [None req-433ad811-a058-4508-a429-c5f40f506b5b - - - - - -] Libvirt host capabilities <capabilities>
Oct  9 09:50:46 compute-1 nova_compute[162974]: 
Oct  9 09:50:46 compute-1 nova_compute[162974]:  <host>
Oct  9 09:50:46 compute-1 nova_compute[162974]:    <uuid>99ca1aa4-a8fe-49f8-8019-77dd20980206</uuid>
Oct  9 09:50:46 compute-1 nova_compute[162974]:    <cpu>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <arch>x86_64</arch>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model>EPYC-Milan-v1</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <vendor>AMD</vendor>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <microcode version='167776725'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <signature family='25' model='1' stepping='1'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <topology sockets='4' dies='1' clusters='1' cores='1' threads='1'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <maxphysaddr mode='emulate' bits='48'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <feature name='x2apic'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <feature name='tsc-deadline'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <feature name='osxsave'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <feature name='hypervisor'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <feature name='tsc_adjust'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <feature name='ospke'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <feature name='vaes'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <feature name='vpclmulqdq'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <feature name='spec-ctrl'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <feature name='stibp'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <feature name='arch-capabilities'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <feature name='ssbd'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <feature name='cmp_legacy'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <feature name='virt-ssbd'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <feature name='lbrv'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <feature name='tsc-scale'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <feature name='vmcb-clean'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <feature name='pause-filter'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <feature name='pfthreshold'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <feature name='v-vmsave-vmload'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <feature name='vgif'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <feature name='rdctl-no'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <feature name='skip-l1dfl-vmentry'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <feature name='mds-no'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <feature name='pschange-mc-no'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <pages unit='KiB' size='4'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <pages unit='KiB' size='2048'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <pages unit='KiB' size='1048576'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:    </cpu>
Oct  9 09:50:46 compute-1 nova_compute[162974]:    <power_management>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <suspend_mem/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:    </power_management>
Oct  9 09:50:46 compute-1 nova_compute[162974]:    <iommu support='no'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:    <migration_features>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <live/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <uri_transports>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <uri_transport>tcp</uri_transport>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <uri_transport>rdma</uri_transport>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      </uri_transports>
Oct  9 09:50:46 compute-1 nova_compute[162974]:    </migration_features>
Oct  9 09:50:46 compute-1 nova_compute[162974]:    <topology>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <cells num='1'>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <cell id='0'>
Oct  9 09:50:46 compute-1 nova_compute[162974]:          <memory unit='KiB'>7865152</memory>
Oct  9 09:50:46 compute-1 nova_compute[162974]:          <pages unit='KiB' size='4'>1966288</pages>
Oct  9 09:50:46 compute-1 nova_compute[162974]:          <pages unit='KiB' size='2048'>0</pages>
Oct  9 09:50:46 compute-1 nova_compute[162974]:          <pages unit='KiB' size='1048576'>0</pages>
Oct  9 09:50:46 compute-1 nova_compute[162974]:          <distances>
Oct  9 09:50:46 compute-1 nova_compute[162974]:            <sibling id='0' value='10'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:          </distances>
Oct  9 09:50:46 compute-1 nova_compute[162974]:          <cpus num='4'>
Oct  9 09:50:46 compute-1 nova_compute[162974]:            <cpu id='0' socket_id='0' die_id='0' cluster_id='65535' core_id='0' siblings='0'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:            <cpu id='1' socket_id='1' die_id='1' cluster_id='65535' core_id='0' siblings='1'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:            <cpu id='2' socket_id='2' die_id='2' cluster_id='65535' core_id='0' siblings='2'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:            <cpu id='3' socket_id='3' die_id='3' cluster_id='65535' core_id='0' siblings='3'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:          </cpus>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        </cell>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      </cells>
Oct  9 09:50:46 compute-1 nova_compute[162974]:    </topology>
Oct  9 09:50:46 compute-1 nova_compute[162974]:    <cache>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <bank id='0' level='2' type='both' size='512' unit='KiB' cpus='0'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <bank id='1' level='2' type='both' size='512' unit='KiB' cpus='1'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <bank id='2' level='2' type='both' size='512' unit='KiB' cpus='2'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <bank id='3' level='2' type='both' size='512' unit='KiB' cpus='3'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <bank id='0' level='3' type='both' size='16' unit='MiB' cpus='0'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <bank id='1' level='3' type='both' size='16' unit='MiB' cpus='1'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <bank id='2' level='3' type='both' size='16' unit='MiB' cpus='2'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <bank id='3' level='3' type='both' size='16' unit='MiB' cpus='3'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:    </cache>
Oct  9 09:50:46 compute-1 nova_compute[162974]:    <secmodel>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model>selinux</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <doi>0</doi>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <baselabel type='kvm'>system_u:system_r:svirt_t:s0</baselabel>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <baselabel type='qemu'>system_u:system_r:svirt_tcg_t:s0</baselabel>
Oct  9 09:50:46 compute-1 nova_compute[162974]:    </secmodel>
Oct  9 09:50:46 compute-1 nova_compute[162974]:    <secmodel>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model>dac</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <doi>0</doi>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <baselabel type='kvm'>+107:+107</baselabel>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <baselabel type='qemu'>+107:+107</baselabel>
Oct  9 09:50:46 compute-1 nova_compute[162974]:    </secmodel>
Oct  9 09:50:46 compute-1 nova_compute[162974]:  </host>
Oct  9 09:50:46 compute-1 nova_compute[162974]: 
Oct  9 09:50:46 compute-1 nova_compute[162974]:  <guest>
Oct  9 09:50:46 compute-1 nova_compute[162974]:    <os_type>hvm</os_type>
Oct  9 09:50:46 compute-1 nova_compute[162974]:    <arch name='i686'>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <wordsize>32</wordsize>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <emulator>/usr/libexec/qemu-kvm</emulator>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <machine canonical='pc-q35-rhel9.6.0' maxCpus='4096'>q35</machine>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <domain type='qemu'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <domain type='kvm'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:    </arch>
Oct  9 09:50:46 compute-1 nova_compute[162974]:    <features>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <pae/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <nonpae/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <acpi default='on' toggle='yes'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <apic default='on' toggle='no'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <cpuselection/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <deviceboot/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <disksnapshot default='on' toggle='no'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <externalSnapshot/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:    </features>
Oct  9 09:50:46 compute-1 nova_compute[162974]:  </guest>
Oct  9 09:50:46 compute-1 nova_compute[162974]: 
Oct  9 09:50:46 compute-1 nova_compute[162974]:  <guest>
Oct  9 09:50:46 compute-1 nova_compute[162974]:    <os_type>hvm</os_type>
Oct  9 09:50:46 compute-1 nova_compute[162974]:    <arch name='x86_64'>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <wordsize>64</wordsize>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <emulator>/usr/libexec/qemu-kvm</emulator>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <machine canonical='pc-q35-rhel9.6.0' maxCpus='4096'>q35</machine>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <domain type='qemu'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <domain type='kvm'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:    </arch>
Oct  9 09:50:46 compute-1 nova_compute[162974]:    <features>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <acpi default='on' toggle='yes'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <apic default='on' toggle='no'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <cpuselection/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <deviceboot/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <disksnapshot default='on' toggle='no'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <externalSnapshot/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:    </features>
Oct  9 09:50:46 compute-1 nova_compute[162974]:  </guest>
Oct  9 09:50:46 compute-1 nova_compute[162974]: 
Oct  9 09:50:46 compute-1 nova_compute[162974]: </capabilities>
Oct  9 09:50:46 compute-1 nova_compute[162974]: #033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.797 2 DEBUG nova.virt.libvirt.host [None req-433ad811-a058-4508-a429-c5f40f506b5b - - - - - -] Getting domain capabilities for i686 via machine types: {'q35', 'pc'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.798 2 DEBUG nova.virt.libvirt.volume.mount [None req-433ad811-a058-4508-a429-c5f40f506b5b - - - - - -] Initialising _HostMountState generation 0 host_up /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/mount.py:130
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.801 2 DEBUG nova.virt.libvirt.host [None req-433ad811-a058-4508-a429-c5f40f506b5b - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=q35:
Oct  9 09:50:46 compute-1 nova_compute[162974]: <domainCapabilities>
Oct  9 09:50:46 compute-1 nova_compute[162974]:  <path>/usr/libexec/qemu-kvm</path>
Oct  9 09:50:46 compute-1 nova_compute[162974]:  <domain>kvm</domain>
Oct  9 09:50:46 compute-1 nova_compute[162974]:  <machine>pc-q35-rhel9.6.0</machine>
Oct  9 09:50:46 compute-1 nova_compute[162974]:  <arch>i686</arch>
Oct  9 09:50:46 compute-1 nova_compute[162974]:  <vcpu max='4096'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:  <iothreads supported='yes'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:  <os supported='yes'>
Oct  9 09:50:46 compute-1 nova_compute[162974]:    <enum name='firmware'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:    <loader supported='yes'>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <enum name='type'>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <value>rom</value>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <value>pflash</value>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      </enum>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <enum name='readonly'>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <value>yes</value>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <value>no</value>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      </enum>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <enum name='secure'>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <value>no</value>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      </enum>
Oct  9 09:50:46 compute-1 nova_compute[162974]:    </loader>
Oct  9 09:50:46 compute-1 nova_compute[162974]:  </os>
Oct  9 09:50:46 compute-1 nova_compute[162974]:  <cpu>
Oct  9 09:50:46 compute-1 nova_compute[162974]:    <mode name='host-passthrough' supported='yes'>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <enum name='hostPassthroughMigratable'>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <value>on</value>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <value>off</value>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      </enum>
Oct  9 09:50:46 compute-1 nova_compute[162974]:    </mode>
Oct  9 09:50:46 compute-1 nova_compute[162974]:    <mode name='maximum' supported='yes'>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <enum name='maximumMigratable'>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <value>on</value>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <value>off</value>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      </enum>
Oct  9 09:50:46 compute-1 nova_compute[162974]:    </mode>
Oct  9 09:50:46 compute-1 nova_compute[162974]:    <mode name='host-model' supported='yes'>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model fallback='forbid'>EPYC-Milan</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <vendor>AMD</vendor>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <maxphysaddr mode='passthrough' limit='48'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <feature policy='require' name='x2apic'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <feature policy='require' name='tsc-deadline'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <feature policy='require' name='hypervisor'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <feature policy='require' name='tsc_adjust'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <feature policy='require' name='vaes'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <feature policy='require' name='vpclmulqdq'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <feature policy='require' name='spec-ctrl'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <feature policy='require' name='stibp'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <feature policy='require' name='arch-capabilities'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <feature policy='require' name='ssbd'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <feature policy='require' name='cmp_legacy'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <feature policy='require' name='overflow-recov'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <feature policy='require' name='succor'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <feature policy='require' name='virt-ssbd'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <feature policy='require' name='lbrv'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <feature policy='require' name='tsc-scale'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <feature policy='require' name='vmcb-clean'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <feature policy='require' name='flushbyasid'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <feature policy='require' name='pause-filter'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <feature policy='require' name='pfthreshold'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <feature policy='require' name='v-vmsave-vmload'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <feature policy='require' name='vgif'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <feature policy='require' name='lfence-always-serializing'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <feature policy='require' name='rdctl-no'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <feature policy='require' name='skip-l1dfl-vmentry'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <feature policy='require' name='mds-no'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <feature policy='require' name='pschange-mc-no'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <feature policy='require' name='gds-no'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <feature policy='require' name='rfds-no'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:    </mode>
Oct  9 09:50:46 compute-1 nova_compute[162974]:    <mode name='custom' supported='yes'>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <blockers model='Broadwell'>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='hle'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='rtm'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      </blockers>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <blockers model='Broadwell-IBRS'>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='hle'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='rtm'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      </blockers>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='yes' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='yes' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='no' vendor='Intel'>Broadwell-v1</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <blockers model='Broadwell-v1'>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='hle'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='rtm'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      </blockers>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='yes' vendor='Intel'>Broadwell-v2</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='no' vendor='Intel'>Broadwell-v3</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <blockers model='Broadwell-v3'>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='hle'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='rtm'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      </blockers>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='yes' vendor='Intel'>Broadwell-v4</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <blockers model='Cascadelake-Server'>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512bw'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512cd'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512dq'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512f'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512vl'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512vnni'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='hle'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='rtm'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      </blockers>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <blockers model='Cascadelake-Server-noTSX'>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512bw'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512cd'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512dq'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512f'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512vl'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512vnni'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='ibrs-all'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      </blockers>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <blockers model='Cascadelake-Server-v1'>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512bw'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512cd'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512dq'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512f'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512vl'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512vnni'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='hle'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='rtm'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      </blockers>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <blockers model='Cascadelake-Server-v2'>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512bw'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512cd'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512dq'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512f'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512vl'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512vnni'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='hle'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='ibrs-all'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='rtm'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      </blockers>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <blockers model='Cascadelake-Server-v3'>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512bw'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512cd'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512dq'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512f'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512vl'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512vnni'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='ibrs-all'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      </blockers>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <blockers model='Cascadelake-Server-v4'>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512bw'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512cd'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512dq'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512f'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512vl'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512vnni'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='ibrs-all'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      </blockers>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <blockers model='Cascadelake-Server-v5'>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512bw'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512cd'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512dq'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512f'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512vl'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512vnni'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='ibrs-all'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      </blockers>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <blockers model='Cooperlake'>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512-bf16'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512bw'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512cd'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512dq'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512f'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512vl'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512vnni'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='hle'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='ibrs-all'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='rtm'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='taa-no'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      </blockers>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <blockers model='Cooperlake-v1'>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512-bf16'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512bw'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512cd'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512dq'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512f'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512vl'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512vnni'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='hle'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='ibrs-all'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='rtm'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='taa-no'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      </blockers>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <blockers model='Cooperlake-v2'>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512-bf16'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512bw'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512cd'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512dq'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512f'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512vl'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512vnni'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='hle'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='ibrs-all'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='rtm'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='taa-no'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      </blockers>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <blockers model='Denverton'>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='mpx'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      </blockers>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='no' vendor='Intel'>Denverton-v1</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <blockers model='Denverton-v1'>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='mpx'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      </blockers>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='yes' vendor='Intel'>Denverton-v2</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='yes' vendor='Intel'>Denverton-v3</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='yes' vendor='Hygon'>Dhyana-v2</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <blockers model='EPYC-Genoa'>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='amd-psfd'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='auto-ibrs'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512-bf16'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512-vpopcntdq'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512bitalg'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512bw'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512cd'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512dq'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512f'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512ifma'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512vbmi'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512vbmi2'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512vl'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512vnni'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='gfni'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='la57'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='no-nested-data-bp'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='null-sel-clr-base'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='stibp-always-on'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      </blockers>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <blockers model='EPYC-Genoa-v1'>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='amd-psfd'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='auto-ibrs'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512-bf16'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512-vpopcntdq'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512bitalg'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512bw'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512cd'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512dq'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512f'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512ifma'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512vbmi'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512vbmi2'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512vl'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512vnni'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='gfni'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='la57'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='no-nested-data-bp'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='null-sel-clr-base'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='stibp-always-on'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      </blockers>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='yes' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='yes' vendor='AMD'>EPYC-Milan-v1</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <blockers model='EPYC-Milan-v2'>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='amd-psfd'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='no-nested-data-bp'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='null-sel-clr-base'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='stibp-always-on'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      </blockers>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='yes' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v1</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v2</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v3</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='yes' vendor='AMD'>EPYC-v1</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='yes' vendor='AMD'>EPYC-v2</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='yes' vendor='AMD'>EPYC-v3</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='yes' vendor='AMD'>EPYC-v4</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <blockers model='GraniteRapids'>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='amx-bf16'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='amx-fp16'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='amx-int8'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='amx-tile'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx-vnni'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512-bf16'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512-fp16'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512-vpopcntdq'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512bitalg'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512bw'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512cd'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512dq'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512f'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512ifma'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512vbmi'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512vbmi2'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512vl'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512vnni'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='bus-lock-detect'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='fbsdp-no'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='fsrc'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='fsrs'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='fzrm'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='gfni'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='hle'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='ibrs-all'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='la57'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='mcdt-no'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='pbrsb-no'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='prefetchiti'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='psdp-no'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='rtm'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='sbdr-ssdp-no'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='serialize'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='taa-no'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='tsx-ldtrk'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='xfd'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      </blockers>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <blockers model='GraniteRapids-v1'>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='amx-bf16'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='amx-fp16'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='amx-int8'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='amx-tile'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx-vnni'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512-bf16'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512-fp16'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512-vpopcntdq'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512bitalg'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512bw'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512cd'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512dq'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512f'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512ifma'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512vbmi'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512vbmi2'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512vl'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512vnni'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='bus-lock-detect'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='fbsdp-no'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='fsrc'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='fsrs'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='fzrm'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='gfni'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='hle'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='ibrs-all'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='la57'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='mcdt-no'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='pbrsb-no'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='prefetchiti'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='psdp-no'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='rtm'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='sbdr-ssdp-no'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='serialize'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='taa-no'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='tsx-ldtrk'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='xfd'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      </blockers>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <blockers model='GraniteRapids-v2'>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='amx-bf16'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='amx-fp16'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='amx-int8'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='amx-tile'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx-vnni'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx10'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx10-128'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx10-256'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx10-512'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512-bf16'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512-fp16'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512-vpopcntdq'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512bitalg'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512bw'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512cd'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512dq'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512f'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512ifma'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512vbmi'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512vbmi2'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512vl'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512vnni'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='bus-lock-detect'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='cldemote'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='fbsdp-no'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='fsrc'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='fsrs'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='fzrm'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='gfni'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='hle'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='ibrs-all'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='la57'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='mcdt-no'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='movdir64b'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='movdiri'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='pbrsb-no'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='prefetchiti'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='psdp-no'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='rtm'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='sbdr-ssdp-no'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='serialize'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='ss'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='taa-no'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='tsx-ldtrk'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='xfd'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      </blockers>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <blockers model='Haswell'>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='hle'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='rtm'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      </blockers>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <blockers model='Haswell-IBRS'>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='hle'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='rtm'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      </blockers>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='yes' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='yes' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='no' vendor='Intel'>Haswell-v1</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <blockers model='Haswell-v1'>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='hle'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='rtm'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      </blockers>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='yes' vendor='Intel'>Haswell-v2</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='no' vendor='Intel'>Haswell-v3</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <blockers model='Haswell-v3'>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='hle'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='rtm'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      </blockers>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='yes' vendor='Intel'>Haswell-v4</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <blockers model='Icelake-Server'>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512-vpopcntdq'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512bitalg'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512bw'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512cd'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512dq'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512f'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512vbmi'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512vbmi2'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512vl'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512vnni'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='gfni'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='hle'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='la57'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='rtm'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      </blockers>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <blockers model='Icelake-Server-noTSX'>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512-vpopcntdq'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512bitalg'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512bw'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512cd'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512dq'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512f'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512vbmi'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512vbmi2'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512vl'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512vnni'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='gfni'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='la57'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      </blockers>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <blockers model='Icelake-Server-v1'>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512-vpopcntdq'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512bitalg'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512bw'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512cd'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512dq'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512f'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512vbmi'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512vbmi2'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512vl'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512vnni'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='gfni'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='hle'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='la57'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='rtm'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      </blockers>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <blockers model='Icelake-Server-v2'>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512-vpopcntdq'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512bitalg'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512bw'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512cd'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512dq'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512f'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512vbmi'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512vbmi2'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512vl'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512vnni'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='gfni'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='la57'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      </blockers>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <blockers model='Icelake-Server-v3'>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512-vpopcntdq'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512bitalg'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512bw'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512cd'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512dq'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512f'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512vbmi'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512vbmi2'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512vl'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512vnni'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='gfni'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='ibrs-all'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='la57'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='taa-no'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      </blockers>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <blockers model='Icelake-Server-v4'>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512-vpopcntdq'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512bitalg'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512bw'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512cd'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512dq'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512f'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512ifma'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512vbmi'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512vbmi2'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512vl'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512vnni'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='gfni'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='ibrs-all'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='la57'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='taa-no'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      </blockers>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <blockers model='Icelake-Server-v5'>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512-vpopcntdq'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512bitalg'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512bw'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512cd'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512dq'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512f'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512ifma'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512vbmi'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512vbmi2'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512vl'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512vnni'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='gfni'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='ibrs-all'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='la57'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='taa-no'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      </blockers>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <blockers model='Icelake-Server-v6'>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512-vpopcntdq'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512bitalg'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512bw'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512cd'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512dq'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512f'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512ifma'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512vbmi'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512vbmi2'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512vl'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512vnni'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='gfni'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='ibrs-all'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='la57'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='taa-no'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      </blockers>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <blockers model='Icelake-Server-v7'>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512-vpopcntdq'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512bitalg'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512bw'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512cd'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512dq'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512f'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512ifma'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512vbmi'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512vbmi2'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512vl'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512vnni'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='gfni'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='hle'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='ibrs-all'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='la57'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='rtm'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='taa-no'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      </blockers>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='yes' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='yes' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='yes' vendor='Intel'>IvyBridge-v1</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='yes' vendor='Intel'>IvyBridge-v2</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <blockers model='KnightsMill'>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512-4fmaps'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512-4vnniw'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512-vpopcntdq'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512cd'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512er'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512f'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512pf'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='ss'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      </blockers>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <blockers model='KnightsMill-v1'>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512-4fmaps'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512-4vnniw'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512-vpopcntdq'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512cd'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512er'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512f'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512pf'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='ss'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      </blockers>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <blockers model='Opteron_G4'>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='fma4'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='xop'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      </blockers>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <blockers model='Opteron_G4-v1'>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='fma4'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='xop'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      </blockers>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <blockers model='Opteron_G5'>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='fma4'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='tbm'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='xop'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      </blockers>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <blockers model='Opteron_G5-v1'>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='fma4'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='tbm'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='xop'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      </blockers>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <blockers model='SapphireRapids'>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='amx-bf16'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='amx-int8'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='amx-tile'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx-vnni'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512-bf16'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512-fp16'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512-vpopcntdq'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512bitalg'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512bw'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512cd'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512dq'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512f'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512ifma'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512vbmi'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512vbmi2'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512vl'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512vnni'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='bus-lock-detect'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='fsrc'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='fsrs'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='fzrm'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='gfni'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='hle'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='ibrs-all'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='la57'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='rtm'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='serialize'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='taa-no'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='tsx-ldtrk'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='xfd'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      </blockers>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <blockers model='SapphireRapids-v1'>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='amx-bf16'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='amx-int8'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='amx-tile'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx-vnni'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512-bf16'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512-fp16'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512-vpopcntdq'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512bitalg'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512bw'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512cd'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512dq'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512f'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512ifma'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512vbmi'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512vbmi2'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512vl'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512vnni'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='bus-lock-detect'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='fsrc'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='fsrs'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='fzrm'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='gfni'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='hle'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='ibrs-all'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='la57'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='rtm'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='serialize'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='taa-no'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='tsx-ldtrk'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='xfd'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      </blockers>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <blockers model='SapphireRapids-v2'>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='amx-bf16'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='amx-int8'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='amx-tile'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx-vnni'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512-bf16'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512-fp16'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512-vpopcntdq'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512bitalg'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512bw'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512cd'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512dq'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512f'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512ifma'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512vbmi'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512vbmi2'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512vl'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512vnni'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='bus-lock-detect'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='fbsdp-no'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='fsrc'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='fsrs'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='fzrm'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='gfni'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='hle'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='ibrs-all'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='la57'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='psdp-no'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='rtm'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='sbdr-ssdp-no'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='serialize'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='taa-no'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='tsx-ldtrk'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='xfd'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      </blockers>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <blockers model='SapphireRapids-v3'>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='amx-bf16'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='amx-int8'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='amx-tile'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx-vnni'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512-bf16'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512-fp16'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512-vpopcntdq'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512bitalg'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512bw'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512cd'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512dq'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512f'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512ifma'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512vbmi'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512vbmi2'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512vl'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512vnni'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='bus-lock-detect'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='cldemote'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='fbsdp-no'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='fsrc'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='fsrs'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='fzrm'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='gfni'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='hle'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='ibrs-all'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='la57'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='movdir64b'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='movdiri'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='psdp-no'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='rtm'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='sbdr-ssdp-no'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='serialize'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='ss'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='taa-no'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='tsx-ldtrk'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='xfd'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      </blockers>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <blockers model='SierraForest'>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx-ifma'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx-ne-convert'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx-vnni'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx-vnni-int8'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='bus-lock-detect'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='cmpccxadd'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='fbsdp-no'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='fsrs'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='gfni'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='ibrs-all'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='mcdt-no'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='pbrsb-no'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='psdp-no'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='sbdr-ssdp-no'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='serialize'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      </blockers>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='no' vendor='Intel'>SierraForest-v1</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <blockers model='SierraForest-v1'>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx-ifma'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx-ne-convert'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx-vnni'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx-vnni-int8'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='bus-lock-detect'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='cmpccxadd'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='fbsdp-no'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='fsrs'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='gfni'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='ibrs-all'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='mcdt-no'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='pbrsb-no'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='psdp-no'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='sbdr-ssdp-no'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='serialize'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      </blockers>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <blockers model='Skylake-Client'>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='hle'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='rtm'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      </blockers>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <blockers model='Skylake-Client-IBRS'>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='hle'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='rtm'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      </blockers>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='yes' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <blockers model='Skylake-Client-v1'>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='hle'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='rtm'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      </blockers>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <blockers model='Skylake-Client-v2'>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='hle'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='rtm'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      </blockers>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='yes' vendor='Intel'>Skylake-Client-v3</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='yes' vendor='Intel'>Skylake-Client-v4</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <blockers model='Skylake-Server'>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512bw'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512cd'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512dq'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512f'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512vl'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='hle'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='rtm'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      </blockers>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <blockers model='Skylake-Server-IBRS'>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512bw'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512cd'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512dq'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512f'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512vl'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='hle'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='rtm'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      </blockers>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <blockers model='Skylake-Server-noTSX-IBRS'>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512bw'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512cd'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512dq'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512f'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512vl'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      </blockers>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <blockers model='Skylake-Server-v1'>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512bw'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512cd'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512dq'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512f'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512vl'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='hle'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='rtm'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      </blockers>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <blockers model='Skylake-Server-v2'>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512bw'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512cd'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512dq'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512f'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512vl'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='hle'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='rtm'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      </blockers>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <blockers model='Skylake-Server-v3'>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512bw'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512cd'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512dq'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512f'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512vl'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      </blockers>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <blockers model='Skylake-Server-v4'>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512bw'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512cd'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512dq'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512f'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512vl'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      </blockers>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <blockers model='Skylake-Server-v5'>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512bw'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512cd'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512dq'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512f'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512vl'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      </blockers>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <blockers model='Snowridge'>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='cldemote'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='core-capability'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='gfni'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='movdir64b'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='movdiri'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='mpx'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='split-lock-detect'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      </blockers>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='no' vendor='Intel'>Snowridge-v1</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <blockers model='Snowridge-v1'>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='cldemote'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='core-capability'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='gfni'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='movdir64b'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='movdiri'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='mpx'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='split-lock-detect'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      </blockers>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='no' vendor='Intel'>Snowridge-v2</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <blockers model='Snowridge-v2'>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='cldemote'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='core-capability'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='gfni'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='movdir64b'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='movdiri'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='split-lock-detect'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      </blockers>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='no' vendor='Intel'>Snowridge-v3</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <blockers model='Snowridge-v3'>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='cldemote'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='core-capability'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='gfni'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='movdir64b'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='movdiri'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='split-lock-detect'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      </blockers>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='no' vendor='Intel'>Snowridge-v4</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <blockers model='Snowridge-v4'>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='cldemote'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='gfni'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='movdir64b'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='movdiri'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      </blockers>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='yes' vendor='Intel'>Westmere-v1</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='yes' vendor='Intel'>Westmere-v2</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <blockers model='athlon'>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='3dnow'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='3dnowext'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      </blockers>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <blockers model='athlon-v1'>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='3dnow'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='3dnowext'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      </blockers>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <blockers model='core2duo'>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='ss'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      </blockers>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <blockers model='core2duo-v1'>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='ss'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      </blockers>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <blockers model='coreduo'>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='ss'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      </blockers>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <blockers model='coreduo-v1'>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='ss'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      </blockers>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <blockers model='n270'>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='ss'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      </blockers>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <blockers model='n270-v1'>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='ss'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      </blockers>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <blockers model='phenom'>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='3dnow'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='3dnowext'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      </blockers>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <blockers model='phenom-v1'>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='3dnow'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='3dnowext'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      </blockers>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:    </mode>
Oct  9 09:50:46 compute-1 nova_compute[162974]:  </cpu>
Oct  9 09:50:46 compute-1 nova_compute[162974]:  <memoryBacking supported='yes'>
Oct  9 09:50:46 compute-1 nova_compute[162974]:    <enum name='sourceType'>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <value>file</value>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <value>anonymous</value>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <value>memfd</value>
Oct  9 09:50:46 compute-1 nova_compute[162974]:    </enum>
Oct  9 09:50:46 compute-1 nova_compute[162974]:  </memoryBacking>
Oct  9 09:50:46 compute-1 nova_compute[162974]:  <devices>
Oct  9 09:50:46 compute-1 nova_compute[162974]:    <disk supported='yes'>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <enum name='diskDevice'>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <value>disk</value>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <value>cdrom</value>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <value>floppy</value>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <value>lun</value>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      </enum>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <enum name='bus'>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <value>fdc</value>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <value>scsi</value>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <value>virtio</value>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <value>usb</value>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <value>sata</value>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      </enum>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <enum name='model'>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <value>virtio</value>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <value>virtio-transitional</value>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <value>virtio-non-transitional</value>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      </enum>
Oct  9 09:50:46 compute-1 nova_compute[162974]:    </disk>
Oct  9 09:50:46 compute-1 nova_compute[162974]:    <graphics supported='yes'>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <enum name='type'>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <value>vnc</value>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <value>egl-headless</value>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <value>dbus</value>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      </enum>
Oct  9 09:50:46 compute-1 nova_compute[162974]:    </graphics>
Oct  9 09:50:46 compute-1 nova_compute[162974]:    <video supported='yes'>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <enum name='modelType'>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <value>vga</value>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <value>cirrus</value>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <value>virtio</value>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <value>none</value>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <value>bochs</value>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <value>ramfb</value>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      </enum>
Oct  9 09:50:46 compute-1 nova_compute[162974]:    </video>
Oct  9 09:50:46 compute-1 nova_compute[162974]:    <hostdev supported='yes'>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <enum name='mode'>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <value>subsystem</value>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      </enum>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <enum name='startupPolicy'>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <value>default</value>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <value>mandatory</value>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <value>requisite</value>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <value>optional</value>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      </enum>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <enum name='subsysType'>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <value>usb</value>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <value>pci</value>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <value>scsi</value>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      </enum>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <enum name='capsType'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <enum name='pciBackend'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:    </hostdev>
Oct  9 09:50:46 compute-1 nova_compute[162974]:    <rng supported='yes'>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <enum name='model'>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <value>virtio</value>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <value>virtio-transitional</value>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <value>virtio-non-transitional</value>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      </enum>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <enum name='backendModel'>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <value>random</value>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <value>egd</value>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <value>builtin</value>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      </enum>
Oct  9 09:50:46 compute-1 nova_compute[162974]:    </rng>
Oct  9 09:50:46 compute-1 nova_compute[162974]:    <filesystem supported='yes'>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <enum name='driverType'>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <value>path</value>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <value>handle</value>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <value>virtiofs</value>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      </enum>
Oct  9 09:50:46 compute-1 nova_compute[162974]:    </filesystem>
Oct  9 09:50:46 compute-1 nova_compute[162974]:    <tpm supported='yes'>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <enum name='model'>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <value>tpm-tis</value>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <value>tpm-crb</value>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      </enum>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <enum name='backendModel'>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <value>emulator</value>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <value>external</value>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      </enum>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <enum name='backendVersion'>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <value>2.0</value>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      </enum>
Oct  9 09:50:46 compute-1 nova_compute[162974]:    </tpm>
Oct  9 09:50:46 compute-1 nova_compute[162974]:    <redirdev supported='yes'>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <enum name='bus'>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <value>usb</value>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      </enum>
Oct  9 09:50:46 compute-1 nova_compute[162974]:    </redirdev>
Oct  9 09:50:46 compute-1 nova_compute[162974]:    <channel supported='yes'>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <enum name='type'>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <value>pty</value>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <value>unix</value>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      </enum>
Oct  9 09:50:46 compute-1 nova_compute[162974]:    </channel>
Oct  9 09:50:46 compute-1 nova_compute[162974]:    <crypto supported='yes'>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <enum name='model'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <enum name='type'>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <value>qemu</value>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      </enum>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <enum name='backendModel'>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <value>builtin</value>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      </enum>
Oct  9 09:50:46 compute-1 nova_compute[162974]:    </crypto>
Oct  9 09:50:46 compute-1 nova_compute[162974]:    <interface supported='yes'>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <enum name='backendType'>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <value>default</value>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <value>passt</value>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      </enum>
Oct  9 09:50:46 compute-1 nova_compute[162974]:    </interface>
Oct  9 09:50:46 compute-1 nova_compute[162974]:    <panic supported='yes'>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <enum name='model'>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <value>isa</value>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <value>hyperv</value>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      </enum>
Oct  9 09:50:46 compute-1 nova_compute[162974]:    </panic>
Oct  9 09:50:46 compute-1 nova_compute[162974]:  </devices>
Oct  9 09:50:46 compute-1 nova_compute[162974]:  <features>
Oct  9 09:50:46 compute-1 nova_compute[162974]:    <gic supported='no'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:    <vmcoreinfo supported='yes'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:    <genid supported='yes'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:    <backingStoreInput supported='yes'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:    <backup supported='yes'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:    <async-teardown supported='yes'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:    <ps2 supported='yes'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:    <sev supported='no'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:    <sgx supported='no'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:    <hyperv supported='yes'>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <enum name='features'>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <value>relaxed</value>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <value>vapic</value>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <value>spinlocks</value>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <value>vpindex</value>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <value>runtime</value>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <value>synic</value>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <value>stimer</value>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <value>reset</value>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <value>vendor_id</value>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <value>frequencies</value>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <value>reenlightenment</value>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <value>tlbflush</value>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <value>ipi</value>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <value>avic</value>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <value>emsr_bitmap</value>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <value>xmm_input</value>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      </enum>
Oct  9 09:50:46 compute-1 nova_compute[162974]:    </hyperv>
Oct  9 09:50:46 compute-1 nova_compute[162974]:    <launchSecurity supported='no'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:  </features>
Oct  9 09:50:46 compute-1 nova_compute[162974]: </domainCapabilities>
Oct  9 09:50:46 compute-1 nova_compute[162974]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.804 2 DEBUG nova.virt.libvirt.host [None req-433ad811-a058-4508-a429-c5f40f506b5b - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=pc:
Oct  9 09:50:46 compute-1 nova_compute[162974]: <domainCapabilities>
Oct  9 09:50:46 compute-1 nova_compute[162974]:  <path>/usr/libexec/qemu-kvm</path>
Oct  9 09:50:46 compute-1 nova_compute[162974]:  <domain>kvm</domain>
Oct  9 09:50:46 compute-1 nova_compute[162974]:  <machine>pc-i440fx-rhel7.6.0</machine>
Oct  9 09:50:46 compute-1 nova_compute[162974]:  <arch>i686</arch>
Oct  9 09:50:46 compute-1 nova_compute[162974]:  <vcpu max='240'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:  <iothreads supported='yes'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:  <os supported='yes'>
Oct  9 09:50:46 compute-1 nova_compute[162974]:    <enum name='firmware'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:    <loader supported='yes'>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <enum name='type'>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <value>rom</value>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <value>pflash</value>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      </enum>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <enum name='readonly'>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <value>yes</value>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <value>no</value>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      </enum>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <enum name='secure'>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <value>no</value>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      </enum>
Oct  9 09:50:46 compute-1 nova_compute[162974]:    </loader>
Oct  9 09:50:46 compute-1 nova_compute[162974]:  </os>
Oct  9 09:50:46 compute-1 nova_compute[162974]:  <cpu>
Oct  9 09:50:46 compute-1 nova_compute[162974]:    <mode name='host-passthrough' supported='yes'>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <enum name='hostPassthroughMigratable'>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <value>on</value>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <value>off</value>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      </enum>
Oct  9 09:50:46 compute-1 nova_compute[162974]:    </mode>
Oct  9 09:50:46 compute-1 nova_compute[162974]:    <mode name='maximum' supported='yes'>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <enum name='maximumMigratable'>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <value>on</value>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <value>off</value>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      </enum>
Oct  9 09:50:46 compute-1 nova_compute[162974]:    </mode>
Oct  9 09:50:46 compute-1 nova_compute[162974]:    <mode name='host-model' supported='yes'>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model fallback='forbid'>EPYC-Milan</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <vendor>AMD</vendor>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <maxphysaddr mode='passthrough' limit='48'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <feature policy='require' name='x2apic'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <feature policy='require' name='tsc-deadline'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <feature policy='require' name='hypervisor'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <feature policy='require' name='tsc_adjust'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <feature policy='require' name='vaes'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <feature policy='require' name='vpclmulqdq'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <feature policy='require' name='spec-ctrl'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <feature policy='require' name='stibp'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <feature policy='require' name='arch-capabilities'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <feature policy='require' name='ssbd'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <feature policy='require' name='cmp_legacy'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <feature policy='require' name='overflow-recov'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <feature policy='require' name='succor'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <feature policy='require' name='virt-ssbd'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <feature policy='require' name='lbrv'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <feature policy='require' name='tsc-scale'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <feature policy='require' name='vmcb-clean'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <feature policy='require' name='flushbyasid'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <feature policy='require' name='pause-filter'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <feature policy='require' name='pfthreshold'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <feature policy='require' name='v-vmsave-vmload'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <feature policy='require' name='vgif'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <feature policy='require' name='lfence-always-serializing'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <feature policy='require' name='rdctl-no'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <feature policy='require' name='skip-l1dfl-vmentry'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <feature policy='require' name='mds-no'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <feature policy='require' name='pschange-mc-no'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <feature policy='require' name='gds-no'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <feature policy='require' name='rfds-no'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:    </mode>
Oct  9 09:50:46 compute-1 nova_compute[162974]:    <mode name='custom' supported='yes'>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <blockers model='Broadwell'>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='hle'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='rtm'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      </blockers>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <blockers model='Broadwell-IBRS'>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='hle'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='rtm'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      </blockers>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='yes' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='yes' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='no' vendor='Intel'>Broadwell-v1</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <blockers model='Broadwell-v1'>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='hle'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='rtm'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      </blockers>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='yes' vendor='Intel'>Broadwell-v2</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='no' vendor='Intel'>Broadwell-v3</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <blockers model='Broadwell-v3'>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='hle'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='rtm'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      </blockers>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='yes' vendor='Intel'>Broadwell-v4</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <blockers model='Cascadelake-Server'>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512bw'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512cd'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512dq'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512f'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512vl'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512vnni'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='hle'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='rtm'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      </blockers>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <blockers model='Cascadelake-Server-noTSX'>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512bw'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512cd'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512dq'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512f'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512vl'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512vnni'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='ibrs-all'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      </blockers>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <blockers model='Cascadelake-Server-v1'>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512bw'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512cd'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512dq'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512f'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512vl'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512vnni'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='hle'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='rtm'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      </blockers>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <blockers model='Cascadelake-Server-v2'>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512bw'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512cd'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512dq'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512f'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512vl'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512vnni'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='hle'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='ibrs-all'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='rtm'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      </blockers>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <blockers model='Cascadelake-Server-v3'>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512bw'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512cd'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512dq'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512f'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512vl'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512vnni'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='ibrs-all'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      </blockers>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <blockers model='Cascadelake-Server-v4'>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512bw'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512cd'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512dq'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512f'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512vl'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512vnni'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='ibrs-all'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      </blockers>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <blockers model='Cascadelake-Server-v5'>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512bw'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512cd'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512dq'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512f'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512vl'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512vnni'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='ibrs-all'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      </blockers>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <blockers model='Cooperlake'>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512-bf16'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512bw'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512cd'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512dq'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512f'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512vl'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512vnni'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='hle'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='ibrs-all'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='rtm'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='taa-no'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      </blockers>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <blockers model='Cooperlake-v1'>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512-bf16'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512bw'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512cd'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512dq'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512f'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512vl'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512vnni'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='hle'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='ibrs-all'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='rtm'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='taa-no'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      </blockers>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <blockers model='Cooperlake-v2'>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512-bf16'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512bw'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512cd'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512dq'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512f'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512vl'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512vnni'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='hle'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='ibrs-all'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='rtm'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='taa-no'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      </blockers>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <blockers model='Denverton'>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='mpx'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      </blockers>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='no' vendor='Intel'>Denverton-v1</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <blockers model='Denverton-v1'>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='mpx'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      </blockers>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='yes' vendor='Intel'>Denverton-v2</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='yes' vendor='Intel'>Denverton-v3</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='yes' vendor='Hygon'>Dhyana-v2</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <blockers model='EPYC-Genoa'>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='amd-psfd'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='auto-ibrs'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512-bf16'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512-vpopcntdq'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512bitalg'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512bw'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512cd'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512dq'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512f'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512ifma'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512vbmi'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512vbmi2'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512vl'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512vnni'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='gfni'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='la57'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='no-nested-data-bp'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='null-sel-clr-base'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='stibp-always-on'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      </blockers>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <blockers model='EPYC-Genoa-v1'>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='amd-psfd'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='auto-ibrs'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512-bf16'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512-vpopcntdq'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512bitalg'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512bw'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512cd'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512dq'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512f'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512ifma'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512vbmi'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512vbmi2'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512vl'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512vnni'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='gfni'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='la57'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='no-nested-data-bp'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='null-sel-clr-base'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='stibp-always-on'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      </blockers>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='yes' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='yes' vendor='AMD'>EPYC-Milan-v1</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <blockers model='EPYC-Milan-v2'>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='amd-psfd'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='no-nested-data-bp'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='null-sel-clr-base'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='stibp-always-on'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      </blockers>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='yes' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v1</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v2</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v3</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='yes' vendor='AMD'>EPYC-v1</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='yes' vendor='AMD'>EPYC-v2</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='yes' vendor='AMD'>EPYC-v3</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='yes' vendor='AMD'>EPYC-v4</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <blockers model='GraniteRapids'>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='amx-bf16'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='amx-fp16'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='amx-int8'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='amx-tile'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx-vnni'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512-bf16'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512-fp16'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512-vpopcntdq'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512bitalg'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512bw'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512cd'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512dq'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512f'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512ifma'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512vbmi'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512vbmi2'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512vl'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512vnni'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='bus-lock-detect'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='fbsdp-no'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='fsrc'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='fsrs'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='fzrm'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='gfni'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='hle'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='ibrs-all'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='la57'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='mcdt-no'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='pbrsb-no'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='prefetchiti'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='psdp-no'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='rtm'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='sbdr-ssdp-no'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='serialize'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='taa-no'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='tsx-ldtrk'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='xfd'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      </blockers>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <blockers model='GraniteRapids-v1'>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='amx-bf16'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='amx-fp16'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='amx-int8'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='amx-tile'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx-vnni'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512-bf16'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512-fp16'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512-vpopcntdq'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512bitalg'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512bw'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512cd'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512dq'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512f'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512ifma'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512vbmi'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512vbmi2'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512vl'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512vnni'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='bus-lock-detect'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='fbsdp-no'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='fsrc'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='fsrs'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='fzrm'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='gfni'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='hle'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='ibrs-all'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='la57'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='mcdt-no'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='pbrsb-no'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='prefetchiti'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='psdp-no'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='rtm'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='sbdr-ssdp-no'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='serialize'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='taa-no'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='tsx-ldtrk'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='xfd'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      </blockers>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <blockers model='GraniteRapids-v2'>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='amx-bf16'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='amx-fp16'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='amx-int8'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='amx-tile'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx-vnni'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx10'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx10-128'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx10-256'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx10-512'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512-bf16'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512-fp16'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512-vpopcntdq'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512bitalg'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512bw'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512cd'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512dq'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512f'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512ifma'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512vbmi'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512vbmi2'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512vl'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512vnni'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='bus-lock-detect'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='cldemote'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='fbsdp-no'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='fsrc'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='fsrs'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='fzrm'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='gfni'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='hle'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='ibrs-all'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='la57'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='mcdt-no'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='movdir64b'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='movdiri'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='pbrsb-no'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='prefetchiti'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='psdp-no'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='rtm'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='sbdr-ssdp-no'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='serialize'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='ss'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='taa-no'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='tsx-ldtrk'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='xfd'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      </blockers>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <blockers model='Haswell'>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='hle'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='rtm'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      </blockers>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <blockers model='Haswell-IBRS'>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='hle'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='rtm'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      </blockers>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='yes' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='yes' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='no' vendor='Intel'>Haswell-v1</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <blockers model='Haswell-v1'>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='hle'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='rtm'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      </blockers>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='yes' vendor='Intel'>Haswell-v2</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='no' vendor='Intel'>Haswell-v3</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <blockers model='Haswell-v3'>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='hle'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='rtm'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      </blockers>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='yes' vendor='Intel'>Haswell-v4</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <blockers model='Icelake-Server'>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512-vpopcntdq'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512bitalg'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512bw'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512cd'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512dq'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512f'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512vbmi'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512vbmi2'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512vl'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512vnni'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='gfni'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='hle'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='la57'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='rtm'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      </blockers>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <blockers model='Icelake-Server-noTSX'>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512-vpopcntdq'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512bitalg'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512bw'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512cd'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512dq'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512f'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512vbmi'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512vbmi2'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512vl'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512vnni'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='gfni'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='la57'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      </blockers>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <blockers model='Icelake-Server-v1'>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512-vpopcntdq'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512bitalg'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512bw'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512cd'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512dq'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512f'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512vbmi'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512vbmi2'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512vl'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512vnni'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='gfni'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='hle'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='la57'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='rtm'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      </blockers>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <blockers model='Icelake-Server-v2'>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512-vpopcntdq'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512bitalg'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512bw'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512cd'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512dq'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512f'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512vbmi'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512vbmi2'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512vl'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512vnni'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='gfni'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='la57'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      </blockers>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <blockers model='Icelake-Server-v3'>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512-vpopcntdq'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512bitalg'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512bw'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512cd'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512dq'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512f'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512vbmi'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512vbmi2'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512vl'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512vnni'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='gfni'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='ibrs-all'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='la57'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='taa-no'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      </blockers>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <blockers model='Icelake-Server-v4'>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512-vpopcntdq'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512bitalg'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512bw'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512cd'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512dq'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512f'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512ifma'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512vbmi'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512vbmi2'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512vl'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512vnni'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='gfni'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='ibrs-all'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='la57'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='taa-no'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      </blockers>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <blockers model='Icelake-Server-v5'>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512-vpopcntdq'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512bitalg'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512bw'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512cd'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512dq'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512f'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512ifma'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512vbmi'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512vbmi2'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512vl'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512vnni'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='gfni'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='ibrs-all'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='la57'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='taa-no'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      </blockers>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <blockers model='Icelake-Server-v6'>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512-vpopcntdq'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512bitalg'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512bw'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512cd'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512dq'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512f'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512ifma'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512vbmi'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512vbmi2'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512vl'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512vnni'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='gfni'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='ibrs-all'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='la57'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='taa-no'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      </blockers>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <blockers model='Icelake-Server-v7'>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512-vpopcntdq'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512bitalg'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512bw'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512cd'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512dq'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512f'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512ifma'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512vbmi'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512vbmi2'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512vl'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512vnni'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='gfni'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='hle'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='ibrs-all'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='la57'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='rtm'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='taa-no'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      </blockers>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='yes' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='yes' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='yes' vendor='Intel'>IvyBridge-v1</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='yes' vendor='Intel'>IvyBridge-v2</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <blockers model='KnightsMill'>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512-4fmaps'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512-4vnniw'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512-vpopcntdq'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512cd'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512er'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512f'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512pf'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='ss'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      </blockers>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <blockers model='KnightsMill-v1'>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512-4fmaps'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512-4vnniw'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512-vpopcntdq'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512cd'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512er'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512f'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512pf'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='ss'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      </blockers>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <blockers model='Opteron_G4'>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='fma4'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='xop'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      </blockers>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <blockers model='Opteron_G4-v1'>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='fma4'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='xop'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      </blockers>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <blockers model='Opteron_G5'>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='fma4'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='tbm'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='xop'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      </blockers>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <blockers model='Opteron_G5-v1'>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='fma4'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='tbm'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='xop'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      </blockers>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <blockers model='SapphireRapids'>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='amx-bf16'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='amx-int8'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='amx-tile'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx-vnni'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512-bf16'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512-fp16'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512-vpopcntdq'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512bitalg'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512bw'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512cd'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512dq'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512f'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512ifma'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512vbmi'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512vbmi2'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512vl'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512vnni'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='bus-lock-detect'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='fsrc'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='fsrs'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='fzrm'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='gfni'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='hle'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='ibrs-all'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='la57'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='rtm'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='serialize'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='taa-no'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='tsx-ldtrk'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='xfd'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      </blockers>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <blockers model='SapphireRapids-v1'>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='amx-bf16'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='amx-int8'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='amx-tile'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx-vnni'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512-bf16'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512-fp16'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512-vpopcntdq'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512bitalg'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512bw'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512cd'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512dq'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512f'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512ifma'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512vbmi'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512vbmi2'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512vl'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512vnni'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='bus-lock-detect'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='fsrc'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='fsrs'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='fzrm'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='gfni'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='hle'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='ibrs-all'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='la57'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='rtm'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='serialize'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='taa-no'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='tsx-ldtrk'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='xfd'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      </blockers>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <blockers model='SapphireRapids-v2'>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='amx-bf16'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='amx-int8'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='amx-tile'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx-vnni'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512-bf16'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512-fp16'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512-vpopcntdq'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512bitalg'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512bw'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512cd'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512dq'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512f'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512ifma'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512vbmi'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512vbmi2'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512vl'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512vnni'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='bus-lock-detect'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='fbsdp-no'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='fsrc'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='fsrs'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='fzrm'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='gfni'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='hle'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='ibrs-all'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='la57'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='psdp-no'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='rtm'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='sbdr-ssdp-no'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='serialize'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='taa-no'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='tsx-ldtrk'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='xfd'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      </blockers>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <blockers model='SapphireRapids-v3'>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='amx-bf16'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='amx-int8'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='amx-tile'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx-vnni'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512-bf16'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512-fp16'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512-vpopcntdq'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512bitalg'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512bw'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512cd'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512dq'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512f'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512ifma'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512vbmi'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512vbmi2'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512vl'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512vnni'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='bus-lock-detect'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='cldemote'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='fbsdp-no'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='fsrc'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='fsrs'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='fzrm'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='gfni'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='hle'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='ibrs-all'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='la57'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='movdir64b'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='movdiri'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='psdp-no'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='rtm'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='sbdr-ssdp-no'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='serialize'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='ss'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='taa-no'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='tsx-ldtrk'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='xfd'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      </blockers>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <blockers model='SierraForest'>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx-ifma'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx-ne-convert'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx-vnni'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx-vnni-int8'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='bus-lock-detect'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='cmpccxadd'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='fbsdp-no'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='fsrs'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='gfni'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='ibrs-all'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='mcdt-no'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='pbrsb-no'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='psdp-no'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='sbdr-ssdp-no'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='serialize'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      </blockers>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='no' vendor='Intel'>SierraForest-v1</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <blockers model='SierraForest-v1'>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx-ifma'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx-ne-convert'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx-vnni'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx-vnni-int8'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='bus-lock-detect'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='cmpccxadd'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='fbsdp-no'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='fsrs'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='gfni'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='ibrs-all'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='mcdt-no'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='pbrsb-no'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='psdp-no'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='sbdr-ssdp-no'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='serialize'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      </blockers>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <blockers model='Skylake-Client'>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='hle'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='rtm'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      </blockers>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <blockers model='Skylake-Client-IBRS'>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='hle'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='rtm'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      </blockers>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='yes' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <blockers model='Skylake-Client-v1'>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='hle'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='rtm'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      </blockers>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <blockers model='Skylake-Client-v2'>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='hle'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='rtm'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      </blockers>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='yes' vendor='Intel'>Skylake-Client-v3</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='yes' vendor='Intel'>Skylake-Client-v4</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <blockers model='Skylake-Server'>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512bw'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512cd'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512dq'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512f'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512vl'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='hle'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='rtm'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      </blockers>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <blockers model='Skylake-Server-IBRS'>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512bw'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512cd'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512dq'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512f'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512vl'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='hle'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='rtm'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      </blockers>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <blockers model='Skylake-Server-noTSX-IBRS'>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512bw'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512cd'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512dq'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512f'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512vl'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      </blockers>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <blockers model='Skylake-Server-v1'>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512bw'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512cd'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512dq'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512f'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512vl'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='hle'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='rtm'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      </blockers>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <blockers model='Skylake-Server-v2'>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512bw'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512cd'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512dq'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512f'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512vl'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='hle'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='rtm'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      </blockers>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <blockers model='Skylake-Server-v3'>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512bw'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512cd'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512dq'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512f'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512vl'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      </blockers>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <blockers model='Skylake-Server-v4'>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512bw'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512cd'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512dq'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512f'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512vl'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      </blockers>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <blockers model='Skylake-Server-v5'>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512bw'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512cd'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512dq'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512f'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512vl'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      </blockers>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <blockers model='Snowridge'>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='cldemote'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='core-capability'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='gfni'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='movdir64b'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='movdiri'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='mpx'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='split-lock-detect'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      </blockers>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='no' vendor='Intel'>Snowridge-v1</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <blockers model='Snowridge-v1'>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='cldemote'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='core-capability'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='gfni'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='movdir64b'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='movdiri'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='mpx'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='split-lock-detect'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      </blockers>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='no' vendor='Intel'>Snowridge-v2</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <blockers model='Snowridge-v2'>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='cldemote'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='core-capability'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='gfni'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='movdir64b'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='movdiri'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='split-lock-detect'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      </blockers>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='no' vendor='Intel'>Snowridge-v3</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <blockers model='Snowridge-v3'>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='cldemote'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='core-capability'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='gfni'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='movdir64b'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='movdiri'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='split-lock-detect'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      </blockers>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='no' vendor='Intel'>Snowridge-v4</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <blockers model='Snowridge-v4'>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='cldemote'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='gfni'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='movdir64b'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='movdiri'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      </blockers>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='yes' vendor='Intel'>Westmere-v1</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='yes' vendor='Intel'>Westmere-v2</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <blockers model='athlon'>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='3dnow'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='3dnowext'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      </blockers>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <blockers model='athlon-v1'>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='3dnow'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='3dnowext'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      </blockers>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <blockers model='core2duo'>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='ss'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      </blockers>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <blockers model='core2duo-v1'>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='ss'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      </blockers>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <blockers model='coreduo'>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='ss'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      </blockers>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <blockers model='coreduo-v1'>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='ss'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      </blockers>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <blockers model='n270'>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='ss'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      </blockers>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <blockers model='n270-v1'>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='ss'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      </blockers>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <blockers model='phenom'>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='3dnow'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='3dnowext'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      </blockers>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <blockers model='phenom-v1'>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='3dnow'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='3dnowext'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      </blockers>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:    </mode>
Oct  9 09:50:46 compute-1 nova_compute[162974]:  </cpu>
Oct  9 09:50:46 compute-1 nova_compute[162974]:  <memoryBacking supported='yes'>
Oct  9 09:50:46 compute-1 nova_compute[162974]:    <enum name='sourceType'>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <value>file</value>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <value>anonymous</value>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <value>memfd</value>
Oct  9 09:50:46 compute-1 nova_compute[162974]:    </enum>
Oct  9 09:50:46 compute-1 nova_compute[162974]:  </memoryBacking>
Oct  9 09:50:46 compute-1 nova_compute[162974]:  <devices>
Oct  9 09:50:46 compute-1 nova_compute[162974]:    <disk supported='yes'>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <enum name='diskDevice'>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <value>disk</value>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <value>cdrom</value>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <value>floppy</value>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <value>lun</value>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      </enum>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <enum name='bus'>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <value>ide</value>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <value>fdc</value>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <value>scsi</value>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <value>virtio</value>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <value>usb</value>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <value>sata</value>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      </enum>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <enum name='model'>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <value>virtio</value>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <value>virtio-transitional</value>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <value>virtio-non-transitional</value>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      </enum>
Oct  9 09:50:46 compute-1 nova_compute[162974]:    </disk>
Oct  9 09:50:46 compute-1 nova_compute[162974]:    <graphics supported='yes'>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <enum name='type'>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <value>vnc</value>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <value>egl-headless</value>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <value>dbus</value>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      </enum>
Oct  9 09:50:46 compute-1 nova_compute[162974]:    </graphics>
Oct  9 09:50:46 compute-1 nova_compute[162974]:    <video supported='yes'>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <enum name='modelType'>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <value>vga</value>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <value>cirrus</value>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <value>virtio</value>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <value>none</value>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <value>bochs</value>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <value>ramfb</value>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      </enum>
Oct  9 09:50:46 compute-1 nova_compute[162974]:    </video>
Oct  9 09:50:46 compute-1 nova_compute[162974]:    <hostdev supported='yes'>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <enum name='mode'>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <value>subsystem</value>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      </enum>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <enum name='startupPolicy'>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <value>default</value>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <value>mandatory</value>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <value>requisite</value>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <value>optional</value>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      </enum>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <enum name='subsysType'>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <value>usb</value>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <value>pci</value>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <value>scsi</value>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      </enum>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <enum name='capsType'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <enum name='pciBackend'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:    </hostdev>
Oct  9 09:50:46 compute-1 nova_compute[162974]:    <rng supported='yes'>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <enum name='model'>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <value>virtio</value>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <value>virtio-transitional</value>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <value>virtio-non-transitional</value>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      </enum>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <enum name='backendModel'>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <value>random</value>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <value>egd</value>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <value>builtin</value>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      </enum>
Oct  9 09:50:46 compute-1 nova_compute[162974]:    </rng>
Oct  9 09:50:46 compute-1 nova_compute[162974]:    <filesystem supported='yes'>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <enum name='driverType'>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <value>path</value>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <value>handle</value>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <value>virtiofs</value>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      </enum>
Oct  9 09:50:46 compute-1 nova_compute[162974]:    </filesystem>
Oct  9 09:50:46 compute-1 nova_compute[162974]:    <tpm supported='yes'>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <enum name='model'>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <value>tpm-tis</value>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <value>tpm-crb</value>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      </enum>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <enum name='backendModel'>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <value>emulator</value>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <value>external</value>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      </enum>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <enum name='backendVersion'>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <value>2.0</value>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      </enum>
Oct  9 09:50:46 compute-1 nova_compute[162974]:    </tpm>
Oct  9 09:50:46 compute-1 nova_compute[162974]:    <redirdev supported='yes'>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <enum name='bus'>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <value>usb</value>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      </enum>
Oct  9 09:50:46 compute-1 nova_compute[162974]:    </redirdev>
Oct  9 09:50:46 compute-1 nova_compute[162974]:    <channel supported='yes'>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <enum name='type'>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <value>pty</value>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <value>unix</value>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      </enum>
Oct  9 09:50:46 compute-1 nova_compute[162974]:    </channel>
Oct  9 09:50:46 compute-1 nova_compute[162974]:    <crypto supported='yes'>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <enum name='model'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <enum name='type'>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <value>qemu</value>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      </enum>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <enum name='backendModel'>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <value>builtin</value>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      </enum>
Oct  9 09:50:46 compute-1 nova_compute[162974]:    </crypto>
Oct  9 09:50:46 compute-1 nova_compute[162974]:    <interface supported='yes'>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <enum name='backendType'>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <value>default</value>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <value>passt</value>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      </enum>
Oct  9 09:50:46 compute-1 nova_compute[162974]:    </interface>
Oct  9 09:50:46 compute-1 nova_compute[162974]:    <panic supported='yes'>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <enum name='model'>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <value>isa</value>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <value>hyperv</value>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      </enum>
Oct  9 09:50:46 compute-1 nova_compute[162974]:    </panic>
Oct  9 09:50:46 compute-1 nova_compute[162974]:  </devices>
Oct  9 09:50:46 compute-1 nova_compute[162974]:  <features>
Oct  9 09:50:46 compute-1 nova_compute[162974]:    <gic supported='no'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:    <vmcoreinfo supported='yes'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:    <genid supported='yes'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:    <backingStoreInput supported='yes'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:    <backup supported='yes'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:    <async-teardown supported='yes'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:    <ps2 supported='yes'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:    <sev supported='no'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:    <sgx supported='no'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:    <hyperv supported='yes'>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <enum name='features'>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <value>relaxed</value>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <value>vapic</value>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <value>spinlocks</value>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <value>vpindex</value>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <value>runtime</value>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <value>synic</value>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <value>stimer</value>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <value>reset</value>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <value>vendor_id</value>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <value>frequencies</value>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <value>reenlightenment</value>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <value>tlbflush</value>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <value>ipi</value>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <value>avic</value>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <value>emsr_bitmap</value>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <value>xmm_input</value>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      </enum>
Oct  9 09:50:46 compute-1 nova_compute[162974]:    </hyperv>
Oct  9 09:50:46 compute-1 nova_compute[162974]:    <launchSecurity supported='no'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:  </features>
Oct  9 09:50:46 compute-1 nova_compute[162974]: </domainCapabilities>
Oct  9 09:50:46 compute-1 nova_compute[162974]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.817 2 DEBUG nova.virt.libvirt.host [None req-433ad811-a058-4508-a429-c5f40f506b5b - - - - - -] Getting domain capabilities for x86_64 via machine types: {'q35', 'pc'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.820 2 DEBUG nova.virt.libvirt.host [None req-433ad811-a058-4508-a429-c5f40f506b5b - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=q35:
Oct  9 09:50:46 compute-1 nova_compute[162974]: <domainCapabilities>
Oct  9 09:50:46 compute-1 nova_compute[162974]:  <path>/usr/libexec/qemu-kvm</path>
Oct  9 09:50:46 compute-1 nova_compute[162974]:  <domain>kvm</domain>
Oct  9 09:50:46 compute-1 nova_compute[162974]:  <machine>pc-q35-rhel9.6.0</machine>
Oct  9 09:50:46 compute-1 nova_compute[162974]:  <arch>x86_64</arch>
Oct  9 09:50:46 compute-1 nova_compute[162974]:  <vcpu max='4096'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:  <iothreads supported='yes'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:  <os supported='yes'>
Oct  9 09:50:46 compute-1 nova_compute[162974]:    <enum name='firmware'>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <value>efi</value>
Oct  9 09:50:46 compute-1 nova_compute[162974]:    </enum>
Oct  9 09:50:46 compute-1 nova_compute[162974]:    <loader supported='yes'>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <value>/usr/share/edk2/ovmf/OVMF_CODE.secboot.fd</value>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <value>/usr/share/edk2/ovmf/OVMF_CODE.fd</value>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <value>/usr/share/edk2/ovmf/OVMF.amdsev.fd</value>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <value>/usr/share/edk2/ovmf/OVMF.inteltdx.secboot.fd</value>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <enum name='type'>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <value>rom</value>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <value>pflash</value>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      </enum>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <enum name='readonly'>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <value>yes</value>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <value>no</value>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      </enum>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <enum name='secure'>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <value>yes</value>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <value>no</value>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      </enum>
Oct  9 09:50:46 compute-1 nova_compute[162974]:    </loader>
Oct  9 09:50:46 compute-1 nova_compute[162974]:  </os>
Oct  9 09:50:46 compute-1 nova_compute[162974]:  <cpu>
Oct  9 09:50:46 compute-1 nova_compute[162974]:    <mode name='host-passthrough' supported='yes'>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <enum name='hostPassthroughMigratable'>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <value>on</value>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <value>off</value>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      </enum>
Oct  9 09:50:46 compute-1 nova_compute[162974]:    </mode>
Oct  9 09:50:46 compute-1 nova_compute[162974]:    <mode name='maximum' supported='yes'>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <enum name='maximumMigratable'>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <value>on</value>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <value>off</value>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      </enum>
Oct  9 09:50:46 compute-1 nova_compute[162974]:    </mode>
Oct  9 09:50:46 compute-1 nova_compute[162974]:    <mode name='host-model' supported='yes'>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model fallback='forbid'>EPYC-Milan</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <vendor>AMD</vendor>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <maxphysaddr mode='passthrough' limit='48'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <feature policy='require' name='x2apic'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <feature policy='require' name='tsc-deadline'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <feature policy='require' name='hypervisor'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <feature policy='require' name='tsc_adjust'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <feature policy='require' name='vaes'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <feature policy='require' name='vpclmulqdq'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <feature policy='require' name='spec-ctrl'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <feature policy='require' name='stibp'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <feature policy='require' name='arch-capabilities'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <feature policy='require' name='ssbd'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <feature policy='require' name='cmp_legacy'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <feature policy='require' name='overflow-recov'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <feature policy='require' name='succor'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <feature policy='require' name='virt-ssbd'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <feature policy='require' name='lbrv'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <feature policy='require' name='tsc-scale'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <feature policy='require' name='vmcb-clean'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <feature policy='require' name='flushbyasid'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <feature policy='require' name='pause-filter'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <feature policy='require' name='pfthreshold'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <feature policy='require' name='v-vmsave-vmload'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <feature policy='require' name='vgif'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <feature policy='require' name='lfence-always-serializing'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <feature policy='require' name='rdctl-no'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <feature policy='require' name='skip-l1dfl-vmentry'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <feature policy='require' name='mds-no'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <feature policy='require' name='pschange-mc-no'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <feature policy='require' name='gds-no'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <feature policy='require' name='rfds-no'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:    </mode>
Oct  9 09:50:46 compute-1 nova_compute[162974]:    <mode name='custom' supported='yes'>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <blockers model='Broadwell'>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='hle'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='rtm'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      </blockers>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <blockers model='Broadwell-IBRS'>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='hle'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='rtm'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      </blockers>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='yes' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='yes' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='no' vendor='Intel'>Broadwell-v1</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <blockers model='Broadwell-v1'>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='hle'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='rtm'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      </blockers>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='yes' vendor='Intel'>Broadwell-v2</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='no' vendor='Intel'>Broadwell-v3</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <blockers model='Broadwell-v3'>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='hle'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='rtm'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      </blockers>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='yes' vendor='Intel'>Broadwell-v4</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <blockers model='Cascadelake-Server'>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512bw'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512cd'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512dq'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512f'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512vl'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512vnni'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='hle'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='rtm'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      </blockers>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <blockers model='Cascadelake-Server-noTSX'>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512bw'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512cd'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512dq'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512f'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512vl'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512vnni'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='ibrs-all'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      </blockers>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <blockers model='Cascadelake-Server-v1'>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512bw'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512cd'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512dq'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512f'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512vl'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512vnni'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='hle'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='rtm'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      </blockers>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <blockers model='Cascadelake-Server-v2'>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512bw'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512cd'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512dq'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512f'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512vl'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512vnni'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='hle'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='ibrs-all'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='rtm'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      </blockers>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <blockers model='Cascadelake-Server-v3'>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512bw'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512cd'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512dq'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512f'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512vl'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512vnni'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='ibrs-all'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      </blockers>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <blockers model='Cascadelake-Server-v4'>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512bw'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512cd'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512dq'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512f'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512vl'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512vnni'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='ibrs-all'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      </blockers>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <blockers model='Cascadelake-Server-v5'>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512bw'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512cd'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512dq'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512f'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512vl'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512vnni'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='ibrs-all'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      </blockers>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <blockers model='Cooperlake'>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512-bf16'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512bw'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512cd'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512dq'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512f'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512vl'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512vnni'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='hle'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='ibrs-all'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='rtm'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='taa-no'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      </blockers>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <blockers model='Cooperlake-v1'>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512-bf16'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512bw'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512cd'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512dq'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512f'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512vl'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512vnni'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='hle'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='ibrs-all'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='rtm'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='taa-no'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      </blockers>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <blockers model='Cooperlake-v2'>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512-bf16'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512bw'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512cd'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512dq'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512f'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512vl'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512vnni'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='hle'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='ibrs-all'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='rtm'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='taa-no'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      </blockers>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <blockers model='Denverton'>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='mpx'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      </blockers>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='no' vendor='Intel'>Denverton-v1</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <blockers model='Denverton-v1'>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='mpx'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      </blockers>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='yes' vendor='Intel'>Denverton-v2</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='yes' vendor='Intel'>Denverton-v3</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='yes' vendor='Hygon'>Dhyana-v2</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <blockers model='EPYC-Genoa'>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='amd-psfd'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='auto-ibrs'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512-bf16'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512-vpopcntdq'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512bitalg'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512bw'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512cd'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512dq'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512f'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512ifma'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512vbmi'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512vbmi2'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512vl'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512vnni'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='gfni'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='la57'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='no-nested-data-bp'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='null-sel-clr-base'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='stibp-always-on'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      </blockers>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <blockers model='EPYC-Genoa-v1'>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='amd-psfd'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='auto-ibrs'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512-bf16'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512-vpopcntdq'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512bitalg'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512bw'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512cd'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512dq'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512f'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512ifma'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512vbmi'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512vbmi2'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512vl'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512vnni'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='gfni'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='la57'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='no-nested-data-bp'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='null-sel-clr-base'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='stibp-always-on'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      </blockers>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='yes' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='yes' vendor='AMD'>EPYC-Milan-v1</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <blockers model='EPYC-Milan-v2'>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='amd-psfd'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='no-nested-data-bp'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='null-sel-clr-base'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='stibp-always-on'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      </blockers>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='yes' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v1</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v2</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v3</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='yes' vendor='AMD'>EPYC-v1</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='yes' vendor='AMD'>EPYC-v2</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='yes' vendor='AMD'>EPYC-v3</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='yes' vendor='AMD'>EPYC-v4</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <blockers model='GraniteRapids'>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='amx-bf16'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='amx-fp16'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='amx-int8'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='amx-tile'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx-vnni'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512-bf16'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512-fp16'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512-vpopcntdq'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512bitalg'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512bw'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512cd'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512dq'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512f'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512ifma'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512vbmi'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512vbmi2'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512vl'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512vnni'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='bus-lock-detect'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='fbsdp-no'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='fsrc'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='fsrs'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='fzrm'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='gfni'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='hle'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='ibrs-all'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='la57'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='mcdt-no'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='pbrsb-no'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='prefetchiti'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='psdp-no'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='rtm'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='sbdr-ssdp-no'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='serialize'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='taa-no'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='tsx-ldtrk'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='xfd'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      </blockers>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <blockers model='GraniteRapids-v1'>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='amx-bf16'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='amx-fp16'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='amx-int8'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='amx-tile'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx-vnni'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512-bf16'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512-fp16'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512-vpopcntdq'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512bitalg'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512bw'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512cd'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512dq'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512f'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512ifma'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512vbmi'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512vbmi2'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512vl'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512vnni'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='bus-lock-detect'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='fbsdp-no'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='fsrc'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='fsrs'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='fzrm'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='gfni'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='hle'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='ibrs-all'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='la57'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='mcdt-no'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='pbrsb-no'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='prefetchiti'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='psdp-no'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='rtm'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='sbdr-ssdp-no'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='serialize'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='taa-no'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='tsx-ldtrk'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='xfd'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      </blockers>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <blockers model='GraniteRapids-v2'>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='amx-bf16'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='amx-fp16'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='amx-int8'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='amx-tile'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx-vnni'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx10'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx10-128'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx10-256'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx10-512'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512-bf16'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512-fp16'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512-vpopcntdq'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512bitalg'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512bw'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512cd'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512dq'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512f'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512ifma'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512vbmi'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512vbmi2'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512vl'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512vnni'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='bus-lock-detect'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='cldemote'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='fbsdp-no'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='fsrc'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='fsrs'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='fzrm'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='gfni'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='hle'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='ibrs-all'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='la57'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='mcdt-no'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='movdir64b'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='movdiri'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='pbrsb-no'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='prefetchiti'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='psdp-no'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='rtm'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='sbdr-ssdp-no'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='serialize'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='ss'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='taa-no'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='tsx-ldtrk'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='xfd'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      </blockers>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <blockers model='Haswell'>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='hle'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='rtm'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      </blockers>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <blockers model='Haswell-IBRS'>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='hle'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='rtm'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      </blockers>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='yes' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='yes' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='no' vendor='Intel'>Haswell-v1</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <blockers model='Haswell-v1'>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='hle'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='rtm'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      </blockers>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='yes' vendor='Intel'>Haswell-v2</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='no' vendor='Intel'>Haswell-v3</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <blockers model='Haswell-v3'>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='hle'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='rtm'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      </blockers>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='yes' vendor='Intel'>Haswell-v4</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <blockers model='Icelake-Server'>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512-vpopcntdq'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512bitalg'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512bw'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512cd'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512dq'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512f'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512vbmi'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512vbmi2'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512vl'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512vnni'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='gfni'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='hle'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='la57'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='rtm'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      </blockers>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <blockers model='Icelake-Server-noTSX'>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512-vpopcntdq'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512bitalg'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512bw'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512cd'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512dq'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512f'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512vbmi'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512vbmi2'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512vl'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512vnni'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='gfni'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='la57'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      </blockers>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <blockers model='Icelake-Server-v1'>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512-vpopcntdq'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512bitalg'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512bw'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512cd'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512dq'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512f'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512vbmi'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512vbmi2'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512vl'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512vnni'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='gfni'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='hle'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='la57'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='rtm'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      </blockers>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <blockers model='Icelake-Server-v2'>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512-vpopcntdq'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512bitalg'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512bw'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512cd'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512dq'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512f'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512vbmi'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512vbmi2'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512vl'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512vnni'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='gfni'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='la57'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      </blockers>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <blockers model='Icelake-Server-v3'>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512-vpopcntdq'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512bitalg'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512bw'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512cd'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512dq'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512f'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512vbmi'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512vbmi2'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512vl'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512vnni'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='gfni'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='ibrs-all'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='la57'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='taa-no'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      </blockers>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <blockers model='Icelake-Server-v4'>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512-vpopcntdq'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512bitalg'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512bw'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512cd'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512dq'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512f'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512ifma'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512vbmi'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512vbmi2'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512vl'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512vnni'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='gfni'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='ibrs-all'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='la57'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='taa-no'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      </blockers>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <blockers model='Icelake-Server-v5'>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512-vpopcntdq'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512bitalg'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512bw'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512cd'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512dq'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512f'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512ifma'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512vbmi'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512vbmi2'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512vl'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512vnni'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='gfni'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='ibrs-all'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='la57'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='taa-no'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      </blockers>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <blockers model='Icelake-Server-v6'>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512-vpopcntdq'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512bitalg'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512bw'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512cd'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512dq'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512f'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512ifma'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512vbmi'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512vbmi2'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512vl'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512vnni'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='gfni'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='ibrs-all'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='la57'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='taa-no'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      </blockers>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <blockers model='Icelake-Server-v7'>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512-vpopcntdq'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512bitalg'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512bw'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512cd'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512dq'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512f'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512ifma'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512vbmi'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512vbmi2'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512vl'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512vnni'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='gfni'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='hle'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='ibrs-all'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='la57'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='rtm'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='taa-no'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      </blockers>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='yes' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='yes' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='yes' vendor='Intel'>IvyBridge-v1</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='yes' vendor='Intel'>IvyBridge-v2</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <blockers model='KnightsMill'>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512-4fmaps'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512-4vnniw'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512-vpopcntdq'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512cd'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512er'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512f'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512pf'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='ss'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      </blockers>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <blockers model='KnightsMill-v1'>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512-4fmaps'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512-4vnniw'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512-vpopcntdq'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512cd'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512er'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512f'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512pf'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='ss'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      </blockers>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <blockers model='Opteron_G4'>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='fma4'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='xop'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      </blockers>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <blockers model='Opteron_G4-v1'>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='fma4'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='xop'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      </blockers>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <blockers model='Opteron_G5'>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='fma4'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='tbm'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='xop'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      </blockers>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <blockers model='Opteron_G5-v1'>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='fma4'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='tbm'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='xop'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      </blockers>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <blockers model='SapphireRapids'>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='amx-bf16'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='amx-int8'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='amx-tile'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx-vnni'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512-bf16'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512-fp16'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512-vpopcntdq'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512bitalg'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512bw'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512cd'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512dq'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512f'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512ifma'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512vbmi'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512vbmi2'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512vl'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512vnni'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='bus-lock-detect'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='fsrc'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='fsrs'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='fzrm'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='gfni'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='hle'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='ibrs-all'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='la57'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='rtm'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='serialize'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='taa-no'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='tsx-ldtrk'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='xfd'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      </blockers>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <blockers model='SapphireRapids-v1'>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='amx-bf16'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='amx-int8'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='amx-tile'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx-vnni'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512-bf16'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512-fp16'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512-vpopcntdq'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512bitalg'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512bw'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512cd'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512dq'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512f'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512ifma'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512vbmi'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512vbmi2'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512vl'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512vnni'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='bus-lock-detect'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='fsrc'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='fsrs'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='fzrm'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='gfni'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='hle'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='ibrs-all'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='la57'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='rtm'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='serialize'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='taa-no'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='tsx-ldtrk'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='xfd'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      </blockers>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <blockers model='SapphireRapids-v2'>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='amx-bf16'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='amx-int8'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='amx-tile'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx-vnni'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512-bf16'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512-fp16'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512-vpopcntdq'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512bitalg'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512bw'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512cd'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512dq'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512f'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512ifma'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512vbmi'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512vbmi2'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512vl'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512vnni'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='bus-lock-detect'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='fbsdp-no'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='fsrc'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='fsrs'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='fzrm'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='gfni'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='hle'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='ibrs-all'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='la57'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='psdp-no'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='rtm'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='sbdr-ssdp-no'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='serialize'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='taa-no'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='tsx-ldtrk'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='xfd'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      </blockers>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <blockers model='SapphireRapids-v3'>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='amx-bf16'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='amx-int8'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='amx-tile'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx-vnni'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512-bf16'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512-fp16'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512-vpopcntdq'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512bitalg'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512bw'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512cd'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512dq'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512f'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512ifma'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512vbmi'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512vbmi2'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512vl'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512vnni'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='bus-lock-detect'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='cldemote'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='fbsdp-no'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='fsrc'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='fsrs'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='fzrm'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='gfni'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='hle'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='ibrs-all'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='la57'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='movdir64b'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='movdiri'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='psdp-no'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='rtm'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='sbdr-ssdp-no'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='serialize'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='ss'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='taa-no'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='tsx-ldtrk'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='xfd'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      </blockers>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <blockers model='SierraForest'>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx-ifma'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx-ne-convert'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx-vnni'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx-vnni-int8'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='bus-lock-detect'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='cmpccxadd'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='fbsdp-no'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='fsrs'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='gfni'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='ibrs-all'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='mcdt-no'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='pbrsb-no'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='psdp-no'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='sbdr-ssdp-no'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='serialize'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      </blockers>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='no' vendor='Intel'>SierraForest-v1</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <blockers model='SierraForest-v1'>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx-ifma'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx-ne-convert'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx-vnni'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx-vnni-int8'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='bus-lock-detect'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='cmpccxadd'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='fbsdp-no'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='fsrs'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='gfni'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='ibrs-all'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='mcdt-no'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='pbrsb-no'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='psdp-no'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='sbdr-ssdp-no'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='serialize'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      </blockers>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <blockers model='Skylake-Client'>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='hle'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='rtm'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      </blockers>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <blockers model='Skylake-Client-IBRS'>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='hle'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='rtm'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      </blockers>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='yes' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <blockers model='Skylake-Client-v1'>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='hle'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='rtm'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      </blockers>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <blockers model='Skylake-Client-v2'>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='hle'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='rtm'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      </blockers>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='yes' vendor='Intel'>Skylake-Client-v3</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='yes' vendor='Intel'>Skylake-Client-v4</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <blockers model='Skylake-Server'>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512bw'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512cd'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512dq'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512f'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512vl'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='hle'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='rtm'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      </blockers>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <blockers model='Skylake-Server-IBRS'>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512bw'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512cd'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512dq'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512f'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512vl'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='hle'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='rtm'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      </blockers>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <blockers model='Skylake-Server-noTSX-IBRS'>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512bw'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512cd'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512dq'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512f'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512vl'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      </blockers>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <blockers model='Skylake-Server-v1'>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512bw'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512cd'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512dq'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512f'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512vl'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='hle'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='rtm'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      </blockers>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <blockers model='Skylake-Server-v2'>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512bw'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512cd'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512dq'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512f'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512vl'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='hle'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='rtm'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      </blockers>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <blockers model='Skylake-Server-v3'>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512bw'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512cd'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512dq'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512f'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512vl'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      </blockers>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <blockers model='Skylake-Server-v4'>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512bw'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512cd'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512dq'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512f'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512vl'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      </blockers>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <blockers model='Skylake-Server-v5'>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512bw'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512cd'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512dq'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512f'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512vl'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      </blockers>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <blockers model='Snowridge'>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='cldemote'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='core-capability'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='gfni'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='movdir64b'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='movdiri'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='mpx'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='split-lock-detect'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      </blockers>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='no' vendor='Intel'>Snowridge-v1</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <blockers model='Snowridge-v1'>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='cldemote'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='core-capability'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='gfni'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='movdir64b'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='movdiri'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='mpx'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='split-lock-detect'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      </blockers>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='no' vendor='Intel'>Snowridge-v2</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <blockers model='Snowridge-v2'>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='cldemote'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='core-capability'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='gfni'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='movdir64b'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='movdiri'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='split-lock-detect'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      </blockers>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='no' vendor='Intel'>Snowridge-v3</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <blockers model='Snowridge-v3'>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='cldemote'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='core-capability'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='gfni'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='movdir64b'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='movdiri'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='split-lock-detect'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      </blockers>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='no' vendor='Intel'>Snowridge-v4</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <blockers model='Snowridge-v4'>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='cldemote'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='gfni'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='movdir64b'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='movdiri'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      </blockers>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='yes' vendor='Intel'>Westmere-v1</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='yes' vendor='Intel'>Westmere-v2</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <blockers model='athlon'>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='3dnow'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='3dnowext'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      </blockers>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <blockers model='athlon-v1'>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='3dnow'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='3dnowext'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      </blockers>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <blockers model='core2duo'>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='ss'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      </blockers>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <blockers model='core2duo-v1'>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='ss'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      </blockers>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <blockers model='coreduo'>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='ss'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      </blockers>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <blockers model='coreduo-v1'>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='ss'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      </blockers>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <blockers model='n270'>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='ss'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      </blockers>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <blockers model='n270-v1'>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='ss'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      </blockers>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <blockers model='phenom'>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='3dnow'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='3dnowext'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      </blockers>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <blockers model='phenom-v1'>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='3dnow'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='3dnowext'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      </blockers>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:    </mode>
Oct  9 09:50:46 compute-1 nova_compute[162974]:  </cpu>
Oct  9 09:50:46 compute-1 nova_compute[162974]:  <memoryBacking supported='yes'>
Oct  9 09:50:46 compute-1 nova_compute[162974]:    <enum name='sourceType'>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <value>file</value>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <value>anonymous</value>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <value>memfd</value>
Oct  9 09:50:46 compute-1 nova_compute[162974]:    </enum>
Oct  9 09:50:46 compute-1 nova_compute[162974]:  </memoryBacking>
Oct  9 09:50:46 compute-1 nova_compute[162974]:  <devices>
Oct  9 09:50:46 compute-1 nova_compute[162974]:    <disk supported='yes'>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <enum name='diskDevice'>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <value>disk</value>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <value>cdrom</value>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <value>floppy</value>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <value>lun</value>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      </enum>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <enum name='bus'>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <value>fdc</value>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <value>scsi</value>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <value>virtio</value>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <value>usb</value>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <value>sata</value>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      </enum>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <enum name='model'>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <value>virtio</value>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <value>virtio-transitional</value>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <value>virtio-non-transitional</value>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      </enum>
Oct  9 09:50:46 compute-1 nova_compute[162974]:    </disk>
Oct  9 09:50:46 compute-1 nova_compute[162974]:    <graphics supported='yes'>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <enum name='type'>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <value>vnc</value>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <value>egl-headless</value>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <value>dbus</value>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      </enum>
Oct  9 09:50:46 compute-1 nova_compute[162974]:    </graphics>
Oct  9 09:50:46 compute-1 nova_compute[162974]:    <video supported='yes'>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <enum name='modelType'>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <value>vga</value>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <value>cirrus</value>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <value>virtio</value>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <value>none</value>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <value>bochs</value>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <value>ramfb</value>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      </enum>
Oct  9 09:50:46 compute-1 nova_compute[162974]:    </video>
Oct  9 09:50:46 compute-1 nova_compute[162974]:    <hostdev supported='yes'>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <enum name='mode'>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <value>subsystem</value>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      </enum>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <enum name='startupPolicy'>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <value>default</value>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <value>mandatory</value>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <value>requisite</value>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <value>optional</value>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      </enum>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <enum name='subsysType'>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <value>usb</value>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <value>pci</value>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <value>scsi</value>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      </enum>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <enum name='capsType'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <enum name='pciBackend'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:    </hostdev>
Oct  9 09:50:46 compute-1 nova_compute[162974]:    <rng supported='yes'>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <enum name='model'>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <value>virtio</value>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <value>virtio-transitional</value>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <value>virtio-non-transitional</value>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      </enum>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <enum name='backendModel'>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <value>random</value>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <value>egd</value>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <value>builtin</value>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      </enum>
Oct  9 09:50:46 compute-1 nova_compute[162974]:    </rng>
Oct  9 09:50:46 compute-1 nova_compute[162974]:    <filesystem supported='yes'>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <enum name='driverType'>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <value>path</value>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <value>handle</value>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <value>virtiofs</value>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      </enum>
Oct  9 09:50:46 compute-1 nova_compute[162974]:    </filesystem>
Oct  9 09:50:46 compute-1 nova_compute[162974]:    <tpm supported='yes'>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <enum name='model'>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <value>tpm-tis</value>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <value>tpm-crb</value>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      </enum>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <enum name='backendModel'>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <value>emulator</value>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <value>external</value>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      </enum>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <enum name='backendVersion'>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <value>2.0</value>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      </enum>
Oct  9 09:50:46 compute-1 nova_compute[162974]:    </tpm>
Oct  9 09:50:46 compute-1 nova_compute[162974]:    <redirdev supported='yes'>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <enum name='bus'>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <value>usb</value>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      </enum>
Oct  9 09:50:46 compute-1 nova_compute[162974]:    </redirdev>
Oct  9 09:50:46 compute-1 nova_compute[162974]:    <channel supported='yes'>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <enum name='type'>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <value>pty</value>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <value>unix</value>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      </enum>
Oct  9 09:50:46 compute-1 nova_compute[162974]:    </channel>
Oct  9 09:50:46 compute-1 nova_compute[162974]:    <crypto supported='yes'>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <enum name='model'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <enum name='type'>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <value>qemu</value>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      </enum>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <enum name='backendModel'>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <value>builtin</value>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      </enum>
Oct  9 09:50:46 compute-1 nova_compute[162974]:    </crypto>
Oct  9 09:50:46 compute-1 nova_compute[162974]:    <interface supported='yes'>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <enum name='backendType'>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <value>default</value>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <value>passt</value>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      </enum>
Oct  9 09:50:46 compute-1 nova_compute[162974]:    </interface>
Oct  9 09:50:46 compute-1 nova_compute[162974]:    <panic supported='yes'>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <enum name='model'>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <value>isa</value>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <value>hyperv</value>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      </enum>
Oct  9 09:50:46 compute-1 nova_compute[162974]:    </panic>
Oct  9 09:50:46 compute-1 nova_compute[162974]:  </devices>
Oct  9 09:50:46 compute-1 nova_compute[162974]:  <features>
Oct  9 09:50:46 compute-1 nova_compute[162974]:    <gic supported='no'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:    <vmcoreinfo supported='yes'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:    <genid supported='yes'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:    <backingStoreInput supported='yes'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:    <backup supported='yes'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:    <async-teardown supported='yes'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:    <ps2 supported='yes'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:    <sev supported='no'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:    <sgx supported='no'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:    <hyperv supported='yes'>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <enum name='features'>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <value>relaxed</value>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <value>vapic</value>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <value>spinlocks</value>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <value>vpindex</value>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <value>runtime</value>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <value>synic</value>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <value>stimer</value>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <value>reset</value>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <value>vendor_id</value>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <value>frequencies</value>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <value>reenlightenment</value>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <value>tlbflush</value>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <value>ipi</value>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <value>avic</value>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <value>emsr_bitmap</value>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <value>xmm_input</value>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      </enum>
Oct  9 09:50:46 compute-1 nova_compute[162974]:    </hyperv>
Oct  9 09:50:46 compute-1 nova_compute[162974]:    <launchSecurity supported='no'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:  </features>
Oct  9 09:50:46 compute-1 nova_compute[162974]: </domainCapabilities>
Oct  9 09:50:46 compute-1 nova_compute[162974]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037#033[00m
Oct  9 09:50:46 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.866 2 DEBUG nova.virt.libvirt.host [None req-433ad811-a058-4508-a429-c5f40f506b5b - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=pc:
Oct  9 09:50:46 compute-1 nova_compute[162974]: <domainCapabilities>
Oct  9 09:50:46 compute-1 nova_compute[162974]:  <path>/usr/libexec/qemu-kvm</path>
Oct  9 09:50:46 compute-1 nova_compute[162974]:  <domain>kvm</domain>
Oct  9 09:50:46 compute-1 nova_compute[162974]:  <machine>pc-i440fx-rhel7.6.0</machine>
Oct  9 09:50:46 compute-1 nova_compute[162974]:  <arch>x86_64</arch>
Oct  9 09:50:46 compute-1 nova_compute[162974]:  <vcpu max='240'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:  <iothreads supported='yes'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:  <os supported='yes'>
Oct  9 09:50:46 compute-1 nova_compute[162974]:    <enum name='firmware'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:    <loader supported='yes'>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <enum name='type'>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <value>rom</value>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <value>pflash</value>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      </enum>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <enum name='readonly'>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <value>yes</value>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <value>no</value>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      </enum>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <enum name='secure'>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <value>no</value>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      </enum>
Oct  9 09:50:46 compute-1 nova_compute[162974]:    </loader>
Oct  9 09:50:46 compute-1 nova_compute[162974]:  </os>
Oct  9 09:50:46 compute-1 nova_compute[162974]:  <cpu>
Oct  9 09:50:46 compute-1 nova_compute[162974]:    <mode name='host-passthrough' supported='yes'>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <enum name='hostPassthroughMigratable'>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <value>on</value>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <value>off</value>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      </enum>
Oct  9 09:50:46 compute-1 nova_compute[162974]:    </mode>
Oct  9 09:50:46 compute-1 nova_compute[162974]:    <mode name='maximum' supported='yes'>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <enum name='maximumMigratable'>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <value>on</value>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <value>off</value>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      </enum>
Oct  9 09:50:46 compute-1 nova_compute[162974]:    </mode>
Oct  9 09:50:46 compute-1 nova_compute[162974]:    <mode name='host-model' supported='yes'>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model fallback='forbid'>EPYC-Milan</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <vendor>AMD</vendor>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <maxphysaddr mode='passthrough' limit='48'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <feature policy='require' name='x2apic'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <feature policy='require' name='tsc-deadline'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <feature policy='require' name='hypervisor'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <feature policy='require' name='tsc_adjust'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <feature policy='require' name='vaes'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <feature policy='require' name='vpclmulqdq'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <feature policy='require' name='spec-ctrl'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <feature policy='require' name='stibp'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <feature policy='require' name='arch-capabilities'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <feature policy='require' name='ssbd'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <feature policy='require' name='cmp_legacy'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <feature policy='require' name='overflow-recov'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <feature policy='require' name='succor'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <feature policy='require' name='virt-ssbd'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <feature policy='require' name='lbrv'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <feature policy='require' name='tsc-scale'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <feature policy='require' name='vmcb-clean'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <feature policy='require' name='flushbyasid'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <feature policy='require' name='pause-filter'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <feature policy='require' name='pfthreshold'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <feature policy='require' name='v-vmsave-vmload'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <feature policy='require' name='vgif'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <feature policy='require' name='lfence-always-serializing'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <feature policy='require' name='rdctl-no'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <feature policy='require' name='skip-l1dfl-vmentry'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <feature policy='require' name='mds-no'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <feature policy='require' name='pschange-mc-no'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <feature policy='require' name='gds-no'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <feature policy='require' name='rfds-no'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:    </mode>
Oct  9 09:50:46 compute-1 nova_compute[162974]:    <mode name='custom' supported='yes'>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <blockers model='Broadwell'>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='hle'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='rtm'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      </blockers>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <blockers model='Broadwell-IBRS'>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='hle'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='rtm'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      </blockers>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='yes' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='yes' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='no' vendor='Intel'>Broadwell-v1</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <blockers model='Broadwell-v1'>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='hle'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='rtm'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      </blockers>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='yes' vendor='Intel'>Broadwell-v2</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='no' vendor='Intel'>Broadwell-v3</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <blockers model='Broadwell-v3'>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='hle'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='rtm'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      </blockers>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='yes' vendor='Intel'>Broadwell-v4</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <blockers model='Cascadelake-Server'>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512bw'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512cd'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512dq'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512f'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512vl'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512vnni'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='hle'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='rtm'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      </blockers>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <blockers model='Cascadelake-Server-noTSX'>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512bw'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512cd'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512dq'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512f'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512vl'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512vnni'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='ibrs-all'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      </blockers>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <blockers model='Cascadelake-Server-v1'>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512bw'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512cd'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512dq'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512f'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512vl'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512vnni'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='hle'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='rtm'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      </blockers>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <blockers model='Cascadelake-Server-v2'>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512bw'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512cd'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512dq'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512f'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512vl'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512vnni'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='hle'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='ibrs-all'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='rtm'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      </blockers>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <blockers model='Cascadelake-Server-v3'>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512bw'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512cd'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512dq'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512f'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512vl'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512vnni'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='ibrs-all'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      </blockers>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <blockers model='Cascadelake-Server-v4'>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512bw'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512cd'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512dq'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512f'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512vl'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512vnni'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='ibrs-all'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      </blockers>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <blockers model='Cascadelake-Server-v5'>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512bw'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512cd'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512dq'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512f'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512vl'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512vnni'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='ibrs-all'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      </blockers>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <blockers model='Cooperlake'>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512-bf16'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512bw'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512cd'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512dq'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512f'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512vl'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512vnni'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='hle'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='ibrs-all'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='rtm'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='taa-no'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      </blockers>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <blockers model='Cooperlake-v1'>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512-bf16'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512bw'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512cd'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512dq'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512f'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512vl'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512vnni'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='hle'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='ibrs-all'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='rtm'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='taa-no'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      </blockers>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <blockers model='Cooperlake-v2'>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512-bf16'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512bw'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512cd'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512dq'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512f'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512vl'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512vnni'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='hle'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='ibrs-all'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='rtm'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='taa-no'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      </blockers>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <blockers model='Denverton'>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='mpx'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      </blockers>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='no' vendor='Intel'>Denverton-v1</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <blockers model='Denverton-v1'>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='mpx'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      </blockers>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='yes' vendor='Intel'>Denverton-v2</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='yes' vendor='Intel'>Denverton-v3</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='yes' vendor='Hygon'>Dhyana-v2</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <blockers model='EPYC-Genoa'>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='amd-psfd'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='auto-ibrs'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512-bf16'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512-vpopcntdq'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512bitalg'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512bw'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512cd'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512dq'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512f'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512ifma'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512vbmi'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512vbmi2'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512vl'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512vnni'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='gfni'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='la57'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='no-nested-data-bp'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='null-sel-clr-base'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='stibp-always-on'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      </blockers>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <blockers model='EPYC-Genoa-v1'>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='amd-psfd'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='auto-ibrs'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512-bf16'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512-vpopcntdq'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512bitalg'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512bw'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512cd'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512dq'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512f'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512ifma'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512vbmi'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512vbmi2'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512vl'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512vnni'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='gfni'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='la57'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='no-nested-data-bp'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='null-sel-clr-base'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='stibp-always-on'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      </blockers>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='yes' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='yes' vendor='AMD'>EPYC-Milan-v1</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <blockers model='EPYC-Milan-v2'>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='amd-psfd'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='no-nested-data-bp'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='null-sel-clr-base'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='stibp-always-on'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      </blockers>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='yes' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v1</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v2</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v3</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='yes' vendor='AMD'>EPYC-v1</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='yes' vendor='AMD'>EPYC-v2</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='yes' vendor='AMD'>EPYC-v3</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='yes' vendor='AMD'>EPYC-v4</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <blockers model='GraniteRapids'>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='amx-bf16'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='amx-fp16'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='amx-int8'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='amx-tile'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx-vnni'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512-bf16'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512-fp16'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512-vpopcntdq'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512bitalg'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512bw'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512cd'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512dq'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512f'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512ifma'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512vbmi'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512vbmi2'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512vl'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512vnni'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='bus-lock-detect'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='fbsdp-no'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='fsrc'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='fsrs'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='fzrm'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='gfni'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='hle'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='ibrs-all'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='la57'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='mcdt-no'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='pbrsb-no'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='prefetchiti'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='psdp-no'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='rtm'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='sbdr-ssdp-no'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='serialize'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='taa-no'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='tsx-ldtrk'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='xfd'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      </blockers>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <blockers model='GraniteRapids-v1'>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='amx-bf16'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='amx-fp16'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='amx-int8'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='amx-tile'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx-vnni'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512-bf16'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512-fp16'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512-vpopcntdq'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512bitalg'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512bw'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512cd'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512dq'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512f'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512ifma'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512vbmi'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512vbmi2'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512vl'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512vnni'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='bus-lock-detect'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='fbsdp-no'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='fsrc'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='fsrs'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='fzrm'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='gfni'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='hle'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='ibrs-all'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='la57'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='mcdt-no'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='pbrsb-no'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='prefetchiti'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='psdp-no'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='rtm'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='sbdr-ssdp-no'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='serialize'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='taa-no'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='tsx-ldtrk'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='xfd'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      </blockers>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <blockers model='GraniteRapids-v2'>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='amx-bf16'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='amx-fp16'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='amx-int8'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='amx-tile'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx-vnni'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx10'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx10-128'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx10-256'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx10-512'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512-bf16'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512-fp16'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512-vpopcntdq'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512bitalg'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512bw'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512cd'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512dq'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512f'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512ifma'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512vbmi'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512vbmi2'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512vl'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512vnni'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='bus-lock-detect'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='cldemote'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='fbsdp-no'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='fsrc'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='fsrs'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='fzrm'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='gfni'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='hle'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='ibrs-all'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='la57'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='mcdt-no'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='movdir64b'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='movdiri'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='pbrsb-no'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='prefetchiti'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='psdp-no'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='rtm'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='sbdr-ssdp-no'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='serialize'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='ss'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='taa-no'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='tsx-ldtrk'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='xfd'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      </blockers>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <blockers model='Haswell'>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='hle'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='rtm'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      </blockers>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <blockers model='Haswell-IBRS'>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='hle'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='rtm'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      </blockers>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='yes' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='yes' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='no' vendor='Intel'>Haswell-v1</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <blockers model='Haswell-v1'>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='hle'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='rtm'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      </blockers>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='yes' vendor='Intel'>Haswell-v2</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='no' vendor='Intel'>Haswell-v3</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <blockers model='Haswell-v3'>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='hle'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='rtm'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      </blockers>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='yes' vendor='Intel'>Haswell-v4</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <blockers model='Icelake-Server'>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512-vpopcntdq'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512bitalg'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512bw'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512cd'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512dq'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512f'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512vbmi'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512vbmi2'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512vl'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512vnni'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='gfni'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='hle'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='la57'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='rtm'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      </blockers>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <blockers model='Icelake-Server-noTSX'>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512-vpopcntdq'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512bitalg'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512bw'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512cd'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512dq'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512f'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512vbmi'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512vbmi2'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512vl'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512vnni'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='gfni'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='la57'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      </blockers>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <blockers model='Icelake-Server-v1'>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512-vpopcntdq'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512bitalg'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512bw'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512cd'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512dq'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512f'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512vbmi'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512vbmi2'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512vl'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512vnni'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='gfni'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='hle'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='la57'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='rtm'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      </blockers>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <blockers model='Icelake-Server-v2'>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512-vpopcntdq'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512bitalg'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512bw'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512cd'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512dq'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512f'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512vbmi'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512vbmi2'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512vl'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512vnni'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='gfni'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='la57'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      </blockers>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <blockers model='Icelake-Server-v3'>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512-vpopcntdq'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512bitalg'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512bw'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512cd'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512dq'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512f'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512vbmi'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512vbmi2'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512vl'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512vnni'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='gfni'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='ibrs-all'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='la57'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='taa-no'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      </blockers>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <blockers model='Icelake-Server-v4'>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512-vpopcntdq'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512bitalg'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512bw'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512cd'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512dq'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512f'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512ifma'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512vbmi'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512vbmi2'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512vl'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512vnni'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='gfni'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='ibrs-all'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='la57'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='taa-no'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      </blockers>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <blockers model='Icelake-Server-v5'>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512-vpopcntdq'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512bitalg'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512bw'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512cd'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512dq'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512f'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512ifma'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512vbmi'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512vbmi2'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512vl'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512vnni'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='gfni'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='ibrs-all'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='la57'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='taa-no'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      </blockers>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <blockers model='Icelake-Server-v6'>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512-vpopcntdq'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512bitalg'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512bw'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512cd'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512dq'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512f'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512ifma'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512vbmi'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512vbmi2'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512vl'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512vnni'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='gfni'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='ibrs-all'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='la57'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='taa-no'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      </blockers>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <blockers model='Icelake-Server-v7'>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512-vpopcntdq'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512bitalg'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512bw'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512cd'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512dq'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512f'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512ifma'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512vbmi'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512vbmi2'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512vl'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512vnni'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='gfni'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='hle'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='ibrs-all'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='la57'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='rtm'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='taa-no'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      </blockers>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='yes' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='yes' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='yes' vendor='Intel'>IvyBridge-v1</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='yes' vendor='Intel'>IvyBridge-v2</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <blockers model='KnightsMill'>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512-4fmaps'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512-4vnniw'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512-vpopcntdq'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512cd'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512er'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512f'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512pf'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='ss'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      </blockers>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <blockers model='KnightsMill-v1'>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512-4fmaps'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512-4vnniw'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512-vpopcntdq'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512cd'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512er'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512f'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512pf'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='ss'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      </blockers>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <blockers model='Opteron_G4'>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='fma4'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='xop'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      </blockers>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <blockers model='Opteron_G4-v1'>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='fma4'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='xop'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      </blockers>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <blockers model='Opteron_G5'>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='fma4'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='tbm'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='xop'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      </blockers>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <blockers model='Opteron_G5-v1'>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='fma4'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='tbm'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='xop'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      </blockers>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <blockers model='SapphireRapids'>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='amx-bf16'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='amx-int8'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='amx-tile'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx-vnni'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512-bf16'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512-fp16'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512-vpopcntdq'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512bitalg'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512bw'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512cd'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512dq'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512f'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512ifma'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512vbmi'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512vbmi2'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512vl'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512vnni'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='bus-lock-detect'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='fsrc'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='fsrs'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='fzrm'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='gfni'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='hle'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='ibrs-all'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='la57'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='rtm'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='serialize'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='taa-no'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='tsx-ldtrk'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='xfd'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      </blockers>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <blockers model='SapphireRapids-v1'>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='amx-bf16'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='amx-int8'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='amx-tile'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx-vnni'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512-bf16'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512-fp16'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512-vpopcntdq'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512bitalg'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512bw'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512cd'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512dq'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512f'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512ifma'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512vbmi'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512vbmi2'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512vl'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512vnni'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='bus-lock-detect'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='fsrc'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='fsrs'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='fzrm'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='gfni'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='hle'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='ibrs-all'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='la57'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='rtm'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='serialize'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='taa-no'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='tsx-ldtrk'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='xfd'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      </blockers>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <blockers model='SapphireRapids-v2'>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='amx-bf16'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='amx-int8'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='amx-tile'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx-vnni'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512-bf16'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512-fp16'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512-vpopcntdq'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512bitalg'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512bw'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512cd'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512dq'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512f'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512ifma'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512vbmi'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512vbmi2'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512vl'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512vnni'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='bus-lock-detect'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='fbsdp-no'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='fsrc'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='fsrs'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='fzrm'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='gfni'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='hle'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='ibrs-all'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='la57'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='psdp-no'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='rtm'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='sbdr-ssdp-no'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='serialize'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='taa-no'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='tsx-ldtrk'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='xfd'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      </blockers>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <blockers model='SapphireRapids-v3'>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='amx-bf16'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='amx-int8'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='amx-tile'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx-vnni'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512-bf16'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512-fp16'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512-vpopcntdq'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512bitalg'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512bw'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512cd'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512dq'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512f'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512ifma'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512vbmi'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512vbmi2'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512vl'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512vnni'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='bus-lock-detect'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='cldemote'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='fbsdp-no'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='fsrc'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='fsrs'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='fzrm'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='gfni'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='hle'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='ibrs-all'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='la57'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='movdir64b'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='movdiri'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='psdp-no'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='rtm'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='sbdr-ssdp-no'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='serialize'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='ss'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='taa-no'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='tsx-ldtrk'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='xfd'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      </blockers>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <blockers model='SierraForest'>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx-ifma'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx-ne-convert'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx-vnni'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx-vnni-int8'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='bus-lock-detect'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='cmpccxadd'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='fbsdp-no'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='fsrs'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='gfni'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='ibrs-all'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='mcdt-no'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='pbrsb-no'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='psdp-no'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='sbdr-ssdp-no'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='serialize'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      </blockers>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='no' vendor='Intel'>SierraForest-v1</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <blockers model='SierraForest-v1'>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx-ifma'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx-ne-convert'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx-vnni'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx-vnni-int8'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='bus-lock-detect'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='cmpccxadd'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='fbsdp-no'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='fsrs'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='gfni'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='ibrs-all'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='mcdt-no'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='pbrsb-no'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='psdp-no'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='sbdr-ssdp-no'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='serialize'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      </blockers>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <blockers model='Skylake-Client'>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='hle'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='rtm'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      </blockers>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <blockers model='Skylake-Client-IBRS'>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='hle'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='rtm'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      </blockers>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='yes' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <blockers model='Skylake-Client-v1'>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='hle'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='rtm'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      </blockers>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <blockers model='Skylake-Client-v2'>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='hle'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='rtm'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      </blockers>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='yes' vendor='Intel'>Skylake-Client-v3</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='yes' vendor='Intel'>Skylake-Client-v4</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <blockers model='Skylake-Server'>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512bw'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512cd'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512dq'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512f'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512vl'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='hle'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='rtm'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      </blockers>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <blockers model='Skylake-Server-IBRS'>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512bw'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512cd'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512dq'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512f'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512vl'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='hle'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='rtm'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      </blockers>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <blockers model='Skylake-Server-noTSX-IBRS'>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512bw'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512cd'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512dq'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512f'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512vl'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      </blockers>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <blockers model='Skylake-Server-v1'>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512bw'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512cd'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512dq'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512f'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512vl'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='hle'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='rtm'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      </blockers>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <blockers model='Skylake-Server-v2'>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512bw'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512cd'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512dq'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512f'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512vl'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='hle'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='rtm'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      </blockers>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <blockers model='Skylake-Server-v3'>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512bw'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512cd'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512dq'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512f'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512vl'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      </blockers>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <blockers model='Skylake-Server-v4'>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512bw'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512cd'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512dq'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512f'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512vl'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      </blockers>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <blockers model='Skylake-Server-v5'>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512bw'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512cd'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512dq'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512f'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='avx512vl'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      </blockers>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <blockers model='Snowridge'>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='cldemote'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='core-capability'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='gfni'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='movdir64b'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='movdiri'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='mpx'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='split-lock-detect'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      </blockers>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='no' vendor='Intel'>Snowridge-v1</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <blockers model='Snowridge-v1'>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='cldemote'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='core-capability'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='gfni'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='movdir64b'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='movdiri'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='mpx'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='split-lock-detect'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      </blockers>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='no' vendor='Intel'>Snowridge-v2</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <blockers model='Snowridge-v2'>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='cldemote'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='core-capability'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='gfni'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='movdir64b'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='movdiri'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='split-lock-detect'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      </blockers>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='no' vendor='Intel'>Snowridge-v3</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <blockers model='Snowridge-v3'>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='cldemote'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='core-capability'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='gfni'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='movdir64b'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='movdiri'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='split-lock-detect'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      </blockers>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='no' vendor='Intel'>Snowridge-v4</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <blockers model='Snowridge-v4'>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='cldemote'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='gfni'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='movdir64b'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='movdiri'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      </blockers>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='yes' vendor='Intel'>Westmere-v1</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='yes' vendor='Intel'>Westmere-v2</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <blockers model='athlon'>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='3dnow'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='3dnowext'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      </blockers>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <blockers model='athlon-v1'>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='3dnow'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='3dnowext'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      </blockers>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <blockers model='core2duo'>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='ss'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      </blockers>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <blockers model='core2duo-v1'>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='ss'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      </blockers>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <blockers model='coreduo'>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='ss'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      </blockers>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <blockers model='coreduo-v1'>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='ss'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      </blockers>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <blockers model='n270'>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='ss'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      </blockers>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <blockers model='n270-v1'>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='ss'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      </blockers>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <blockers model='phenom'>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='3dnow'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='3dnowext'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      </blockers>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <blockers model='phenom-v1'>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='3dnow'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <feature name='3dnowext'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      </blockers>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Oct  9 09:50:46 compute-1 nova_compute[162974]:    </mode>
Oct  9 09:50:46 compute-1 nova_compute[162974]:  </cpu>
Oct  9 09:50:46 compute-1 nova_compute[162974]:  <memoryBacking supported='yes'>
Oct  9 09:50:46 compute-1 nova_compute[162974]:    <enum name='sourceType'>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <value>file</value>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <value>anonymous</value>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <value>memfd</value>
Oct  9 09:50:46 compute-1 nova_compute[162974]:    </enum>
Oct  9 09:50:46 compute-1 nova_compute[162974]:  </memoryBacking>
Oct  9 09:50:46 compute-1 nova_compute[162974]:  <devices>
Oct  9 09:50:46 compute-1 nova_compute[162974]:    <disk supported='yes'>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <enum name='diskDevice'>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <value>disk</value>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <value>cdrom</value>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <value>floppy</value>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <value>lun</value>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      </enum>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <enum name='bus'>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <value>ide</value>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <value>fdc</value>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <value>scsi</value>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <value>virtio</value>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <value>usb</value>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <value>sata</value>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      </enum>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <enum name='model'>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <value>virtio</value>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <value>virtio-transitional</value>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <value>virtio-non-transitional</value>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      </enum>
Oct  9 09:50:46 compute-1 nova_compute[162974]:    </disk>
Oct  9 09:50:46 compute-1 nova_compute[162974]:    <graphics supported='yes'>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <enum name='type'>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <value>vnc</value>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <value>egl-headless</value>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <value>dbus</value>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      </enum>
Oct  9 09:50:46 compute-1 nova_compute[162974]:    </graphics>
Oct  9 09:50:46 compute-1 nova_compute[162974]:    <video supported='yes'>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <enum name='modelType'>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <value>vga</value>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <value>cirrus</value>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <value>virtio</value>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <value>none</value>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <value>bochs</value>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <value>ramfb</value>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      </enum>
Oct  9 09:50:46 compute-1 nova_compute[162974]:    </video>
Oct  9 09:50:46 compute-1 nova_compute[162974]:    <hostdev supported='yes'>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <enum name='mode'>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <value>subsystem</value>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      </enum>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <enum name='startupPolicy'>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <value>default</value>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <value>mandatory</value>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <value>requisite</value>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <value>optional</value>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      </enum>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <enum name='subsysType'>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <value>usb</value>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <value>pci</value>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <value>scsi</value>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      </enum>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <enum name='capsType'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <enum name='pciBackend'/>
Oct  9 09:50:46 compute-1 nova_compute[162974]:    </hostdev>
Oct  9 09:50:46 compute-1 nova_compute[162974]:    <rng supported='yes'>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <enum name='model'>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <value>virtio</value>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <value>virtio-transitional</value>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <value>virtio-non-transitional</value>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      </enum>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <enum name='backendModel'>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <value>random</value>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <value>egd</value>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <value>builtin</value>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      </enum>
Oct  9 09:50:46 compute-1 nova_compute[162974]:    </rng>
Oct  9 09:50:46 compute-1 nova_compute[162974]:    <filesystem supported='yes'>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <enum name='driverType'>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <value>path</value>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <value>handle</value>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <value>virtiofs</value>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      </enum>
Oct  9 09:50:46 compute-1 nova_compute[162974]:    </filesystem>
Oct  9 09:50:46 compute-1 nova_compute[162974]:    <tpm supported='yes'>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <enum name='model'>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <value>tpm-tis</value>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <value>tpm-crb</value>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      </enum>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <enum name='backendModel'>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <value>emulator</value>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <value>external</value>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      </enum>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <enum name='backendVersion'>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <value>2.0</value>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      </enum>
Oct  9 09:50:46 compute-1 nova_compute[162974]:    </tpm>
Oct  9 09:50:46 compute-1 nova_compute[162974]:    <redirdev supported='yes'>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <enum name='bus'>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <value>usb</value>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      </enum>
Oct  9 09:50:46 compute-1 nova_compute[162974]:    </redirdev>
Oct  9 09:50:46 compute-1 nova_compute[162974]:    <channel supported='yes'>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      <enum name='type'>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <value>pty</value>
Oct  9 09:50:46 compute-1 nova_compute[162974]:        <value>unix</value>
Oct  9 09:50:46 compute-1 nova_compute[162974]:      </enum>
Oct  9 09:50:47 compute-1 nova_compute[162974]:    </channel>
Oct  9 09:50:47 compute-1 nova_compute[162974]:    <crypto supported='yes'>
Oct  9 09:50:47 compute-1 nova_compute[162974]:      <enum name='model'/>
Oct  9 09:50:47 compute-1 nova_compute[162974]:      <enum name='type'>
Oct  9 09:50:47 compute-1 nova_compute[162974]:        <value>qemu</value>
Oct  9 09:50:47 compute-1 nova_compute[162974]:      </enum>
Oct  9 09:50:47 compute-1 nova_compute[162974]:      <enum name='backendModel'>
Oct  9 09:50:47 compute-1 nova_compute[162974]:        <value>builtin</value>
Oct  9 09:50:47 compute-1 nova_compute[162974]:      </enum>
Oct  9 09:50:47 compute-1 nova_compute[162974]:    </crypto>
Oct  9 09:50:47 compute-1 nova_compute[162974]:    <interface supported='yes'>
Oct  9 09:50:47 compute-1 nova_compute[162974]:      <enum name='backendType'>
Oct  9 09:50:47 compute-1 nova_compute[162974]:        <value>default</value>
Oct  9 09:50:47 compute-1 nova_compute[162974]:        <value>passt</value>
Oct  9 09:50:47 compute-1 nova_compute[162974]:      </enum>
Oct  9 09:50:47 compute-1 nova_compute[162974]:    </interface>
Oct  9 09:50:47 compute-1 nova_compute[162974]:    <panic supported='yes'>
Oct  9 09:50:47 compute-1 nova_compute[162974]:      <enum name='model'>
Oct  9 09:50:47 compute-1 nova_compute[162974]:        <value>isa</value>
Oct  9 09:50:47 compute-1 nova_compute[162974]:        <value>hyperv</value>
Oct  9 09:50:47 compute-1 nova_compute[162974]:      </enum>
Oct  9 09:50:47 compute-1 nova_compute[162974]:    </panic>
Oct  9 09:50:47 compute-1 nova_compute[162974]:  </devices>
Oct  9 09:50:47 compute-1 nova_compute[162974]:  <features>
Oct  9 09:50:47 compute-1 nova_compute[162974]:    <gic supported='no'/>
Oct  9 09:50:47 compute-1 nova_compute[162974]:    <vmcoreinfo supported='yes'/>
Oct  9 09:50:47 compute-1 nova_compute[162974]:    <genid supported='yes'/>
Oct  9 09:50:47 compute-1 nova_compute[162974]:    <backingStoreInput supported='yes'/>
Oct  9 09:50:47 compute-1 nova_compute[162974]:    <backup supported='yes'/>
Oct  9 09:50:47 compute-1 nova_compute[162974]:    <async-teardown supported='yes'/>
Oct  9 09:50:47 compute-1 nova_compute[162974]:    <ps2 supported='yes'/>
Oct  9 09:50:47 compute-1 nova_compute[162974]:    <sev supported='no'/>
Oct  9 09:50:47 compute-1 nova_compute[162974]:    <sgx supported='no'/>
Oct  9 09:50:47 compute-1 nova_compute[162974]:    <hyperv supported='yes'>
Oct  9 09:50:47 compute-1 nova_compute[162974]:      <enum name='features'>
Oct  9 09:50:47 compute-1 nova_compute[162974]:        <value>relaxed</value>
Oct  9 09:50:47 compute-1 nova_compute[162974]:        <value>vapic</value>
Oct  9 09:50:47 compute-1 nova_compute[162974]:        <value>spinlocks</value>
Oct  9 09:50:47 compute-1 nova_compute[162974]:        <value>vpindex</value>
Oct  9 09:50:47 compute-1 nova_compute[162974]:        <value>runtime</value>
Oct  9 09:50:47 compute-1 nova_compute[162974]:        <value>synic</value>
Oct  9 09:50:47 compute-1 nova_compute[162974]:        <value>stimer</value>
Oct  9 09:50:47 compute-1 nova_compute[162974]:        <value>reset</value>
Oct  9 09:50:47 compute-1 nova_compute[162974]:        <value>vendor_id</value>
Oct  9 09:50:47 compute-1 nova_compute[162974]:        <value>frequencies</value>
Oct  9 09:50:47 compute-1 nova_compute[162974]:        <value>reenlightenment</value>
Oct  9 09:50:47 compute-1 nova_compute[162974]:        <value>tlbflush</value>
Oct  9 09:50:47 compute-1 nova_compute[162974]:        <value>ipi</value>
Oct  9 09:50:47 compute-1 nova_compute[162974]:        <value>avic</value>
Oct  9 09:50:47 compute-1 nova_compute[162974]:        <value>emsr_bitmap</value>
Oct  9 09:50:47 compute-1 nova_compute[162974]:        <value>xmm_input</value>
Oct  9 09:50:47 compute-1 nova_compute[162974]:      </enum>
Oct  9 09:50:47 compute-1 nova_compute[162974]:    </hyperv>
Oct  9 09:50:47 compute-1 nova_compute[162974]:    <launchSecurity supported='no'/>
Oct  9 09:50:47 compute-1 nova_compute[162974]:  </features>
Oct  9 09:50:47 compute-1 nova_compute[162974]: </domainCapabilities>
Oct  9 09:50:47 compute-1 nova_compute[162974]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037#033[00m
Oct  9 09:50:47 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.916 2 DEBUG nova.virt.libvirt.host [None req-433ad811-a058-4508-a429-c5f40f506b5b - - - - - -] Checking secure boot support for host arch (x86_64) supports_secure_boot /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1782#033[00m
Oct  9 09:50:47 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.916 2 INFO nova.virt.libvirt.host [None req-433ad811-a058-4508-a429-c5f40f506b5b - - - - - -] Secure Boot support detected#033[00m
Oct  9 09:50:47 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.917 2 INFO nova.virt.libvirt.driver [None req-433ad811-a058-4508-a429-c5f40f506b5b - - - - - -] The live_migration_permit_post_copy is set to True and post copy live migration is available so auto-converge will not be in use.#033[00m
Oct  9 09:50:47 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.918 2 INFO nova.virt.libvirt.driver [None req-433ad811-a058-4508-a429-c5f40f506b5b - - - - - -] The live_migration_permit_post_copy is set to True and post copy live migration is available so auto-converge will not be in use.#033[00m
Oct  9 09:50:47 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.923 2 DEBUG nova.virt.libvirt.driver [None req-433ad811-a058-4508-a429-c5f40f506b5b - - - - - -] Enabling emulated TPM support _check_vtpm_support /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:1097#033[00m
Oct  9 09:50:47 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.953 2 INFO nova.virt.node [None req-433ad811-a058-4508-a429-c5f40f506b5b - - - - - -] Determined node identity 79aa81b0-5a5d-4643-a355-ec5461cb321a from /var/lib/nova/compute_id#033[00m
Oct  9 09:50:47 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.961 2 WARNING nova.compute.manager [None req-433ad811-a058-4508-a429-c5f40f506b5b - - - - - -] Compute nodes ['79aa81b0-5a5d-4643-a355-ec5461cb321a'] for host compute-1.ctlplane.example.com were not found in the database. If this is the first time this service is starting on this host, then you can ignore this warning.#033[00m
Oct  9 09:50:47 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.978 2 INFO nova.compute.manager [None req-433ad811-a058-4508-a429-c5f40f506b5b - - - - - -] Looking for unclaimed instances stuck in BUILDING status for nodes managed by this host#033[00m
Oct  9 09:50:47 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.988 2 WARNING nova.compute.manager [None req-433ad811-a058-4508-a429-c5f40f506b5b - - - - - -] No compute node record found for host compute-1.ctlplane.example.com. If this is the first time this service is starting on this host, then you can ignore this warning.: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host compute-1.ctlplane.example.com could not be found.#033[00m
Oct  9 09:50:47 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.988 2 DEBUG oslo_concurrency.lockutils [None req-433ad811-a058-4508-a429-c5f40f506b5b - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  9 09:50:47 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.988 2 DEBUG oslo_concurrency.lockutils [None req-433ad811-a058-4508-a429-c5f40f506b5b - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  9 09:50:47 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.988 2 DEBUG oslo_concurrency.lockutils [None req-433ad811-a058-4508-a429-c5f40f506b5b - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  9 09:50:47 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.988 2 DEBUG nova.compute.resource_tracker [None req-433ad811-a058-4508-a429-c5f40f506b5b - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  9 09:50:47 compute-1 nova_compute[162974]: 2025-10-09 09:50:46.989 2 DEBUG oslo_concurrency.processutils [None req-433ad811-a058-4508-a429-c5f40f506b5b - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  9 09:50:47 compute-1 ceph-mon[9795]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct  9 09:50:47 compute-1 ceph-mon[9795]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1952717457' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  9 09:50:47 compute-1 nova_compute[162974]: 2025-10-09 09:50:47.336 2 DEBUG oslo_concurrency.processutils [None req-433ad811-a058-4508-a429-c5f40f506b5b - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.348s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  9 09:50:47 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:50:47 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:50:47 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:50:47.403 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:50:47 compute-1 nova_compute[162974]: 2025-10-09 09:50:47.528 2 WARNING nova.virt.libvirt.driver [None req-433ad811-a058-4508-a429-c5f40f506b5b - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  9 09:50:47 compute-1 nova_compute[162974]: 2025-10-09 09:50:47.529 2 DEBUG nova.compute.resource_tracker [None req-433ad811-a058-4508-a429-c5f40f506b5b - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=5360MB free_disk=59.98828125GB free_vcpus=4 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_7", "address": "0000:00:02.7", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_02_01_0", "address": "0000:02:01.0", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_0", "address": "0000:00:1f.0", "product_id": "2918", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2918", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_1", "address": "0000:00:03.1", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_07_00_0", "address": "0000:07:00.0", "product_id": "1041", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1041", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_3", "address": "0000:00:1f.3", "product_id": "2930", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2930", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_2", "address": "0000:00:03.2", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_3", "address": "0000:00:03.3", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_3", "address": "0000:00:02.3", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "29c0", "vendor_id": "8086", "numa_node": null, "label": "label_8086_29c0", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_6", "address": "0000:00:02.6", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_4", "address": "0000:00:02.4", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_5", "address": "0000:00:03.5", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_2", "address": "0000:00:02.2", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_4", "address": "0000:00:03.4", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_05_00_0", "address": "0000:05:00.0", "product_id": "1045", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1045", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_06_00_0", "address": "0000:06:00.0", "product_id": "1044", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1044", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_03_00_0", "address": "0000:03:00.0", "product_id": "1041", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1041", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_6", "address": "0000:00:03.6", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_1", "address": "0000:00:02.1", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_7", "address": "0000:00:03.7", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_04_00_0", "address": "0000:04:00.0", "product_id": "1042", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1042", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_2", "address": "0000:00:1f.2", "product_id": "2922", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2922", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_01_00_0", "address": "0000:01:00.0", "product_id": "000e", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000e", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_5", "address": "0000:00:02.5", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  9 09:50:47 compute-1 nova_compute[162974]: 2025-10-09 09:50:47.529 2 DEBUG oslo_concurrency.lockutils [None req-433ad811-a058-4508-a429-c5f40f506b5b - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  9 09:50:47 compute-1 nova_compute[162974]: 2025-10-09 09:50:47.529 2 DEBUG oslo_concurrency.lockutils [None req-433ad811-a058-4508-a429-c5f40f506b5b - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  9 09:50:47 compute-1 nova_compute[162974]: 2025-10-09 09:50:47.556 2 WARNING nova.compute.resource_tracker [None req-433ad811-a058-4508-a429-c5f40f506b5b - - - - - -] No compute node record for compute-1.ctlplane.example.com:79aa81b0-5a5d-4643-a355-ec5461cb321a: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host 79aa81b0-5a5d-4643-a355-ec5461cb321a could not be found.#033[00m
Oct  9 09:50:47 compute-1 nova_compute[162974]: 2025-10-09 09:50:47.578 2 INFO nova.compute.resource_tracker [None req-433ad811-a058-4508-a429-c5f40f506b5b - - - - - -] Compute node record created for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com with uuid: 79aa81b0-5a5d-4643-a355-ec5461cb321a#033[00m
Oct  9 09:50:47 compute-1 nova_compute[162974]: 2025-10-09 09:50:47.664 2 DEBUG nova.compute.resource_tracker [None req-433ad811-a058-4508-a429-c5f40f506b5b - - - - - -] Total usable vcpus: 4, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  9 09:50:47 compute-1 nova_compute[162974]: 2025-10-09 09:50:47.664 2 DEBUG nova.compute.resource_tracker [None req-433ad811-a058-4508-a429-c5f40f506b5b - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=4 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  9 09:50:48 compute-1 nova_compute[162974]: 2025-10-09 09:50:48.134 2 INFO nova.scheduler.client.report [None req-433ad811-a058-4508-a429-c5f40f506b5b - - - - - -] [req-5dda1e5e-310a-42a6-a769-80397a654cd1] Created resource provider record via placement API for resource provider with UUID 79aa81b0-5a5d-4643-a355-ec5461cb321a and name compute-1.ctlplane.example.com.#033[00m
Oct  9 09:50:48 compute-1 nova_compute[162974]: 2025-10-09 09:50:48.186 2 DEBUG oslo_concurrency.processutils [None req-433ad811-a058-4508-a429-c5f40f506b5b - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  9 09:50:48 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:50:48 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:50:48 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:50:48.264 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:50:48 compute-1 ceph-mon[9795]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct  9 09:50:48 compute-1 ceph-mon[9795]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2233634120' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  9 09:50:48 compute-1 nova_compute[162974]: 2025-10-09 09:50:48.532 2 DEBUG oslo_concurrency.processutils [None req-433ad811-a058-4508-a429-c5f40f506b5b - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.346s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  9 09:50:48 compute-1 nova_compute[162974]: 2025-10-09 09:50:48.535 2 DEBUG nova.virt.libvirt.host [None req-433ad811-a058-4508-a429-c5f40f506b5b - - - - - -] /sys/module/kvm_amd/parameters/sev contains [N
Oct  9 09:50:48 compute-1 nova_compute[162974]: ] _kernel_supports_amd_sev /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1803#033[00m
Oct  9 09:50:48 compute-1 nova_compute[162974]: 2025-10-09 09:50:48.535 2 INFO nova.virt.libvirt.host [None req-433ad811-a058-4508-a429-c5f40f506b5b - - - - - -] kernel doesn't support AMD SEV#033[00m
Oct  9 09:50:48 compute-1 nova_compute[162974]: 2025-10-09 09:50:48.536 2 DEBUG nova.compute.provider_tree [None req-433ad811-a058-4508-a429-c5f40f506b5b - - - - - -] Updating inventory in ProviderTree for provider 79aa81b0-5a5d-4643-a355-ec5461cb321a with inventory: {'MEMORY_MB': {'total': 7680, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 4, 'min_unit': 1, 'max_unit': 4, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 59, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 0}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Oct  9 09:50:48 compute-1 nova_compute[162974]: 2025-10-09 09:50:48.537 2 DEBUG nova.virt.libvirt.driver [None req-433ad811-a058-4508-a429-c5f40f506b5b - - - - - -] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct  9 09:50:48 compute-1 nova_compute[162974]: 2025-10-09 09:50:48.574 2 DEBUG nova.scheduler.client.report [None req-433ad811-a058-4508-a429-c5f40f506b5b - - - - - -] Updated inventory for provider 79aa81b0-5a5d-4643-a355-ec5461cb321a with generation 0 in Placement from set_inventory_for_provider using data: {'MEMORY_MB': {'total': 7680, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 4, 'min_unit': 1, 'max_unit': 4, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 59, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:957#033[00m
Oct  9 09:50:48 compute-1 nova_compute[162974]: 2025-10-09 09:50:48.574 2 DEBUG nova.compute.provider_tree [None req-433ad811-a058-4508-a429-c5f40f506b5b - - - - - -] Updating resource provider 79aa81b0-5a5d-4643-a355-ec5461cb321a generation from 0 to 1 during operation: update_inventory _update_generation /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:164#033[00m
Oct  9 09:50:48 compute-1 nova_compute[162974]: 2025-10-09 09:50:48.574 2 DEBUG nova.compute.provider_tree [None req-433ad811-a058-4508-a429-c5f40f506b5b - - - - - -] Updating inventory in ProviderTree for provider 79aa81b0-5a5d-4643-a355-ec5461cb321a with inventory: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 4, 'reserved': 0, 'min_unit': 1, 'max_unit': 4, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Oct  9 09:50:48 compute-1 nova_compute[162974]: 2025-10-09 09:50:48.658 2 DEBUG nova.compute.provider_tree [None req-433ad811-a058-4508-a429-c5f40f506b5b - - - - - -] Updating resource provider 79aa81b0-5a5d-4643-a355-ec5461cb321a generation from 1 to 2 during operation: update_traits _update_generation /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:164#033[00m
Oct  9 09:50:48 compute-1 nova_compute[162974]: 2025-10-09 09:50:48.678 2 DEBUG nova.compute.resource_tracker [None req-433ad811-a058-4508-a429-c5f40f506b5b - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  9 09:50:48 compute-1 nova_compute[162974]: 2025-10-09 09:50:48.679 2 DEBUG oslo_concurrency.lockutils [None req-433ad811-a058-4508-a429-c5f40f506b5b - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.150s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  9 09:50:48 compute-1 nova_compute[162974]: 2025-10-09 09:50:48.679 2 DEBUG nova.service [None req-433ad811-a058-4508-a429-c5f40f506b5b - - - - - -] Creating RPC server for service compute start /usr/lib/python3.9/site-packages/nova/service.py:182#033[00m
Oct  9 09:50:48 compute-1 nova_compute[162974]: 2025-10-09 09:50:48.735 2 DEBUG nova.service [None req-433ad811-a058-4508-a429-c5f40f506b5b - - - - - -] Join ServiceGroup membership for this service compute start /usr/lib/python3.9/site-packages/nova/service.py:199#033[00m
Oct  9 09:50:48 compute-1 nova_compute[162974]: 2025-10-09 09:50:48.735 2 DEBUG nova.servicegroup.drivers.db [None req-433ad811-a058-4508-a429-c5f40f506b5b - - - - - -] DB_Driver: join new ServiceGroup member compute-1.ctlplane.example.com to the compute group, service = <Service: host=compute-1.ctlplane.example.com, binary=nova-compute, manager_class_name=nova.compute.manager.ComputeManager> join /usr/lib/python3.9/site-packages/nova/servicegroup/drivers/db.py:44#033[00m
Oct  9 09:50:49 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:50:49 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000011s ======
Oct  9 09:50:49 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:50:49.405 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct  9 09:50:50 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:50:50 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:50:50 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:50:50.266 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:50:50 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:50:51 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:50:51 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:50:51 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:50:51.408 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:50:52 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:50:52 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000011s ======
Oct  9 09:50:52 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:50:52.268 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct  9 09:50:53 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:50:53 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:50:53 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:50:53.411 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:50:54 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:50:54 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:50:54 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:50:54.271 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:50:54 compute-1 podman[163353]: 2025-10-09 09:50:54.549890436 +0000 UTC m=+0.056505193 container health_status 36bf385830b85e085dc3ff9729118ef141f0f1a0fdc2513f7947ab6ab795421e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001)
Oct  9 09:50:55 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:50:55 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:50:55 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:50:55.414 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:50:55 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:50:55 compute-1 systemd[1]: Stopping User Manager for UID 1000...
Oct  9 09:50:55 compute-1 systemd[1268]: Activating special unit Exit the Session...
Oct  9 09:50:55 compute-1 systemd[1268]: Removed slice User Background Tasks Slice.
Oct  9 09:50:55 compute-1 systemd[1268]: Stopped target Main User Target.
Oct  9 09:50:55 compute-1 systemd[1268]: Stopped target Basic System.
Oct  9 09:50:55 compute-1 systemd[1268]: Stopped target Paths.
Oct  9 09:50:55 compute-1 systemd[1268]: Stopped target Sockets.
Oct  9 09:50:55 compute-1 systemd[1268]: Stopped target Timers.
Oct  9 09:50:55 compute-1 systemd[1268]: Stopped Mark boot as successful after the user session has run 2 minutes.
Oct  9 09:50:55 compute-1 systemd[1268]: Stopped Daily Cleanup of User's Temporary Directories.
Oct  9 09:50:55 compute-1 systemd[1268]: Closed D-Bus User Message Bus Socket.
Oct  9 09:50:55 compute-1 systemd[1268]: Stopped Create User's Volatile Files and Directories.
Oct  9 09:50:55 compute-1 systemd[1268]: Removed slice User Application Slice.
Oct  9 09:50:55 compute-1 systemd[1268]: Reached target Shutdown.
Oct  9 09:50:55 compute-1 systemd[1268]: Finished Exit the Session.
Oct  9 09:50:55 compute-1 systemd[1268]: Reached target Exit the Session.
Oct  9 09:50:55 compute-1 systemd[1]: user@1000.service: Deactivated successfully.
Oct  9 09:50:55 compute-1 systemd[1]: Stopped User Manager for UID 1000.
Oct  9 09:50:55 compute-1 systemd[1]: Stopping User Runtime Directory /run/user/1000...
Oct  9 09:50:55 compute-1 systemd[1]: run-user-1000.mount: Deactivated successfully.
Oct  9 09:50:55 compute-1 systemd[1]: user-runtime-dir@1000.service: Deactivated successfully.
Oct  9 09:50:55 compute-1 systemd[1]: Stopped User Runtime Directory /run/user/1000.
Oct  9 09:50:55 compute-1 systemd[1]: Removed slice User Slice of UID 1000.
Oct  9 09:50:55 compute-1 systemd[1]: user-1000.slice: Consumed 8min 7.321s CPU time.
Oct  9 09:50:56 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:50:56 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:50:56 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:50:56.273 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:50:57 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:50:57 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:50:57 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:50:57.417 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:50:58 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:50:58 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:50:58 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:50:58.275 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:50:59 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:50:59 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:50:59 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:50:59.420 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:51:00 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:51:00 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:51:00 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:51:00.277 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:51:00 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:51:01 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:51:01 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:51:01 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:51:01.424 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:51:02 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:51:02 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:51:02 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:51:02.280 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:51:03 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:51:03 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:51:03 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:51:03.425 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:51:04 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:51:04 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:51:04 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:51:04.282 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:51:04 compute-1 podman[163383]: 2025-10-09 09:51:04.547317225 +0000 UTC m=+0.060192457 container health_status dd7a5da700b8ba72ceba21d69aecf00384a339e317089fce3f8212a1183599aa (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, config_id=iscsid, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  9 09:51:05 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:51:05 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:51:05 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:51:05.428 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:51:05 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:51:06 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:51:06 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:51:06 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:51:06.285 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:51:07 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:51:07 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000011s ======
Oct  9 09:51:07 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:51:07.431 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct  9 09:51:08 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:51:08 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000011s ======
Oct  9 09:51:08 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:51:08.288 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct  9 09:51:09 compute-1 ceph-mon[9795]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Oct  9 09:51:09 compute-1 ceph-mon[9795]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2195742608' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  9 09:51:09 compute-1 ceph-mon[9795]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Oct  9 09:51:09 compute-1 ceph-mon[9795]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2195742608' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  9 09:51:09 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:51:09 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:51:09 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:51:09.434 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:51:09 compute-1 ceph-mon[9795]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Oct  9 09:51:09 compute-1 ceph-mon[9795]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2714317801' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  9 09:51:09 compute-1 ceph-mon[9795]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Oct  9 09:51:09 compute-1 ceph-mon[9795]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2714317801' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  9 09:51:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:51:10.029 71059 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  9 09:51:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:51:10.029 71059 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  9 09:51:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:51:10.029 71059 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  9 09:51:10 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:51:10 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:51:10 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:51:10.291 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:51:10 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:51:11 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:51:11 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:51:11 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:51:11.437 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:51:12 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:51:12 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000011s ======
Oct  9 09:51:12 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:51:12.293 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct  9 09:51:12 compute-1 podman[163429]: 2025-10-09 09:51:12.525454031 +0000 UTC m=+0.038029732 container health_status 5e1150c93314d5b38815fc8130e03b03cc91771cd442ce724d63cd33a7373f75 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_id=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct  9 09:51:13 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:51:13 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000011s ======
Oct  9 09:51:13 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:51:13.439 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct  9 09:51:14 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:51:14 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:51:14 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:51:14.296 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:51:15 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:51:15 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000011s ======
Oct  9 09:51:15 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:51:15.442 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct  9 09:51:15 compute-1 podman[163447]: 2025-10-09 09:51:15.554756066 +0000 UTC m=+0.067290865 container health_status a9976f6a2312783f72012e1ec9b07f14297aa62297711f47c0d257996be3427a (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=multipathd, container_name=multipathd, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct  9 09:51:15 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:51:16 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:51:16 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:51:16 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:51:16.299 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:51:17 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:51:17 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:51:17 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:51:17.445 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:51:18 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:51:18 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:51:18 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:51:18.301 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:51:19 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:51:19 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:51:19 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:51:19.448 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:51:20 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:51:20 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:51:20 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:51:20.304 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:51:20 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:51:21 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:51:21 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000011s ======
Oct  9 09:51:21 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:51:21.450 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct  9 09:51:22 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:51:22 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000011s ======
Oct  9 09:51:22 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:51:22.306 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct  9 09:51:23 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:51:23 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000011s ======
Oct  9 09:51:23 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:51:23.453 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct  9 09:51:24 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:51:24 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000011s ======
Oct  9 09:51:24 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:51:24.310 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct  9 09:51:25 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:51:25 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:51:25 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:51:25.457 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:51:25 compute-1 podman[163470]: 2025-10-09 09:51:25.55226529 +0000 UTC m=+0.065315009 container health_status 36bf385830b85e085dc3ff9729118ef141f0f1a0fdc2513f7947ab6ab795421e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3)
Oct  9 09:51:25 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:51:26 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:51:26 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:51:26 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:51:26.313 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:51:27 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:51:27 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000011s ======
Oct  9 09:51:27 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:51:27.460 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct  9 09:51:28 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:51:28 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:51:28 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:51:28.316 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:51:28 compute-1 nova_compute[162974]: 2025-10-09 09:51:28.736 2 DEBUG oslo_service.periodic_task [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  9 09:51:28 compute-1 nova_compute[162974]: 2025-10-09 09:51:28.761 2 DEBUG oslo_service.periodic_task [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Running periodic task ComputeManager._cleanup_running_deleted_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  9 09:51:29 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:51:29 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:51:29 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:51:29.464 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:51:30 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:51:30 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:51:30 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:51:30.318 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:51:30 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:51:31 compute-1 ceph-mon[9795]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #31. Immutable memtables: 0.
Oct  9 09:51:31 compute-1 ceph-mon[9795]: rocksdb: (Original Log Time 2025/10/09-09:51:31.232192) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct  9 09:51:31 compute-1 ceph-mon[9795]: rocksdb: [db/flush_job.cc:856] [default] [JOB 15] Flushing memtable with next log file: 31
Oct  9 09:51:31 compute-1 ceph-mon[9795]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760003491232216, "job": 15, "event": "flush_started", "num_memtables": 1, "num_entries": 759, "num_deletes": 250, "total_data_size": 1519242, "memory_usage": 1544856, "flush_reason": "Manual Compaction"}
Oct  9 09:51:31 compute-1 ceph-mon[9795]: rocksdb: [db/flush_job.cc:885] [default] [JOB 15] Level-0 flush table #32: started
Oct  9 09:51:31 compute-1 ceph-mon[9795]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760003491234754, "cf_name": "default", "job": 15, "event": "table_file_creation", "file_number": 32, "file_size": 675072, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 17828, "largest_seqno": 18582, "table_properties": {"data_size": 671902, "index_size": 1014, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1093, "raw_key_size": 8252, "raw_average_key_size": 20, "raw_value_size": 665286, "raw_average_value_size": 1614, "num_data_blocks": 44, "num_entries": 412, "num_filter_entries": 412, "num_deletions": 250, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760003438, "oldest_key_time": 1760003438, "file_creation_time": 1760003491, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "94a5d839-0858-4e7b-94a4-0a54b15338db", "db_session_id": "M9CZJU0HKVV71NP1SGV8", "orig_file_number": 32, "seqno_to_time_mapping": "N/A"}}
Oct  9 09:51:31 compute-1 ceph-mon[9795]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 15] Flush lasted 2583 microseconds, and 1908 cpu microseconds.
Oct  9 09:51:31 compute-1 ceph-mon[9795]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  9 09:51:31 compute-1 ceph-mon[9795]: rocksdb: (Original Log Time 2025/10/09-09:51:31.234776) [db/flush_job.cc:967] [default] [JOB 15] Level-0 flush table #32: 675072 bytes OK
Oct  9 09:51:31 compute-1 ceph-mon[9795]: rocksdb: (Original Log Time 2025/10/09-09:51:31.234787) [db/memtable_list.cc:519] [default] Level-0 commit table #32 started
Oct  9 09:51:31 compute-1 ceph-mon[9795]: rocksdb: (Original Log Time 2025/10/09-09:51:31.235083) [db/memtable_list.cc:722] [default] Level-0 commit table #32: memtable #1 done
Oct  9 09:51:31 compute-1 ceph-mon[9795]: rocksdb: (Original Log Time 2025/10/09-09:51:31.235093) EVENT_LOG_v1 {"time_micros": 1760003491235090, "job": 15, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct  9 09:51:31 compute-1 ceph-mon[9795]: rocksdb: (Original Log Time 2025/10/09-09:51:31.235104) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct  9 09:51:31 compute-1 ceph-mon[9795]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 15] Try to delete WAL files size 1515203, prev total WAL file size 1515203, number of live WAL files 2.
Oct  9 09:51:31 compute-1 ceph-mon[9795]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000028.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  9 09:51:31 compute-1 ceph-mon[9795]: rocksdb: (Original Log Time 2025/10/09-09:51:31.235709) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D67727374617400323530' seq:72057594037927935, type:22 .. '6D67727374617400353031' seq:0, type:0; will stop at (end)
Oct  9 09:51:31 compute-1 ceph-mon[9795]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 16] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct  9 09:51:31 compute-1 ceph-mon[9795]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 15 Base level 0, inputs: [32(659KB)], [30(14MB)]
Oct  9 09:51:31 compute-1 ceph-mon[9795]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760003491235728, "job": 16, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [32], "files_L6": [30], "score": -1, "input_data_size": 15919404, "oldest_snapshot_seqno": -1}
Oct  9 09:51:31 compute-1 ceph-mon[9795]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 16] Generated table #33: 4915 keys, 12159040 bytes, temperature: kUnknown
Oct  9 09:51:31 compute-1 ceph-mon[9795]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760003491270875, "cf_name": "default", "job": 16, "event": "table_file_creation", "file_number": 33, "file_size": 12159040, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 12125407, "index_size": 20275, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 12293, "raw_key_size": 123535, "raw_average_key_size": 25, "raw_value_size": 12035280, "raw_average_value_size": 2448, "num_data_blocks": 847, "num_entries": 4915, "num_filter_entries": 4915, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760002515, "oldest_key_time": 0, "file_creation_time": 1760003491, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "94a5d839-0858-4e7b-94a4-0a54b15338db", "db_session_id": "M9CZJU0HKVV71NP1SGV8", "orig_file_number": 33, "seqno_to_time_mapping": "N/A"}}
Oct  9 09:51:31 compute-1 ceph-mon[9795]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  9 09:51:31 compute-1 ceph-mon[9795]: rocksdb: (Original Log Time 2025/10/09-09:51:31.271098) [db/compaction/compaction_job.cc:1663] [default] [JOB 16] Compacted 1@0 + 1@6 files to L6 => 12159040 bytes
Oct  9 09:51:31 compute-1 ceph-mon[9795]: rocksdb: (Original Log Time 2025/10/09-09:51:31.271491) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 452.3 rd, 345.4 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.6, 14.5 +0.0 blob) out(11.6 +0.0 blob), read-write-amplify(41.6) write-amplify(18.0) OK, records in: 5407, records dropped: 492 output_compression: NoCompression
Oct  9 09:51:31 compute-1 ceph-mon[9795]: rocksdb: (Original Log Time 2025/10/09-09:51:31.271509) EVENT_LOG_v1 {"time_micros": 1760003491271503, "job": 16, "event": "compaction_finished", "compaction_time_micros": 35199, "compaction_time_cpu_micros": 19138, "output_level": 6, "num_output_files": 1, "total_output_size": 12159040, "num_input_records": 5407, "num_output_records": 4915, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct  9 09:51:31 compute-1 ceph-mon[9795]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000032.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  9 09:51:31 compute-1 ceph-mon[9795]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760003491271710, "job": 16, "event": "table_file_deletion", "file_number": 32}
Oct  9 09:51:31 compute-1 ceph-mon[9795]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000030.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  9 09:51:31 compute-1 ceph-mon[9795]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760003491273405, "job": 16, "event": "table_file_deletion", "file_number": 30}
Oct  9 09:51:31 compute-1 ceph-mon[9795]: rocksdb: (Original Log Time 2025/10/09-09:51:31.235445) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  9 09:51:31 compute-1 ceph-mon[9795]: rocksdb: (Original Log Time 2025/10/09-09:51:31.273448) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  9 09:51:31 compute-1 ceph-mon[9795]: rocksdb: (Original Log Time 2025/10/09-09:51:31.273451) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  9 09:51:31 compute-1 ceph-mon[9795]: rocksdb: (Original Log Time 2025/10/09-09:51:31.273452) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  9 09:51:31 compute-1 ceph-mon[9795]: rocksdb: (Original Log Time 2025/10/09-09:51:31.273453) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  9 09:51:31 compute-1 ceph-mon[9795]: rocksdb: (Original Log Time 2025/10/09-09:51:31.273454) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  9 09:51:31 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:51:31 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000011s ======
Oct  9 09:51:31 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:51:31.466 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct  9 09:51:32 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:51:32 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:51:32 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:51:32.320 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:51:33 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:51:33 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:51:33 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:51:33.469 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:51:34 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:51:34 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:51:34 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:51:34.322 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:51:35 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:51:35 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000011s ======
Oct  9 09:51:35 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:51:35.471 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct  9 09:51:35 compute-1 podman[163523]: 2025-10-09 09:51:35.535285585 +0000 UTC m=+0.043745394 container health_status dd7a5da700b8ba72ceba21d69aecf00384a339e317089fce3f8212a1183599aa (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=iscsid, container_name=iscsid, managed_by=edpm_ansible, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct  9 09:51:35 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:51:36 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:51:36 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:51:36 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:51:36.325 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:51:37 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:51:37 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:51:37 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:51:37.474 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:51:38 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:51:38 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:51:38 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:51:38.328 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:51:39 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:51:39 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:51:39 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:51:39.478 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:51:40 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:51:40 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:51:40 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:51:40.330 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:51:40 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:51:41 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:51:41 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:51:41 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:51:41.481 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:51:42 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:51:42 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:51:42 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:51:42.332 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:51:43 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:51:43 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:51:43 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:51:43.483 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:51:43 compute-1 podman[163624]: 2025-10-09 09:51:43.533674413 +0000 UTC m=+0.042306615 container health_status 5e1150c93314d5b38815fc8130e03b03cc91771cd442ce724d63cd33a7373f75 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct  9 09:51:44 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:51:44 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:51:44 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:51:44.335 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:51:44 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:51:44 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:51:44 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  9 09:51:44 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:51:44 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:51:44 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  9 09:51:45 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:51:45 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:51:45 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:51:45.486 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:51:45 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:51:46 compute-1 nova_compute[162974]: 2025-10-09 09:51:46.115 2 DEBUG oslo_service.periodic_task [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  9 09:51:46 compute-1 nova_compute[162974]: 2025-10-09 09:51:46.115 2 DEBUG oslo_service.periodic_task [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  9 09:51:46 compute-1 nova_compute[162974]: 2025-10-09 09:51:46.115 2 DEBUG nova.compute.manager [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  9 09:51:46 compute-1 nova_compute[162974]: 2025-10-09 09:51:46.115 2 DEBUG nova.compute.manager [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct  9 09:51:46 compute-1 nova_compute[162974]: 2025-10-09 09:51:46.167 2 DEBUG nova.compute.manager [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Oct  9 09:51:46 compute-1 nova_compute[162974]: 2025-10-09 09:51:46.167 2 DEBUG oslo_service.periodic_task [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  9 09:51:46 compute-1 nova_compute[162974]: 2025-10-09 09:51:46.167 2 DEBUG oslo_service.periodic_task [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  9 09:51:46 compute-1 nova_compute[162974]: 2025-10-09 09:51:46.167 2 DEBUG oslo_service.periodic_task [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  9 09:51:46 compute-1 nova_compute[162974]: 2025-10-09 09:51:46.167 2 DEBUG oslo_service.periodic_task [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  9 09:51:46 compute-1 nova_compute[162974]: 2025-10-09 09:51:46.168 2 DEBUG oslo_service.periodic_task [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  9 09:51:46 compute-1 nova_compute[162974]: 2025-10-09 09:51:46.168 2 DEBUG oslo_service.periodic_task [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  9 09:51:46 compute-1 nova_compute[162974]: 2025-10-09 09:51:46.168 2 DEBUG nova.compute.manager [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  9 09:51:46 compute-1 nova_compute[162974]: 2025-10-09 09:51:46.168 2 DEBUG oslo_service.periodic_task [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  9 09:51:46 compute-1 nova_compute[162974]: 2025-10-09 09:51:46.210 2 DEBUG oslo_concurrency.lockutils [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  9 09:51:46 compute-1 nova_compute[162974]: 2025-10-09 09:51:46.210 2 DEBUG oslo_concurrency.lockutils [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  9 09:51:46 compute-1 nova_compute[162974]: 2025-10-09 09:51:46.210 2 DEBUG oslo_concurrency.lockutils [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  9 09:51:46 compute-1 nova_compute[162974]: 2025-10-09 09:51:46.210 2 DEBUG nova.compute.resource_tracker [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  9 09:51:46 compute-1 nova_compute[162974]: 2025-10-09 09:51:46.211 2 DEBUG oslo_concurrency.processutils [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  9 09:51:46 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:51:46 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 09:51:46 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:51:46.336 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 09:51:46 compute-1 ceph-mon[9795]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct  9 09:51:46 compute-1 ceph-mon[9795]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3617190942' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  9 09:51:46 compute-1 nova_compute[162974]: 2025-10-09 09:51:46.605 2 DEBUG oslo_concurrency.processutils [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.395s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  9 09:51:46 compute-1 podman[163661]: 2025-10-09 09:51:46.620351914 +0000 UTC m=+0.125135066 container health_status a9976f6a2312783f72012e1ec9b07f14297aa62297711f47c0d257996be3427a (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3)
Oct  9 09:51:46 compute-1 nova_compute[162974]: 2025-10-09 09:51:46.842 2 WARNING nova.virt.libvirt.driver [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  9 09:51:46 compute-1 nova_compute[162974]: 2025-10-09 09:51:46.844 2 DEBUG nova.compute.resource_tracker [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=5414MB free_disk=59.98828125GB free_vcpus=4 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_7", "address": "0000:00:02.7", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_02_01_0", "address": "0000:02:01.0", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_0", "address": "0000:00:1f.0", "product_id": "2918", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2918", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_1", "address": "0000:00:03.1", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_07_00_0", "address": "0000:07:00.0", "product_id": "1041", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1041", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_3", "address": "0000:00:1f.3", "product_id": "2930", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2930", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_2", "address": "0000:00:03.2", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_3", "address": "0000:00:03.3", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_3", "address": "0000:00:02.3", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "29c0", "vendor_id": "8086", "numa_node": null, "label": "label_8086_29c0", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_6", "address": "0000:00:02.6", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_4", "address": "0000:00:02.4", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_5", "address": "0000:00:03.5", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_2", "address": "0000:00:02.2", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_4", "address": "0000:00:03.4", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_05_00_0", "address": "0000:05:00.0", "product_id": "1045", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1045", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_06_00_0", "address": "0000:06:00.0", "product_id": "1044", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1044", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_03_00_0", "address": "0000:03:00.0", "product_id": "1041", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1041", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_6", "address": "0000:00:03.6", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_1", "address": "0000:00:02.1", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_7", "address": "0000:00:03.7", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_04_00_0", "address": "0000:04:00.0", "product_id": "1042", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1042", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_2", "address": "0000:00:1f.2", "product_id": "2922", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2922", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_01_00_0", "address": "0000:01:00.0", "product_id": "000e", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000e", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_5", "address": "0000:00:02.5", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  9 09:51:46 compute-1 nova_compute[162974]: 2025-10-09 09:51:46.844 2 DEBUG oslo_concurrency.lockutils [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  9 09:51:46 compute-1 nova_compute[162974]: 2025-10-09 09:51:46.845 2 DEBUG oslo_concurrency.lockutils [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  9 09:51:47 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:51:47 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:51:47 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:51:47.489 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:51:47 compute-1 nova_compute[162974]: 2025-10-09 09:51:47.586 2 DEBUG nova.compute.resource_tracker [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Total usable vcpus: 4, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  9 09:51:47 compute-1 nova_compute[162974]: 2025-10-09 09:51:47.586 2 DEBUG nova.compute.resource_tracker [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=4 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  9 09:51:47 compute-1 nova_compute[162974]: 2025-10-09 09:51:47.612 2 DEBUG oslo_concurrency.processutils [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  9 09:51:47 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:51:47 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:51:47 compute-1 ceph-mon[9795]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct  9 09:51:47 compute-1 ceph-mon[9795]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2037733150' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  9 09:51:47 compute-1 nova_compute[162974]: 2025-10-09 09:51:47.955 2 DEBUG oslo_concurrency.processutils [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.344s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  9 09:51:47 compute-1 nova_compute[162974]: 2025-10-09 09:51:47.961 2 DEBUG nova.compute.provider_tree [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Inventory has not changed in ProviderTree for provider: 79aa81b0-5a5d-4643-a355-ec5461cb321a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  9 09:51:47 compute-1 nova_compute[162974]: 2025-10-09 09:51:47.984 2 DEBUG nova.scheduler.client.report [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Inventory has not changed for provider 79aa81b0-5a5d-4643-a355-ec5461cb321a based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 4, 'reserved': 0, 'min_unit': 1, 'max_unit': 4, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  9 09:51:47 compute-1 nova_compute[162974]: 2025-10-09 09:51:47.985 2 DEBUG nova.compute.resource_tracker [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  9 09:51:47 compute-1 nova_compute[162974]: 2025-10-09 09:51:47.985 2 DEBUG oslo_concurrency.lockutils [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.140s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  9 09:51:48 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:51:48 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:51:48 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:51:48.339 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:51:49 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:51:49 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:51:49 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:51:49.492 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:51:50 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:51:50 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:51:50 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:51:50.342 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:51:50 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:51:51 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:51:51 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000009s ======
Oct  9 09:51:51 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:51:51.494 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Oct  9 09:51:52 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:51:52 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:51:52 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:51:52.344 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:51:53 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:51:53 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:51:53 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:51:53.498 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:51:54 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:51:54 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 09:51:54 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:51:54.346 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 09:51:55 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:51:55 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:51:55 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:51:55.501 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:51:55 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:51:56 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:51:56 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:51:56 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:51:56.350 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:51:56 compute-1 podman[163757]: 2025-10-09 09:51:56.57813262 +0000 UTC m=+0.088025967 container health_status 36bf385830b85e085dc3ff9729118ef141f0f1a0fdc2513f7947ab6ab795421e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Oct  9 09:51:57 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:51:57 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:51:57 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:51:57.504 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:51:58 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:51:58 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:51:58 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:51:58.353 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:51:59 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:51:59 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:51:59 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:51:59.507 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:52:00 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:52:00 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:52:00 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:52:00.355 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:52:00 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:52:01 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:52:01 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 09:52:01 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:52:01.509 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 09:52:02 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:52:02 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:52:02 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:52:02.358 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:52:03 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:52:03 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:52:03 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:52:03.514 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:52:04 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:52:04 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 09:52:04 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:52:04.361 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 09:52:05 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:52:05 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:52:05 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:52:05.516 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:52:05 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:52:06 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:52:06 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:52:06 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:52:06.364 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:52:06 compute-1 podman[163785]: 2025-10-09 09:52:06.554499124 +0000 UTC m=+0.066832657 container health_status dd7a5da700b8ba72ceba21d69aecf00384a339e317089fce3f8212a1183599aa (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Oct  9 09:52:07 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:52:07 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:52:07 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:52:07.518 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:52:08 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:52:08 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000009s ======
Oct  9 09:52:08 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:52:08.367 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Oct  9 09:52:09 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:52:09 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000009s ======
Oct  9 09:52:09 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:52:09.521 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Oct  9 09:52:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:52:10.029 71059 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  9 09:52:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:52:10.029 71059 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  9 09:52:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:52:10.030 71059 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  9 09:52:10 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:52:10 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:52:10 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:52:10.371 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:52:10 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:52:11 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:52:11 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000009s ======
Oct  9 09:52:11 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:52:11.524 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Oct  9 09:52:12 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:52:12 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:52:12 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:52:12.373 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:52:13 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:52:13 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:52:13 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:52:13.527 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:52:14 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:52:14 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:52:14 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:52:14.375 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:52:14 compute-1 podman[163831]: 2025-10-09 09:52:14.523183336 +0000 UTC m=+0.035245826 container health_status 5e1150c93314d5b38815fc8130e03b03cc91771cd442ce724d63cd33a7373f75 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  9 09:52:14 compute-1 ceph-mon[9795]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "version", "format": "json"} v 0)
Oct  9 09:52:14 compute-1 ceph-mon[9795]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2909389839' entity='client.openstack' cmd=[{"prefix": "version", "format": "json"}]: dispatch
Oct  9 09:52:14 compute-1 ceph-mon[9795]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "version", "format": "json"} v 0)
Oct  9 09:52:14 compute-1 ceph-mon[9795]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/247613377' entity='client.openstack' cmd=[{"prefix": "version", "format": "json"}]: dispatch
Oct  9 09:52:15 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:52:15 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000009s ======
Oct  9 09:52:15 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:52:15.530 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Oct  9 09:52:15 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:52:16 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:52:16 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:52:16 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:52:16.377 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:52:17 compute-1 podman[163849]: 2025-10-09 09:52:17.532870688 +0000 UTC m=+0.045263075 container health_status a9976f6a2312783f72012e1ec9b07f14297aa62297711f47c0d257996be3427a (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=multipathd, tcib_managed=true, config_id=multipathd, io.buildah.version=1.41.3)
Oct  9 09:52:17 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:52:17 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:52:17 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:52:17.533 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:52:18 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:52:18 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000009s ======
Oct  9 09:52:18 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:52:18.379 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Oct  9 09:52:19 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:52:19 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:52:19 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:52:19.536 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:52:20 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:52:20 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:52:20 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:52:20.382 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:52:20 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:52:21 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:52:21 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:52:21 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:52:21.538 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:52:22 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:52:22 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:52:22 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:52:22.385 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:52:23 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:52:23 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:52:23 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:52:23.541 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:52:24 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:52:24 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:52:24 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:52:24.388 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:52:25 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:52:25 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 09:52:25 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:52:25.543 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 09:52:25 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:52:26 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:52:26 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:52:26 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:52:26.390 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:52:27 compute-1 podman[163871]: 2025-10-09 09:52:27.543309987 +0000 UTC m=+0.056005422 container health_status 36bf385830b85e085dc3ff9729118ef141f0f1a0fdc2513f7947ab6ab795421e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251001, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=ovn_controller)
Oct  9 09:52:27 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:52:27 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:52:27 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:52:27.546 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:52:28 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:52:28 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:52:28 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:52:28.392 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:52:29 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:52:29 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:52:29 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:52:29.548 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:52:30 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:52:30 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:52:30 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:52:30.394 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:52:30 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:52:31 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:52:31 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:52:31 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:52:31.552 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:52:32 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:52:32 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000009s ======
Oct  9 09:52:32 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:52:32.396 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Oct  9 09:52:33 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:52:33 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 09:52:33 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:52:33.553 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 09:52:34 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:52:34 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:52:34 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:52:34.399 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:52:35 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:52:35 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000009s ======
Oct  9 09:52:35 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:52:35.556 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Oct  9 09:52:35 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:52:36 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:52:36 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:52:36 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:52:36.401 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:52:37 compute-1 podman[163925]: 2025-10-09 09:52:37.531347645 +0000 UTC m=+0.042624362 container health_status dd7a5da700b8ba72ceba21d69aecf00384a339e317089fce3f8212a1183599aa (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=iscsid, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team)
Oct  9 09:52:37 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:52:37 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:52:37 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:52:37.559 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:52:38 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:52:38 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 09:52:38 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:52:38.403 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 09:52:39 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:52:39 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:52:39 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:52:39.561 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:52:40 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:52:40 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:52:40 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:52:40.406 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:52:40 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:52:41 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:52:41 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:52:41 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:52:41.563 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:52:42 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:52:42 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:52:42 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:52:42.408 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:52:43 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:52:43 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000009s ======
Oct  9 09:52:43 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:52:43.566 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Oct  9 09:52:44 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:52:44 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:52:44 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:52:44.410 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:52:45 compute-1 podman[163946]: 2025-10-09 09:52:45.525225929 +0000 UTC m=+0.038016088 container health_status 5e1150c93314d5b38815fc8130e03b03cc91771cd442ce724d63cd33a7373f75 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, container_name=ovn_metadata_agent, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct  9 09:52:45 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:52:45 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:52:45 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:52:45.569 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:52:45 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:52:46 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:52:46 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:52:46 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:52:46.412 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:52:47 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:52:47 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:52:47 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:52:47.572 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:52:47 compute-1 podman[163987]: 2025-10-09 09:52:47.887074126 +0000 UTC m=+0.035558085 container health_status a9976f6a2312783f72012e1ec9b07f14297aa62297711f47c0d257996be3427a (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_id=multipathd, org.label-schema.build-date=20251001)
Oct  9 09:52:47 compute-1 nova_compute[162974]: 2025-10-09 09:52:47.979 2 DEBUG oslo_service.periodic_task [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  9 09:52:47 compute-1 nova_compute[162974]: 2025-10-09 09:52:47.980 2 DEBUG oslo_service.periodic_task [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  9 09:52:47 compute-1 nova_compute[162974]: 2025-10-09 09:52:47.991 2 DEBUG oslo_service.periodic_task [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  9 09:52:47 compute-1 nova_compute[162974]: 2025-10-09 09:52:47.991 2 DEBUG oslo_service.periodic_task [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  9 09:52:47 compute-1 nova_compute[162974]: 2025-10-09 09:52:47.991 2 DEBUG oslo_service.periodic_task [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  9 09:52:47 compute-1 nova_compute[162974]: 2025-10-09 09:52:47.991 2 DEBUG oslo_service.periodic_task [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  9 09:52:47 compute-1 nova_compute[162974]: 2025-10-09 09:52:47.991 2 DEBUG oslo_service.periodic_task [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  9 09:52:47 compute-1 nova_compute[162974]: 2025-10-09 09:52:47.992 2 DEBUG nova.compute.manager [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  9 09:52:48 compute-1 nova_compute[162974]: 2025-10-09 09:52:48.113 2 DEBUG oslo_service.periodic_task [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  9 09:52:48 compute-1 nova_compute[162974]: 2025-10-09 09:52:48.113 2 DEBUG nova.compute.manager [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  9 09:52:48 compute-1 nova_compute[162974]: 2025-10-09 09:52:48.113 2 DEBUG nova.compute.manager [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct  9 09:52:48 compute-1 nova_compute[162974]: 2025-10-09 09:52:48.127 2 DEBUG nova.compute.manager [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Oct  9 09:52:48 compute-1 nova_compute[162974]: 2025-10-09 09:52:48.127 2 DEBUG oslo_service.periodic_task [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  9 09:52:48 compute-1 nova_compute[162974]: 2025-10-09 09:52:48.128 2 DEBUG oslo_service.periodic_task [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  9 09:52:48 compute-1 nova_compute[162974]: 2025-10-09 09:52:48.139 2 DEBUG oslo_concurrency.lockutils [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  9 09:52:48 compute-1 nova_compute[162974]: 2025-10-09 09:52:48.139 2 DEBUG oslo_concurrency.lockutils [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  9 09:52:48 compute-1 nova_compute[162974]: 2025-10-09 09:52:48.139 2 DEBUG oslo_concurrency.lockutils [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  9 09:52:48 compute-1 nova_compute[162974]: 2025-10-09 09:52:48.140 2 DEBUG nova.compute.resource_tracker [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  9 09:52:48 compute-1 nova_compute[162974]: 2025-10-09 09:52:48.140 2 DEBUG oslo_concurrency.processutils [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  9 09:52:48 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:52:48 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:52:48 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:52:48.414 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:52:48 compute-1 ceph-mon[9795]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct  9 09:52:48 compute-1 ceph-mon[9795]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/942557203' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  9 09:52:48 compute-1 nova_compute[162974]: 2025-10-09 09:52:48.482 2 DEBUG oslo_concurrency.processutils [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.342s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  9 09:52:48 compute-1 nova_compute[162974]: 2025-10-09 09:52:48.656 2 WARNING nova.virt.libvirt.driver [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  9 09:52:48 compute-1 nova_compute[162974]: 2025-10-09 09:52:48.657 2 DEBUG nova.compute.resource_tracker [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=5411MB free_disk=59.98828125GB free_vcpus=4 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_7", "address": "0000:00:02.7", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_02_01_0", "address": "0000:02:01.0", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_0", "address": "0000:00:1f.0", "product_id": "2918", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2918", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_1", "address": "0000:00:03.1", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_07_00_0", "address": "0000:07:00.0", "product_id": "1041", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1041", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_3", "address": "0000:00:1f.3", "product_id": "2930", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2930", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_2", "address": "0000:00:03.2", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_3", "address": "0000:00:03.3", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": 
"label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_3", "address": "0000:00:02.3", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "29c0", "vendor_id": "8086", "numa_node": null, "label": "label_8086_29c0", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_6", "address": "0000:00:02.6", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_4", "address": "0000:00:02.4", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_5", "address": "0000:00:03.5", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_2", "address": "0000:00:02.2", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_4", "address": "0000:00:03.4", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_05_00_0", "address": "0000:05:00.0", "product_id": "1045", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1045", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_06_00_0", "address": "0000:06:00.0", "product_id": "1044", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1044", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_03_00_0", "address": 
"0000:03:00.0", "product_id": "1041", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1041", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_6", "address": "0000:00:03.6", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_1", "address": "0000:00:02.1", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_7", "address": "0000:00:03.7", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_04_00_0", "address": "0000:04:00.0", "product_id": "1042", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1042", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_2", "address": "0000:00:1f.2", "product_id": "2922", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2922", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_01_00_0", "address": "0000:01:00.0", "product_id": "000e", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000e", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_5", "address": "0000:00:02.5", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  9 09:52:48 compute-1 nova_compute[162974]: 2025-10-09 09:52:48.657 2 DEBUG oslo_concurrency.lockutils [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  9 09:52:48 compute-1 nova_compute[162974]: 2025-10-09 09:52:48.657 2 DEBUG oslo_concurrency.lockutils [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  9 09:52:48 compute-1 nova_compute[162974]: 2025-10-09 09:52:48.701 2 DEBUG nova.compute.resource_tracker [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Total usable vcpus: 4, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  9 09:52:48 compute-1 nova_compute[162974]: 2025-10-09 09:52:48.701 2 DEBUG nova.compute.resource_tracker [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=4 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  9 09:52:48 compute-1 nova_compute[162974]: 2025-10-09 09:52:48.715 2 DEBUG oslo_concurrency.processutils [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  9 09:52:49 compute-1 ceph-mon[9795]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct  9 09:52:49 compute-1 ceph-mon[9795]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3099014491' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  9 09:52:49 compute-1 nova_compute[162974]: 2025-10-09 09:52:49.046 2 DEBUG oslo_concurrency.processutils [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.331s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  9 09:52:49 compute-1 nova_compute[162974]: 2025-10-09 09:52:49.049 2 DEBUG nova.compute.provider_tree [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Inventory has not changed in ProviderTree for provider: 79aa81b0-5a5d-4643-a355-ec5461cb321a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  9 09:52:49 compute-1 nova_compute[162974]: 2025-10-09 09:52:49.062 2 DEBUG nova.scheduler.client.report [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Inventory has not changed for provider 79aa81b0-5a5d-4643-a355-ec5461cb321a based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 4, 'reserved': 0, 'min_unit': 1, 'max_unit': 4, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  9 09:52:49 compute-1 nova_compute[162974]: 2025-10-09 09:52:49.063 2 DEBUG nova.compute.resource_tracker [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  9 09:52:49 compute-1 nova_compute[162974]: 2025-10-09 09:52:49.063 2 DEBUG oslo_concurrency.lockutils [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.406s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  9 09:52:49 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  9 09:52:49 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:52:49 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:52:49 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  9 09:52:49 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:52:49 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 09:52:49 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:52:49.574 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 09:52:50 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:52:50 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:52:50 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:52:50.416 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:52:50 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:52:51 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:52:51 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:52:51 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:52:51.577 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:52:52 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:52:52 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:52:52 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:52:52 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:52:52 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:52:52.418 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:52:53 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:52:53 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:52:53 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:52:53.579 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:52:54 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:52:54 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:52:54 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:52:54.421 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:52:55 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:52:55 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:52:55 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:52:55.581 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:52:55 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:52:56 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:52:56 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:52:56 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:52:56.424 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:52:57 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:52:57 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 09:52:57 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:52:57.583 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 09:52:58 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:52:58 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:52:58 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:52:58.426 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:52:58 compute-1 podman[164158]: 2025-10-09 09:52:58.551323722 +0000 UTC m=+0.062495563 container health_status 36bf385830b85e085dc3ff9729118ef141f0f1a0fdc2513f7947ab6ab795421e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, container_name=ovn_controller, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct  9 09:52:59 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:52:59 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:52:59 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:52:59.586 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:53:00 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:53:00 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:53:00 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:53:00.428 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:53:00 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:53:01 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:53:01 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:53:01 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:53:01.589 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:53:02 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:53:02.047 71059 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=2, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '86:53:6e', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '26:2f:47:35:f4:09'}, ipsec=False) old=SB_Global(nb_cfg=1) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  9 09:53:02 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:53:02.047 71059 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct  9 09:53:02 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:53:02.048 71059 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=1479fb1d-afaa-427a-bdce-40294d3573d2, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '2'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  9 09:53:02 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:53:02 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:53:02 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:53:02.430 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:53:03 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:53:03 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:53:03 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:53:03.592 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:53:04 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:53:04 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:53:04 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:53:04.433 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:53:05 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:53:05 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 09:53:05 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:53:05.594 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 09:53:05 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:53:06 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:53:06 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:53:06 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:53:06.435 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:53:07 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:53:07 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:53:07 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:53:07.597 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:53:08 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:53:08 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:53:08 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:53:08.437 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:53:08 compute-1 podman[164189]: 2025-10-09 09:53:08.52649488 +0000 UTC m=+0.038667827 container health_status dd7a5da700b8ba72ceba21d69aecf00384a339e317089fce3f8212a1183599aa (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=iscsid, container_name=iscsid, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct  9 09:53:09 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:53:09 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000009s ======
Oct  9 09:53:09 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:53:09.600 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Oct  9 09:53:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:53:10.029 71059 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  9 09:53:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:53:10.030 71059 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  9 09:53:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:53:10.030 71059 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  9 09:53:10 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:53:10 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:53:10 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:53:10.439 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:53:10 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:53:11 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:53:11 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 09:53:11 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:53:11.603 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 09:53:12 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:53:12 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:53:12 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:53:12.441 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:53:13 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:53:13 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:53:13 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:53:13.606 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:53:14 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:53:14 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 09:53:14 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:53:14.443 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 09:53:15 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:53:15 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:53:15 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:53:15.607 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:53:15 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:53:16 compute-1 ceph-mon[9795]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #34. Immutable memtables: 0.
Oct  9 09:53:16 compute-1 ceph-mon[9795]: rocksdb: (Original Log Time 2025/10/09-09:53:16.262014) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct  9 09:53:16 compute-1 ceph-mon[9795]: rocksdb: [db/flush_job.cc:856] [default] [JOB 17] Flushing memtable with next log file: 34
Oct  9 09:53:16 compute-1 ceph-mon[9795]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760003596262034, "job": 17, "event": "flush_started", "num_memtables": 1, "num_entries": 1313, "num_deletes": 256, "total_data_size": 3197999, "memory_usage": 3247496, "flush_reason": "Manual Compaction"}
Oct  9 09:53:16 compute-1 ceph-mon[9795]: rocksdb: [db/flush_job.cc:885] [default] [JOB 17] Level-0 flush table #35: started
Oct  9 09:53:16 compute-1 ceph-mon[9795]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760003596267670, "cf_name": "default", "job": 17, "event": "table_file_creation", "file_number": 35, "file_size": 2067926, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 18587, "largest_seqno": 19895, "table_properties": {"data_size": 2062313, "index_size": 2944, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1605, "raw_key_size": 11873, "raw_average_key_size": 18, "raw_value_size": 2050859, "raw_average_value_size": 3270, "num_data_blocks": 132, "num_entries": 627, "num_filter_entries": 627, "num_deletions": 256, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760003492, "oldest_key_time": 1760003492, "file_creation_time": 1760003596, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "94a5d839-0858-4e7b-94a4-0a54b15338db", "db_session_id": "M9CZJU0HKVV71NP1SGV8", "orig_file_number": 35, "seqno_to_time_mapping": "N/A"}}
Oct  9 09:53:16 compute-1 ceph-mon[9795]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 17] Flush lasted 5701 microseconds, and 4094 cpu microseconds.
Oct  9 09:53:16 compute-1 ceph-mon[9795]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  9 09:53:16 compute-1 ceph-mon[9795]: rocksdb: (Original Log Time 2025/10/09-09:53:16.267708) [db/flush_job.cc:967] [default] [JOB 17] Level-0 flush table #35: 2067926 bytes OK
Oct  9 09:53:16 compute-1 ceph-mon[9795]: rocksdb: (Original Log Time 2025/10/09-09:53:16.267725) [db/memtable_list.cc:519] [default] Level-0 commit table #35 started
Oct  9 09:53:16 compute-1 ceph-mon[9795]: rocksdb: (Original Log Time 2025/10/09-09:53:16.268080) [db/memtable_list.cc:722] [default] Level-0 commit table #35: memtable #1 done
Oct  9 09:53:16 compute-1 ceph-mon[9795]: rocksdb: (Original Log Time 2025/10/09-09:53:16.268091) EVENT_LOG_v1 {"time_micros": 1760003596268088, "job": 17, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct  9 09:53:16 compute-1 ceph-mon[9795]: rocksdb: (Original Log Time 2025/10/09-09:53:16.268100) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct  9 09:53:16 compute-1 ceph-mon[9795]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 17] Try to delete WAL files size 3191684, prev total WAL file size 3191684, number of live WAL files 2.
Oct  9 09:53:16 compute-1 ceph-mon[9795]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000031.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  9 09:53:16 compute-1 ceph-mon[9795]: rocksdb: (Original Log Time 2025/10/09-09:53:16.268627) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0030' seq:72057594037927935, type:22 .. '6C6F676D00323533' seq:0, type:0; will stop at (end)
Oct  9 09:53:16 compute-1 ceph-mon[9795]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 18] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct  9 09:53:16 compute-1 ceph-mon[9795]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 17 Base level 0, inputs: [35(2019KB)], [33(11MB)]
Oct  9 09:53:16 compute-1 ceph-mon[9795]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760003596268656, "job": 18, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [35], "files_L6": [33], "score": -1, "input_data_size": 14226966, "oldest_snapshot_seqno": -1}
Oct  9 09:53:16 compute-1 ceph-mon[9795]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 18] Generated table #36: 5016 keys, 13755117 bytes, temperature: kUnknown
Oct  9 09:53:16 compute-1 ceph-mon[9795]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760003596306599, "cf_name": "default", "job": 18, "event": "table_file_creation", "file_number": 36, "file_size": 13755117, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 13719940, "index_size": 21563, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 12549, "raw_key_size": 126919, "raw_average_key_size": 25, "raw_value_size": 13626990, "raw_average_value_size": 2716, "num_data_blocks": 890, "num_entries": 5016, "num_filter_entries": 5016, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760002515, "oldest_key_time": 0, "file_creation_time": 1760003596, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "94a5d839-0858-4e7b-94a4-0a54b15338db", "db_session_id": "M9CZJU0HKVV71NP1SGV8", "orig_file_number": 36, "seqno_to_time_mapping": "N/A"}}
Oct  9 09:53:16 compute-1 ceph-mon[9795]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  9 09:53:16 compute-1 ceph-mon[9795]: rocksdb: (Original Log Time 2025/10/09-09:53:16.306767) [db/compaction/compaction_job.cc:1663] [default] [JOB 18] Compacted 1@0 + 1@6 files to L6 => 13755117 bytes
Oct  9 09:53:16 compute-1 ceph-mon[9795]: rocksdb: (Original Log Time 2025/10/09-09:53:16.307211) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 374.6 rd, 362.2 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.0, 11.6 +0.0 blob) out(13.1 +0.0 blob), read-write-amplify(13.5) write-amplify(6.7) OK, records in: 5542, records dropped: 526 output_compression: NoCompression
Oct  9 09:53:16 compute-1 ceph-mon[9795]: rocksdb: (Original Log Time 2025/10/09-09:53:16.307223) EVENT_LOG_v1 {"time_micros": 1760003596307218, "job": 18, "event": "compaction_finished", "compaction_time_micros": 37980, "compaction_time_cpu_micros": 18793, "output_level": 6, "num_output_files": 1, "total_output_size": 13755117, "num_input_records": 5542, "num_output_records": 5016, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct  9 09:53:16 compute-1 ceph-mon[9795]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000035.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  9 09:53:16 compute-1 ceph-mon[9795]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760003596307482, "job": 18, "event": "table_file_deletion", "file_number": 35}
Oct  9 09:53:16 compute-1 ceph-mon[9795]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000033.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  9 09:53:16 compute-1 ceph-mon[9795]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760003596309063, "job": 18, "event": "table_file_deletion", "file_number": 33}
Oct  9 09:53:16 compute-1 ceph-mon[9795]: rocksdb: (Original Log Time 2025/10/09-09:53:16.268551) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  9 09:53:16 compute-1 ceph-mon[9795]: rocksdb: (Original Log Time 2025/10/09-09:53:16.309082) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  9 09:53:16 compute-1 ceph-mon[9795]: rocksdb: (Original Log Time 2025/10/09-09:53:16.309085) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  9 09:53:16 compute-1 ceph-mon[9795]: rocksdb: (Original Log Time 2025/10/09-09:53:16.309086) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  9 09:53:16 compute-1 ceph-mon[9795]: rocksdb: (Original Log Time 2025/10/09-09:53:16.309087) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  9 09:53:16 compute-1 ceph-mon[9795]: rocksdb: (Original Log Time 2025/10/09-09:53:16.309088) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  9 09:53:16 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:53:16 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:53:16 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:53:16.445 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:53:16 compute-1 podman[164235]: 2025-10-09 09:53:16.523141932 +0000 UTC m=+0.036110256 container health_status 5e1150c93314d5b38815fc8130e03b03cc91771cd442ce724d63cd33a7373f75 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, tcib_managed=true)
Oct  9 09:53:17 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:53:17 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:53:17 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:53:17.610 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:53:18 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:53:18 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:53:18 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:53:18.447 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:53:18 compute-1 podman[164252]: 2025-10-09 09:53:18.558255381 +0000 UTC m=+0.068695868 container health_status a9976f6a2312783f72012e1ec9b07f14297aa62297711f47c0d257996be3427a (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, container_name=multipathd, org.label-schema.build-date=20251001)
Oct  9 09:53:19 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:53:19 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:53:19 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:53:19.613 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:53:20 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:53:20 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 09:53:20 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:53:20.448 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 09:53:20 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:53:21 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:53:21 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:53:21 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:53:21.616 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:53:22 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:53:22 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:53:22 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:53:22.451 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:53:23 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:53:23 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:53:23 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:53:23.618 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:53:24 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:53:24 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:53:24 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:53:24.454 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:53:25 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:53:25 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:53:25 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:53:25.621 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:53:25 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:53:26 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:53:26 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:53:26 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:53:26.456 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:53:27 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:53:27 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:53:27 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:53:27.623 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:53:28 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:53:28 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:53:28 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:53:28.458 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:53:29 compute-1 podman[164274]: 2025-10-09 09:53:29.541598284 +0000 UTC m=+0.053030415 container health_status 36bf385830b85e085dc3ff9729118ef141f0f1a0fdc2513f7947ab6ab795421e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_controller, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct  9 09:53:29 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:53:29 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:53:29 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:53:29.625 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:53:30 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:53:30 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:53:30 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:53:30.461 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:53:30 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:53:31 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:53:31 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:53:31 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:53:31.627 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:53:32 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:53:32 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:53:32 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:53:32.463 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:53:33 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:53:33 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:53:33 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:53:33.630 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:53:34 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:53:34 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000009s ======
Oct  9 09:53:34 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:53:34.465 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Oct  9 09:53:35 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:53:35 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:53:35 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:53:35.634 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:53:35 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:53:36 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:53:36 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:53:36 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:53:36.468 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:53:37 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:53:37 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:53:37 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:53:37.636 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:53:38 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:53:38 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:53:38 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:53:38.470 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:53:39 compute-1 podman[164327]: 2025-10-09 09:53:39.524185321 +0000 UTC m=+0.037441555 container health_status dd7a5da700b8ba72ceba21d69aecf00384a339e317089fce3f8212a1183599aa (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=iscsid, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  9 09:53:39 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:53:39 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:53:39 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:53:39.639 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:53:40 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:53:40 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:53:40 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:53:40.472 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:53:40 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:53:41 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:53:41 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:53:41 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:53:41.641 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:53:42 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:53:42 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:53:42 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:53:42.475 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:53:43 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:53:43 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:53:43 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:53:43.644 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:53:44 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:53:44 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:53:44 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:53:44.478 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:53:45 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:53:45 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:53:45 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:53:45.647 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:53:45 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:53:46 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:53:46 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:53:46 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:53:46.480 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:53:47 compute-1 podman[164348]: 2025-10-09 09:53:47.523304351 +0000 UTC m=+0.036366059 container health_status 5e1150c93314d5b38815fc8130e03b03cc91771cd442ce724d63cd33a7373f75 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent)
Oct  9 09:53:47 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:53:47 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000011s ======
Oct  9 09:53:47 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:53:47.650 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct  9 09:53:48 compute-1 nova_compute[162974]: 2025-10-09 09:53:48.050 2 DEBUG oslo_service.periodic_task [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  9 09:53:48 compute-1 nova_compute[162974]: 2025-10-09 09:53:48.050 2 DEBUG oslo_service.periodic_task [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  9 09:53:48 compute-1 nova_compute[162974]: 2025-10-09 09:53:48.050 2 DEBUG oslo_service.periodic_task [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  9 09:53:48 compute-1 nova_compute[162974]: 2025-10-09 09:53:48.114 2 DEBUG oslo_service.periodic_task [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  9 09:53:48 compute-1 nova_compute[162974]: 2025-10-09 09:53:48.114 2 DEBUG nova.compute.manager [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  9 09:53:48 compute-1 nova_compute[162974]: 2025-10-09 09:53:48.114 2 DEBUG nova.compute.manager [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct  9 09:53:48 compute-1 nova_compute[162974]: 2025-10-09 09:53:48.129 2 DEBUG nova.compute.manager [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Oct  9 09:53:48 compute-1 nova_compute[162974]: 2025-10-09 09:53:48.129 2 DEBUG oslo_service.periodic_task [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  9 09:53:48 compute-1 nova_compute[162974]: 2025-10-09 09:53:48.129 2 DEBUG oslo_service.periodic_task [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  9 09:53:48 compute-1 nova_compute[162974]: 2025-10-09 09:53:48.155 2 DEBUG oslo_concurrency.lockutils [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  9 09:53:48 compute-1 nova_compute[162974]: 2025-10-09 09:53:48.155 2 DEBUG oslo_concurrency.lockutils [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  9 09:53:48 compute-1 nova_compute[162974]: 2025-10-09 09:53:48.155 2 DEBUG oslo_concurrency.lockutils [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  9 09:53:48 compute-1 nova_compute[162974]: 2025-10-09 09:53:48.155 2 DEBUG nova.compute.resource_tracker [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  9 09:53:48 compute-1 nova_compute[162974]: 2025-10-09 09:53:48.156 2 DEBUG oslo_concurrency.processutils [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  9 09:53:48 compute-1 ceph-mon[9795]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct  9 09:53:48 compute-1 ceph-mon[9795]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1414105849' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  9 09:53:48 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:53:48 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000011s ======
Oct  9 09:53:48 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:53:48.482 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct  9 09:53:48 compute-1 nova_compute[162974]: 2025-10-09 09:53:48.490 2 DEBUG oslo_concurrency.processutils [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.334s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  9 09:53:48 compute-1 nova_compute[162974]: 2025-10-09 09:53:48.668 2 WARNING nova.virt.libvirt.driver [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  9 09:53:48 compute-1 nova_compute[162974]: 2025-10-09 09:53:48.669 2 DEBUG nova.compute.resource_tracker [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=5389MB free_disk=59.98828125GB free_vcpus=4 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_7", "address": "0000:00:02.7", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_02_01_0", "address": "0000:02:01.0", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_0", "address": "0000:00:1f.0", "product_id": "2918", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2918", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_1", "address": "0000:00:03.1", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_07_00_0", "address": "0000:07:00.0", "product_id": "1041", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1041", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_3", "address": "0000:00:1f.3", "product_id": "2930", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2930", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_2", "address": "0000:00:03.2", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_3", "address": "0000:00:03.3", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_3", "address": "0000:00:02.3", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "29c0", "vendor_id": "8086", "numa_node": null, "label": "label_8086_29c0", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_6", "address": "0000:00:02.6", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_4", "address": "0000:00:02.4", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_5", "address": "0000:00:03.5", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_2", "address": "0000:00:02.2", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_4", "address": "0000:00:03.4", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_05_00_0", "address": "0000:05:00.0", "product_id": "1045", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1045", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_06_00_0", "address": "0000:06:00.0", "product_id": "1044", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1044", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_03_00_0", "address": "0000:03:00.0", "product_id": "1041", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1041", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_6", "address": "0000:00:03.6", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_1", "address": "0000:00:02.1", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_7", "address": "0000:00:03.7", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_04_00_0", "address": "0000:04:00.0", "product_id": "1042", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1042", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_2", "address": "0000:00:1f.2", "product_id": "2922", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2922", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_01_00_0", "address": "0000:01:00.0", "product_id": "000e", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000e", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_5", "address": "0000:00:02.5", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  9 09:53:48 compute-1 nova_compute[162974]: 2025-10-09 09:53:48.669 2 DEBUG oslo_concurrency.lockutils [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  9 09:53:48 compute-1 nova_compute[162974]: 2025-10-09 09:53:48.669 2 DEBUG oslo_concurrency.lockutils [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  9 09:53:48 compute-1 nova_compute[162974]: 2025-10-09 09:53:48.725 2 DEBUG nova.compute.resource_tracker [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Total usable vcpus: 4, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  9 09:53:48 compute-1 nova_compute[162974]: 2025-10-09 09:53:48.725 2 DEBUG nova.compute.resource_tracker [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=4 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  9 09:53:48 compute-1 nova_compute[162974]: 2025-10-09 09:53:48.745 2 DEBUG oslo_concurrency.processutils [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  9 09:53:49 compute-1 ceph-mon[9795]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct  9 09:53:49 compute-1 ceph-mon[9795]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1280325955' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  9 09:53:49 compute-1 nova_compute[162974]: 2025-10-09 09:53:49.078 2 DEBUG oslo_concurrency.processutils [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.333s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  9 09:53:49 compute-1 nova_compute[162974]: 2025-10-09 09:53:49.081 2 DEBUG nova.compute.provider_tree [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Inventory has not changed in ProviderTree for provider: 79aa81b0-5a5d-4643-a355-ec5461cb321a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  9 09:53:49 compute-1 nova_compute[162974]: 2025-10-09 09:53:49.094 2 DEBUG nova.scheduler.client.report [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Inventory has not changed for provider 79aa81b0-5a5d-4643-a355-ec5461cb321a based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 4, 'reserved': 0, 'min_unit': 1, 'max_unit': 4, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  9 09:53:49 compute-1 nova_compute[162974]: 2025-10-09 09:53:49.095 2 DEBUG nova.compute.resource_tracker [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  9 09:53:49 compute-1 nova_compute[162974]: 2025-10-09 09:53:49.095 2 DEBUG oslo_concurrency.lockutils [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.426s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  9 09:53:49 compute-1 podman[164409]: 2025-10-09 09:53:49.526199583 +0000 UTC m=+0.036260990 container health_status a9976f6a2312783f72012e1ec9b07f14297aa62297711f47c0d257996be3427a (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251001, tcib_managed=true, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=multipathd, io.buildah.version=1.41.3)
Oct  9 09:53:49 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:53:49 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:53:49 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:53:49.653 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:53:50 compute-1 nova_compute[162974]: 2025-10-09 09:53:50.081 2 DEBUG oslo_service.periodic_task [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  9 09:53:50 compute-1 nova_compute[162974]: 2025-10-09 09:53:50.081 2 DEBUG oslo_service.periodic_task [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  9 09:53:50 compute-1 nova_compute[162974]: 2025-10-09 09:53:50.081 2 DEBUG oslo_service.periodic_task [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  9 09:53:50 compute-1 nova_compute[162974]: 2025-10-09 09:53:50.082 2 DEBUG nova.compute.manager [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  9 09:53:50 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:53:50 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:53:50 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:53:50.485 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:53:50 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:53:51 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:53:51 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:53:51 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:53:51.656 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:53:52 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:53:52 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.002000022s ======
Oct  9 09:53:52 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:53:52.488 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000022s
Oct  9 09:53:52 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  9 09:53:52 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:53:52 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:53:52 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  9 09:53:53 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:53:53 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:53:53 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:53:53.658 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:53:54 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:53:54 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000011s ======
Oct  9 09:53:54 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:53:54.492 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct  9 09:53:55 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:53:55 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:53:55 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:53:55.660 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:53:55 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:53:56 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:53:56 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:53:56 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:53:56.496 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:53:56 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:53:56 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:53:57 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:53:57 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:53:57 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:53:57.662 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:53:58 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:53:58 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:53:58 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:53:58.498 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:53:59 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:53:59 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:53:59 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:53:59.664 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:54:00 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:54:00 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000011s ======
Oct  9 09:54:00 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:54:00.499 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct  9 09:54:00 compute-1 podman[164560]: 2025-10-09 09:54:00.567324662 +0000 UTC m=+0.067409308 container health_status 36bf385830b85e085dc3ff9729118ef141f0f1a0fdc2513f7947ab6ab795421e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct  9 09:54:00 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:54:01 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:54:01 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:54:01 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:54:01.667 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:54:02 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:54:02 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000011s ======
Oct  9 09:54:02 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:54:02.502 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct  9 09:54:03 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:54:03 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000011s ======
Oct  9 09:54:03 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:54:03.669 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct  9 09:54:04 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:54:04 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:54:04 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:54:04.505 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:54:05 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:54:05 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000011s ======
Oct  9 09:54:05 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:54:05.672 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct  9 09:54:05 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:54:06 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:54:06 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000011s ======
Oct  9 09:54:06 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:54:06.507 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct  9 09:54:07 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:54:07 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:54:07 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:54:07.676 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:54:08 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:54:08 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:54:08 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:54:08.510 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:54:09 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:54:09 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:54:09 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:54:09.678 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:54:09 compute-1 ceph-mon[9795]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #37. Immutable memtables: 0.
Oct  9 09:54:09 compute-1 ceph-mon[9795]: rocksdb: (Original Log Time 2025/10/09-09:54:09.713767) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct  9 09:54:09 compute-1 ceph-mon[9795]: rocksdb: [db/flush_job.cc:856] [default] [JOB 19] Flushing memtable with next log file: 37
Oct  9 09:54:09 compute-1 ceph-mon[9795]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760003649713813, "job": 19, "event": "flush_started", "num_memtables": 1, "num_entries": 789, "num_deletes": 251, "total_data_size": 1568977, "memory_usage": 1594880, "flush_reason": "Manual Compaction"}
Oct  9 09:54:09 compute-1 ceph-mon[9795]: rocksdb: [db/flush_job.cc:885] [default] [JOB 19] Level-0 flush table #38: started
Oct  9 09:54:09 compute-1 ceph-mon[9795]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760003649717393, "cf_name": "default", "job": 19, "event": "table_file_creation", "file_number": 38, "file_size": 1032670, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 19900, "largest_seqno": 20684, "table_properties": {"data_size": 1028915, "index_size": 1535, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1157, "raw_key_size": 8578, "raw_average_key_size": 19, "raw_value_size": 1021376, "raw_average_value_size": 2316, "num_data_blocks": 68, "num_entries": 441, "num_filter_entries": 441, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760003597, "oldest_key_time": 1760003597, "file_creation_time": 1760003649, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "94a5d839-0858-4e7b-94a4-0a54b15338db", "db_session_id": "M9CZJU0HKVV71NP1SGV8", "orig_file_number": 38, "seqno_to_time_mapping": "N/A"}}
Oct  9 09:54:09 compute-1 ceph-mon[9795]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 19] Flush lasted 3642 microseconds, and 2805 cpu microseconds.
Oct  9 09:54:09 compute-1 ceph-mon[9795]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  9 09:54:09 compute-1 ceph-mon[9795]: rocksdb: (Original Log Time 2025/10/09-09:54:09.717415) [db/flush_job.cc:967] [default] [JOB 19] Level-0 flush table #38: 1032670 bytes OK
Oct  9 09:54:09 compute-1 ceph-mon[9795]: rocksdb: (Original Log Time 2025/10/09-09:54:09.717426) [db/memtable_list.cc:519] [default] Level-0 commit table #38 started
Oct  9 09:54:09 compute-1 ceph-mon[9795]: rocksdb: (Original Log Time 2025/10/09-09:54:09.717993) [db/memtable_list.cc:722] [default] Level-0 commit table #38: memtable #1 done
Oct  9 09:54:09 compute-1 ceph-mon[9795]: rocksdb: (Original Log Time 2025/10/09-09:54:09.718005) EVENT_LOG_v1 {"time_micros": 1760003649718001, "job": 19, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct  9 09:54:09 compute-1 ceph-mon[9795]: rocksdb: (Original Log Time 2025/10/09-09:54:09.718015) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct  9 09:54:09 compute-1 ceph-mon[9795]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 19] Try to delete WAL files size 1564823, prev total WAL file size 1564823, number of live WAL files 2.
Oct  9 09:54:09 compute-1 ceph-mon[9795]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000034.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  9 09:54:09 compute-1 ceph-mon[9795]: rocksdb: (Original Log Time 2025/10/09-09:54:09.718379) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730031323535' seq:72057594037927935, type:22 .. '7061786F730031353037' seq:0, type:0; will stop at (end)
Oct  9 09:54:09 compute-1 ceph-mon[9795]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 20] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct  9 09:54:09 compute-1 ceph-mon[9795]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 19 Base level 0, inputs: [38(1008KB)], [36(13MB)]
Oct  9 09:54:09 compute-1 ceph-mon[9795]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760003649718400, "job": 20, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [38], "files_L6": [36], "score": -1, "input_data_size": 14787787, "oldest_snapshot_seqno": -1}
Oct  9 09:54:09 compute-1 ceph-mon[9795]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 20] Generated table #39: 4941 keys, 12621603 bytes, temperature: kUnknown
Oct  9 09:54:09 compute-1 ceph-mon[9795]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760003649754636, "cf_name": "default", "job": 20, "event": "table_file_creation", "file_number": 39, "file_size": 12621603, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 12587855, "index_size": 20262, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 12421, "raw_key_size": 125985, "raw_average_key_size": 25, "raw_value_size": 12497083, "raw_average_value_size": 2529, "num_data_blocks": 833, "num_entries": 4941, "num_filter_entries": 4941, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760002515, "oldest_key_time": 0, "file_creation_time": 1760003649, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "94a5d839-0858-4e7b-94a4-0a54b15338db", "db_session_id": "M9CZJU0HKVV71NP1SGV8", "orig_file_number": 39, "seqno_to_time_mapping": "N/A"}}
Oct  9 09:54:09 compute-1 ceph-mon[9795]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  9 09:54:09 compute-1 ceph-mon[9795]: rocksdb: (Original Log Time 2025/10/09-09:54:09.754916) [db/compaction/compaction_job.cc:1663] [default] [JOB 20] Compacted 1@0 + 1@6 files to L6 => 12621603 bytes
Oct  9 09:54:09 compute-1 ceph-mon[9795]: rocksdb: (Original Log Time 2025/10/09-09:54:09.755290) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 406.0 rd, 346.5 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.0, 13.1 +0.0 blob) out(12.0 +0.0 blob), read-write-amplify(26.5) write-amplify(12.2) OK, records in: 5457, records dropped: 516 output_compression: NoCompression
Oct  9 09:54:09 compute-1 ceph-mon[9795]: rocksdb: (Original Log Time 2025/10/09-09:54:09.755303) EVENT_LOG_v1 {"time_micros": 1760003649755297, "job": 20, "event": "compaction_finished", "compaction_time_micros": 36422, "compaction_time_cpu_micros": 19733, "output_level": 6, "num_output_files": 1, "total_output_size": 12621603, "num_input_records": 5457, "num_output_records": 4941, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct  9 09:54:09 compute-1 ceph-mon[9795]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000038.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  9 09:54:09 compute-1 ceph-mon[9795]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760003649755872, "job": 20, "event": "table_file_deletion", "file_number": 38}
Oct  9 09:54:09 compute-1 ceph-mon[9795]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000036.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  9 09:54:09 compute-1 ceph-mon[9795]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760003649757885, "job": 20, "event": "table_file_deletion", "file_number": 36}
Oct  9 09:54:09 compute-1 ceph-mon[9795]: rocksdb: (Original Log Time 2025/10/09-09:54:09.718347) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  9 09:54:09 compute-1 ceph-mon[9795]: rocksdb: (Original Log Time 2025/10/09-09:54:09.758011) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  9 09:54:09 compute-1 ceph-mon[9795]: rocksdb: (Original Log Time 2025/10/09-09:54:09.758016) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  9 09:54:09 compute-1 ceph-mon[9795]: rocksdb: (Original Log Time 2025/10/09-09:54:09.758018) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  9 09:54:09 compute-1 ceph-mon[9795]: rocksdb: (Original Log Time 2025/10/09-09:54:09.758019) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  9 09:54:09 compute-1 ceph-mon[9795]: rocksdb: (Original Log Time 2025/10/09-09:54:09.758020) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  9 09:54:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:54:10.031 71059 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  9 09:54:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:54:10.031 71059 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  9 09:54:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:54:10.031 71059 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  9 09:54:10 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:54:10 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:54:10 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:54:10.513 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:54:10 compute-1 podman[164588]: 2025-10-09 09:54:10.543310356 +0000 UTC m=+0.045256437 container health_status dd7a5da700b8ba72ceba21d69aecf00384a339e317089fce3f8212a1183599aa (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, container_name=iscsid, managed_by=edpm_ansible, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  9 09:54:10 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:54:11 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:54:11 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000011s ======
Oct  9 09:54:11 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:54:11.681 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct  9 09:54:12 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:54:12 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:54:12 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:54:12.515 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:54:13 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:54:13 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000011s ======
Oct  9 09:54:13 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:54:13.684 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct  9 09:54:14 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:54:14 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:54:14 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:54:14.518 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:54:15 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:54:15 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:54:15 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:54:15.688 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:54:15 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:54:16 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:54:16 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:54:16 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:54:16.520 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:54:17 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:54:17 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:54:17 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:54:17.690 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:54:18 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:54:18 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:54:18 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:54:18.523 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:54:18 compute-1 podman[164635]: 2025-10-09 09:54:18.541251536 +0000 UTC m=+0.042355586 container health_status 5e1150c93314d5b38815fc8130e03b03cc91771cd442ce724d63cd33a7373f75 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, io.buildah.version=1.41.3, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251001)
Oct  9 09:54:19 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:54:19 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:54:19 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:54:19.693 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:54:20 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:54:20 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:54:20 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:54:20.525 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:54:20 compute-1 podman[164653]: 2025-10-09 09:54:20.532683379 +0000 UTC m=+0.036890507 container health_status a9976f6a2312783f72012e1ec9b07f14297aa62297711f47c0d257996be3427a (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Oct  9 09:54:20 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:54:21 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:54:21 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000011s ======
Oct  9 09:54:21 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:54:21.695 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct  9 09:54:22 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:54:22 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:54:22 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:54:22.527 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:54:23 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:54:23 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:54:23 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:54:23.698 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:54:24 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:54:24 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000011s ======
Oct  9 09:54:24 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:54:24.529 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct  9 09:54:25 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:54:25 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:54:25 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:54:25.702 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:54:25 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:54:26 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:54:26 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:54:26 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:54:26.532 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:54:27 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:54:27 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:54:27 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:54:27.704 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:54:28 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:54:28 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:54:28 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:54:28.535 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:54:28 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:54:28 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:54:28 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - - [09/Oct/2025:09:54:28.650 +0000] "GET /swift/info HTTP/1.1" 200 539 - "python-urllib3/1.26.5" - latency=0.000000000s
Oct  9 09:54:29 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:54:29 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:54:29 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:54:29.707 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:54:30 compute-1 systemd[1]: packagekit.service: Deactivated successfully.
Oct  9 09:54:30 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:54:30 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:54:30 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:54:30.538 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:54:30 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:54:31 compute-1 podman[164699]: 2025-10-09 09:54:31.327253001 +0000 UTC m=+0.060011194 container health_status 36bf385830b85e085dc3ff9729118ef141f0f1a0fdc2513f7947ab6ab795421e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, container_name=ovn_controller, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_controller, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Oct  9 09:54:31 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:54:31 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000011s ======
Oct  9 09:54:31 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:54:31.709 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct  9 09:54:32 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:54:32 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:54:32 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:54:32.540 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:54:32 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e134 e134: 3 total, 3 up, 3 in
Oct  9 09:54:33 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:54:33 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:54:33 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:54:33.713 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:54:33 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e135 e135: 3 total, 3 up, 3 in
Oct  9 09:54:34 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:54:34 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:54:34 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:54:34.543 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:54:34 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e136 e136: 3 total, 3 up, 3 in
Oct  9 09:54:35 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:54:35 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:54:35 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000011s ======
Oct  9 09:54:35 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:54:35.715 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct  9 09:54:35 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e137 e137: 3 total, 3 up, 3 in
Oct  9 09:54:36 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:54:36 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:54:36 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:54:36.546 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:54:37 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:54:37 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:54:37 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:54:37.717 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:54:38 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:54:38 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:54:38 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:54:38.548 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:54:39 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:54:39 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:54:39 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:54:39.720 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:54:40 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:54:40 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:54:40 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:54:40.551 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:54:40 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:54:41 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e138 e138: 3 total, 3 up, 3 in
Oct  9 09:54:41 compute-1 podman[164729]: 2025-10-09 09:54:41.534293165 +0000 UTC m=+0.040872609 container health_status dd7a5da700b8ba72ceba21d69aecf00384a339e317089fce3f8212a1183599aa (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_id=iscsid, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Oct  9 09:54:41 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:54:41 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:54:41 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:54:41.723 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:54:42 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:54:42 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:54:42 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:54:42.554 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:54:43 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:54:43 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:54:43 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:54:43.726 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:54:44 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:54:44 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:54:44 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:54:44.556 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:54:45 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:54:45 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:54:45 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000011s ======
Oct  9 09:54:45 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:54:45.728 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct  9 09:54:46 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:54:46 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:54:46 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:54:46.558 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:54:47 compute-1 nova_compute[162974]: 2025-10-09 09:54:47.110 2 DEBUG oslo_service.periodic_task [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  9 09:54:47 compute-1 nova_compute[162974]: 2025-10-09 09:54:47.110 2 DEBUG oslo_service.periodic_task [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  9 09:54:47 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:54:47 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:54:47 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:54:47.731 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:54:48 compute-1 nova_compute[162974]: 2025-10-09 09:54:48.114 2 DEBUG oslo_service.periodic_task [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  9 09:54:48 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:54:48 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000011s ======
Oct  9 09:54:48 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:54:48.561 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct  9 09:54:49 compute-1 nova_compute[162974]: 2025-10-09 09:54:49.114 2 DEBUG oslo_service.periodic_task [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  9 09:54:49 compute-1 nova_compute[162974]: 2025-10-09 09:54:49.114 2 DEBUG oslo_service.periodic_task [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  9 09:54:49 compute-1 nova_compute[162974]: 2025-10-09 09:54:49.114 2 DEBUG oslo_service.periodic_task [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  9 09:54:49 compute-1 nova_compute[162974]: 2025-10-09 09:54:49.114 2 DEBUG oslo_service.periodic_task [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  9 09:54:49 compute-1 nova_compute[162974]: 2025-10-09 09:54:49.115 2 DEBUG nova.compute.manager [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  9 09:54:49 compute-1 nova_compute[162974]: 2025-10-09 09:54:49.115 2 DEBUG oslo_service.periodic_task [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  9 09:54:49 compute-1 nova_compute[162974]: 2025-10-09 09:54:49.185 2 DEBUG oslo_concurrency.lockutils [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  9 09:54:49 compute-1 nova_compute[162974]: 2025-10-09 09:54:49.185 2 DEBUG oslo_concurrency.lockutils [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  9 09:54:49 compute-1 nova_compute[162974]: 2025-10-09 09:54:49.185 2 DEBUG oslo_concurrency.lockutils [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  9 09:54:49 compute-1 nova_compute[162974]: 2025-10-09 09:54:49.185 2 DEBUG nova.compute.resource_tracker [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  9 09:54:49 compute-1 nova_compute[162974]: 2025-10-09 09:54:49.185 2 DEBUG oslo_concurrency.processutils [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  9 09:54:49 compute-1 ceph-mon[9795]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct  9 09:54:49 compute-1 ceph-mon[9795]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1882083905' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  9 09:54:49 compute-1 ceph-mon[9795]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct  9 09:54:49 compute-1 ceph-mon[9795]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3874590807' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  9 09:54:49 compute-1 podman[164770]: 2025-10-09 09:54:49.53225795 +0000 UTC m=+0.042135671 container health_status 5e1150c93314d5b38815fc8130e03b03cc91771cd442ce724d63cd33a7373f75 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Oct  9 09:54:49 compute-1 nova_compute[162974]: 2025-10-09 09:54:49.534 2 DEBUG oslo_concurrency.processutils [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.348s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  9 09:54:49 compute-1 nova_compute[162974]: 2025-10-09 09:54:49.727 2 WARNING nova.virt.libvirt.driver [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  9 09:54:49 compute-1 nova_compute[162974]: 2025-10-09 09:54:49.728 2 DEBUG nova.compute.resource_tracker [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=5430MB free_disk=59.98828125GB free_vcpus=4 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_7", "address": "0000:00:02.7", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_02_01_0", "address": "0000:02:01.0", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_0", "address": "0000:00:1f.0", "product_id": "2918", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2918", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_1", "address": "0000:00:03.1", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_07_00_0", "address": "0000:07:00.0", "product_id": "1041", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1041", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_3", "address": "0000:00:1f.3", "product_id": "2930", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2930", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_2", "address": "0000:00:03.2", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_3", "address": "0000:00:03.3", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": 
"label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_3", "address": "0000:00:02.3", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "29c0", "vendor_id": "8086", "numa_node": null, "label": "label_8086_29c0", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_6", "address": "0000:00:02.6", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_4", "address": "0000:00:02.4", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_5", "address": "0000:00:03.5", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_2", "address": "0000:00:02.2", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_4", "address": "0000:00:03.4", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_05_00_0", "address": "0000:05:00.0", "product_id": "1045", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1045", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_06_00_0", "address": "0000:06:00.0", "product_id": "1044", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1044", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_03_00_0", "address": 
"0000:03:00.0", "product_id": "1041", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1041", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_6", "address": "0000:00:03.6", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_1", "address": "0000:00:02.1", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_7", "address": "0000:00:03.7", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_04_00_0", "address": "0000:04:00.0", "product_id": "1042", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1042", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_2", "address": "0000:00:1f.2", "product_id": "2922", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2922", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_01_00_0", "address": "0000:01:00.0", "product_id": "000e", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000e", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_5", "address": "0000:00:02.5", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  9 09:54:49 compute-1 nova_compute[162974]: 2025-10-09 09:54:49.728 2 DEBUG oslo_concurrency.lockutils [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  9 09:54:49 compute-1 nova_compute[162974]: 2025-10-09 09:54:49.729 2 DEBUG oslo_concurrency.lockutils [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  9 09:54:49 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:54:49 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:54:49 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:54:49.733 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:54:49 compute-1 nova_compute[162974]: 2025-10-09 09:54:49.803 2 DEBUG nova.compute.resource_tracker [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Total usable vcpus: 4, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  9 09:54:49 compute-1 nova_compute[162974]: 2025-10-09 09:54:49.804 2 DEBUG nova.compute.resource_tracker [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=4 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  9 09:54:49 compute-1 nova_compute[162974]: 2025-10-09 09:54:49.828 2 DEBUG oslo_concurrency.processutils [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  9 09:54:50 compute-1 nova_compute[162974]: 2025-10-09 09:54:50.167 2 DEBUG oslo_concurrency.processutils [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.338s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  9 09:54:50 compute-1 nova_compute[162974]: 2025-10-09 09:54:50.170 2 DEBUG nova.compute.provider_tree [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Inventory has not changed in ProviderTree for provider: 79aa81b0-5a5d-4643-a355-ec5461cb321a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  9 09:54:50 compute-1 nova_compute[162974]: 2025-10-09 09:54:50.185 2 DEBUG nova.scheduler.client.report [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Inventory has not changed for provider 79aa81b0-5a5d-4643-a355-ec5461cb321a based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 4, 'reserved': 0, 'min_unit': 1, 'max_unit': 4, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  9 09:54:50 compute-1 nova_compute[162974]: 2025-10-09 09:54:50.186 2 DEBUG nova.compute.resource_tracker [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  9 09:54:50 compute-1 nova_compute[162974]: 2025-10-09 09:54:50.187 2 DEBUG oslo_concurrency.lockutils [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.458s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  9 09:54:50 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:54:50 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:54:50 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:54:50.564 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:54:50 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:54:51 compute-1 nova_compute[162974]: 2025-10-09 09:54:51.186 2 DEBUG oslo_service.periodic_task [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  9 09:54:51 compute-1 nova_compute[162974]: 2025-10-09 09:54:51.187 2 DEBUG nova.compute.manager [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  9 09:54:51 compute-1 nova_compute[162974]: 2025-10-09 09:54:51.187 2 DEBUG nova.compute.manager [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct  9 09:54:51 compute-1 nova_compute[162974]: 2025-10-09 09:54:51.199 2 DEBUG nova.compute.manager [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Oct  9 09:54:51 compute-1 nova_compute[162974]: 2025-10-09 09:54:51.199 2 DEBUG oslo_service.periodic_task [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  9 09:54:51 compute-1 podman[164834]: 2025-10-09 09:54:51.392502629 +0000 UTC m=+0.069863076 container health_status a9976f6a2312783f72012e1ec9b07f14297aa62297711f47c0d257996be3427a (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, container_name=multipathd, managed_by=edpm_ansible)
Oct  9 09:54:51 compute-1 ceph-osd[7514]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct  9 09:54:51 compute-1 ceph-osd[7514]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 1200.0 total, 600.0 interval#012Cumulative writes: 9273 writes, 35K keys, 9273 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.02 MB/s#012Cumulative WAL: 9273 writes, 2281 syncs, 4.07 writes per sync, written: 0.02 GB, 0.02 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 860 writes, 1592 keys, 860 commit groups, 1.0 writes per commit group, ingest: 0.67 MB, 0.00 MB/s#012Interval WAL: 860 writes, 406 syncs, 2.12 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0#012 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) 
Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1200.0 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x560c99273350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.3e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **#012#012** Compaction Stats [m-0] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) 
Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-0] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1200.0 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x560c99273350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 
collections: 3 last_copies: 8 last_secs: 3.3e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [m-0] **#012#012** Compaction Stats [m-1] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-1] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1200.0 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 
0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable
Oct  9 09:54:51 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:54:51 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:54:51 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:54:51.736 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:54:52 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:54:52.051 71059 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=3, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '86:53:6e', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '26:2f:47:35:f4:09'}, ipsec=False) old=SB_Global(nb_cfg=2) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  9 09:54:52 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:54:52.052 71059 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 5 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct  9 09:54:52 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:54:52 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:54:52 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:54:52.566 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:54:53 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:54:53 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000011s ======
Oct  9 09:54:53 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:54:53.737 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct  9 09:54:54 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:54:54 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:54:54 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:54:54.570 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:54:55 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:54:55 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:54:55 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000011s ======
Oct  9 09:54:55 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:54:55.740 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct  9 09:54:56 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:54:56 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:54:56 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:54:56.572 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:54:57 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:54:57.054 71059 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=1479fb1d-afaa-427a-bdce-40294d3573d2, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '3'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  9 09:54:57 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:54:57 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:54:57 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  9 09:54:57 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:54:57 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:54:57 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  9 09:54:57 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:54:57 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:54:57 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:54:57.743 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:54:58 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:54:58 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:54:58 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:54:58.574 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:54:59 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:54:59 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000011s ======
Oct  9 09:54:59 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:54:59.746 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct  9 09:55:00 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:55:00 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:55:00 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:55:00.577 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:55:00 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:55:01 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:55:01 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:55:01 compute-1 podman[164962]: 2025-10-09 09:55:01.573314443 +0000 UTC m=+0.083667140 container health_status 36bf385830b85e085dc3ff9729118ef141f0f1a0fdc2513f7947ab6ab795421e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, org.label-schema.license=GPLv2, config_id=ovn_controller, io.buildah.version=1.41.3)
Oct  9 09:55:01 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:55:01 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000011s ======
Oct  9 09:55:01 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:55:01.749 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct  9 09:55:02 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:55:02 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:55:02 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:55:02.579 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:55:03 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:55:03 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:55:03 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:55:03.752 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:55:04 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:55:04 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:55:04 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:55:04.583 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:55:05 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:55:05 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:55:05 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:55:05 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:55:05.756 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:55:06 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:55:06 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:55:06 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:55:06.585 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:55:07 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:55:07 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:55:07 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:55:07.758 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:55:08 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:55:08 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:55:08 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:55:08.588 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:55:09 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:55:09 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000011s ======
Oct  9 09:55:09 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:55:09.760 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct  9 09:55:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:55:10.033 71059 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  9 09:55:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:55:10.033 71059 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  9 09:55:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:55:10.033 71059 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  9 09:55:10 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:55:10 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:55:10 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:55:10.591 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:55:10 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:55:11 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:55:11 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000011s ======
Oct  9 09:55:11 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:55:11.763 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct  9 09:55:12 compute-1 podman[165015]: 2025-10-09 09:55:12.525228165 +0000 UTC m=+0.036430240 container health_status dd7a5da700b8ba72ceba21d69aecf00384a339e317089fce3f8212a1183599aa (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid)
Oct  9 09:55:12 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:55:12 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:55:12 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:55:12.594 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:55:13 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:55:13 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000011s ======
Oct  9 09:55:13 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:55:13.765 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct  9 09:55:14 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:55:14 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:55:14 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:55:14.596 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:55:14 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e139 e139: 3 total, 3 up, 3 in
Oct  9 09:55:15 compute-1 ceph-mon[9795]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct  9 09:55:15 compute-1 ceph-mon[9795]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 1200.0 total, 600.0 interval#012Cumulative writes: 3895 writes, 21K keys, 3895 commit groups, 1.0 writes per commit group, ingest: 0.05 GB, 0.05 MB/s#012Cumulative WAL: 3895 writes, 3895 syncs, 1.00 writes per sync, written: 0.05 GB, 0.05 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 1462 writes, 7096 keys, 1462 commit groups, 1.0 writes per commit group, ingest: 16.83 MB, 0.03 MB/s#012Interval WAL: 1462 writes, 1462 syncs, 1.00 writes per sync, written: 0.02 GB, 0.03 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0    449.0      0.07              0.05        10    0.007       0      0       0.0       0.0#012  L6      1/0   12.04 MB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   3.5    438.6    372.1      0.31              0.17         9    0.034     42K   4797       0.0       0.0#012 Sum      1/0   12.04 MB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   4.5    355.2    386.8      0.38              0.22        19    0.020     42K   4797       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.1     0.0      0.0       0.1      0.0       0.0   5.5    344.8    350.5      0.18              0.10         8    0.022     22K   2557       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Low      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   0.0    438.6    372.1      0.31              0.17         9    0.034     42K   4797       0.0       0.0#012High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0    453.8      0.07              0.05         9    0.008       0      0       0.0       0.0#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      2.0      0.00              0.00         1    0.001       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1200.0 total, 600.0 interval#012Flush(GB): cumulative 0.032, interval 0.011#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.14 GB write, 0.12 MB/s write, 0.13 GB read, 0.11 MB/s read, 0.4 seconds#012Interval compaction: 0.06 GB write, 0.10 MB/s write, 0.06 GB read, 0.10 MB/s read, 0.2 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55e4b55c29b0#2 capacity: 304.00 MB usage: 8.17 MB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 0 last_secs: 6.9e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(485,7.81 MB,2.57052%) FilterBlock(19,127.80 KB,0.0410532%) IndexBlock(19,240.41 KB,0.0772275%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **
Oct  9 09:55:15 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:55:15 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:55:15 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000011s ======
Oct  9 09:55:15 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:55:15.768 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct  9 09:55:15 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e140 e140: 3 total, 3 up, 3 in
Oct  9 09:55:16 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:55:16 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000011s ======
Oct  9 09:55:16 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:55:16.599 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct  9 09:55:17 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:55:17 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000011s ======
Oct  9 09:55:17 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:55:17.771 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct  9 09:55:18 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:55:18 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:55:18 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:55:18.602 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:55:19 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:55:19 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000011s ======
Oct  9 09:55:19 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:55:19.775 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct  9 09:55:20 compute-1 podman[165036]: 2025-10-09 09:55:20.529249983 +0000 UTC m=+0.038949712 container health_status 5e1150c93314d5b38815fc8130e03b03cc91771cd442ce724d63cd33a7373f75 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_managed=true, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001)
Oct  9 09:55:20 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:55:20 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:55:20 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:55:20.605 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:55:20 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:55:21 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e141 e141: 3 total, 3 up, 3 in
Oct  9 09:55:21 compute-1 podman[165054]: 2025-10-09 09:55:21.525969183 +0000 UTC m=+0.037218658 container health_status a9976f6a2312783f72012e1ec9b07f14297aa62297711f47c0d257996be3427a (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=multipathd)
Oct  9 09:55:21 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:55:21 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000011s ======
Oct  9 09:55:21 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:55:21.778 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct  9 09:55:22 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:55:22 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:55:22 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:55:22.607 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:55:23 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:55:23 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000011s ======
Oct  9 09:55:23 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:55:23.781 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct  9 09:55:24 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:55:24 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:55:24 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:55:24.610 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:55:25 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:55:25 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:55:25 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:55:25 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:55:25.785 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:55:26 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:55:26 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:55:26 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:55:26.612 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:55:27 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:55:27 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:55:27 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:55:27.788 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:55:28 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:55:28 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000011s ======
Oct  9 09:55:28 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:55:28.614 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct  9 09:55:29 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:55:29 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000011s ======
Oct  9 09:55:29 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:55:29.790 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct  9 09:55:30 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:55:30 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:55:30 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:55:30.617 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:55:30 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:55:31 compute-1 ceph-osd[7514]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #43. Immutable memtables: 0.
Oct  9 09:55:31 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:55:31 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000011s ======
Oct  9 09:55:31 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:55:31.793 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct  9 09:55:32 compute-1 podman[165102]: 2025-10-09 09:55:32.548265166 +0000 UTC m=+0.060499997 container health_status 36bf385830b85e085dc3ff9729118ef141f0f1a0fdc2513f7947ab6ab795421e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=ovn_controller, org.label-schema.schema-version=1.0, config_id=ovn_controller, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Oct  9 09:55:32 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:55:32 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000011s ======
Oct  9 09:55:32 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:55:32.619 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct  9 09:55:33 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:55:33 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:55:33 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:55:33.795 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:55:34 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:55:34 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:55:34 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:55:34.622 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:55:35 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:55:35 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:55:35 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:55:35 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:55:35.797 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:55:36 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:55:36 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:55:36 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:55:36.625 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:55:37 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:55:37 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:55:37 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:55:37.800 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:55:38 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:55:38 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000011s ======
Oct  9 09:55:38 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:55:38.627 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct  9 09:55:39 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:55:39 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:55:39 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:55:39.803 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:55:40 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:55:40 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:55:40 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:55:40.631 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:55:40 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:55:41 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:55:41 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:55:41 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:55:41.806 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:55:42 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:55:42 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:55:42 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:55:42.634 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:55:43 compute-1 podman[165131]: 2025-10-09 09:55:43.52441749 +0000 UTC m=+0.036137318 container health_status dd7a5da700b8ba72ceba21d69aecf00384a339e317089fce3f8212a1183599aa (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, container_name=iscsid, managed_by=edpm_ansible, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid)
Oct  9 09:55:43 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:55:43 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:55:43 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:55:43.808 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:55:44 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:55:44 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:55:44 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:55:44.637 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:55:45 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:55:45 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:55:45 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000011s ======
Oct  9 09:55:45 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:55:45.810 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct  9 09:55:46 compute-1 nova_compute[162974]: 2025-10-09 09:55:46.114 2 DEBUG oslo_service.periodic_task [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  9 09:55:46 compute-1 nova_compute[162974]: 2025-10-09 09:55:46.115 2 DEBUG nova.compute.manager [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Oct  9 09:55:46 compute-1 nova_compute[162974]: 2025-10-09 09:55:46.128 2 DEBUG nova.compute.manager [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Oct  9 09:55:46 compute-1 nova_compute[162974]: 2025-10-09 09:55:46.129 2 DEBUG oslo_service.periodic_task [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  9 09:55:46 compute-1 nova_compute[162974]: 2025-10-09 09:55:46.129 2 DEBUG nova.compute.manager [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Oct  9 09:55:46 compute-1 nova_compute[162974]: 2025-10-09 09:55:46.136 2 DEBUG oslo_service.periodic_task [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  9 09:55:46 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:55:46 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000011s ======
Oct  9 09:55:46 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:55:46.639 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct  9 09:55:47 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:55:47 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:55:47 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:55:47.813 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:55:48 compute-1 nova_compute[162974]: 2025-10-09 09:55:48.138 2 DEBUG oslo_service.periodic_task [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  9 09:55:48 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:55:48 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:55:48 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:55:48.643 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:55:49 compute-1 nova_compute[162974]: 2025-10-09 09:55:49.035 2 DEBUG oslo_concurrency.lockutils [None req-7d5aab9b-4357-46e8-9828-e19d56601951 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Acquiring lock "27831bd3-a756-4807-b9da-7be12d549265" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  9 09:55:49 compute-1 nova_compute[162974]: 2025-10-09 09:55:49.036 2 DEBUG oslo_concurrency.lockutils [None req-7d5aab9b-4357-46e8-9828-e19d56601951 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Lock "27831bd3-a756-4807-b9da-7be12d549265" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  9 09:55:49 compute-1 nova_compute[162974]: 2025-10-09 09:55:49.047 2 DEBUG nova.compute.manager [None req-7d5aab9b-4357-46e8-9828-e19d56601951 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: 27831bd3-a756-4807-b9da-7be12d549265] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct  9 09:55:49 compute-1 nova_compute[162974]: 2025-10-09 09:55:49.113 2 DEBUG oslo_service.periodic_task [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  9 09:55:49 compute-1 nova_compute[162974]: 2025-10-09 09:55:49.114 2 DEBUG oslo_service.periodic_task [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  9 09:55:49 compute-1 nova_compute[162974]: 2025-10-09 09:55:49.130 2 DEBUG oslo_concurrency.lockutils [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  9 09:55:49 compute-1 nova_compute[162974]: 2025-10-09 09:55:49.130 2 DEBUG oslo_concurrency.lockutils [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  9 09:55:49 compute-1 nova_compute[162974]: 2025-10-09 09:55:49.130 2 DEBUG oslo_concurrency.lockutils [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  9 09:55:49 compute-1 nova_compute[162974]: 2025-10-09 09:55:49.131 2 DEBUG nova.compute.resource_tracker [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  9 09:55:49 compute-1 nova_compute[162974]: 2025-10-09 09:55:49.131 2 DEBUG oslo_concurrency.processutils [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  9 09:55:49 compute-1 nova_compute[162974]: 2025-10-09 09:55:49.141 2 DEBUG oslo_concurrency.lockutils [None req-7d5aab9b-4357-46e8-9828-e19d56601951 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  9 09:55:49 compute-1 nova_compute[162974]: 2025-10-09 09:55:49.142 2 DEBUG oslo_concurrency.lockutils [None req-7d5aab9b-4357-46e8-9828-e19d56601951 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  9 09:55:49 compute-1 nova_compute[162974]: 2025-10-09 09:55:49.146 2 DEBUG nova.virt.hardware [None req-7d5aab9b-4357-46e8-9828-e19d56601951 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct  9 09:55:49 compute-1 nova_compute[162974]: 2025-10-09 09:55:49.147 2 INFO nova.compute.claims [None req-7d5aab9b-4357-46e8-9828-e19d56601951 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: 27831bd3-a756-4807-b9da-7be12d549265] Claim successful on node compute-1.ctlplane.example.com#033[00m
Oct  9 09:55:49 compute-1 nova_compute[162974]: 2025-10-09 09:55:49.206 2 DEBUG nova.scheduler.client.report [None req-7d5aab9b-4357-46e8-9828-e19d56601951 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Refreshing inventories for resource provider 79aa81b0-5a5d-4643-a355-ec5461cb321a _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Oct  9 09:55:49 compute-1 nova_compute[162974]: 2025-10-09 09:55:49.258 2 DEBUG nova.scheduler.client.report [None req-7d5aab9b-4357-46e8-9828-e19d56601951 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Updating ProviderTree inventory for provider 79aa81b0-5a5d-4643-a355-ec5461cb321a from _refresh_and_get_inventory using data: {'VCPU': {'total': 4, 'reserved': 0, 'min_unit': 1, 'max_unit': 4, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Oct  9 09:55:49 compute-1 nova_compute[162974]: 2025-10-09 09:55:49.258 2 DEBUG nova.compute.provider_tree [None req-7d5aab9b-4357-46e8-9828-e19d56601951 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Updating inventory in ProviderTree for provider 79aa81b0-5a5d-4643-a355-ec5461cb321a with inventory: {'VCPU': {'total': 4, 'reserved': 0, 'min_unit': 1, 'max_unit': 4, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Oct  9 09:55:49 compute-1 nova_compute[162974]: 2025-10-09 09:55:49.272 2 DEBUG nova.scheduler.client.report [None req-7d5aab9b-4357-46e8-9828-e19d56601951 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Refreshing aggregate associations for resource provider 79aa81b0-5a5d-4643-a355-ec5461cb321a, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Oct  9 09:55:49 compute-1 nova_compute[162974]: 2025-10-09 09:55:49.289 2 DEBUG nova.scheduler.client.report [None req-7d5aab9b-4357-46e8-9828-e19d56601951 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Refreshing trait associations for resource provider 79aa81b0-5a5d-4643-a355-ec5461cb321a, traits: HW_CPU_X86_AESNI,HW_CPU_X86_MMX,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_GRAPHICS_MODEL_NONE,HW_CPU_X86_BMI,HW_CPU_X86_AMD_SVM,COMPUTE_GRAPHICS_MODEL_CIRRUS,HW_CPU_X86_SSE4A,HW_CPU_X86_ABM,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_STORAGE_BUS_SATA,COMPUTE_STORAGE_BUS_FDC,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_DEVICE_TAGGING,HW_CPU_X86_SHA,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_IDE,COMPUTE_VOLUME_ATTACH_WITH_TAG,HW_CPU_X86_SSE,COMPUTE_NET_VIF_MODEL_RTL8139,HW_CPU_X86_BMI2,COMPUTE_VIOMMU_MODEL_INTEL,HW_CPU_X86_SSE2,COMPUTE_STORAGE_BUS_USB,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_F16C,COMPUTE_SECURITY_TPM_1_2,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_TRUSTED_CERTS,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_PCNET,HW_CPU_X86_SSSE3,COMPUTE_NODE,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_GRAPHICS_MODEL_VGA,HW_CPU_X86_SVM,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_SECURITY_TPM_2_0,HW_CPU_X86_AVX,COMPUTE_VOLUME_EXTEND,HW_CPU_X86_SSE42,HW_CPU_X86_AVX2,HW_CPU_X86_FMA3,COMPUTE_RESCUE_BFV,HW_CPU_X86_SSE41,COMPUTE_NET_ATTACH_INTERFACE,HW_CPU_X86_AVX512VPCLMULQDQ,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,HW_CPU_X86_CLMUL,HW_CPU_X86_AVX512VAES,COMPUTE_ACCELERATORS,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_NET_VIF_MODEL_E1000E _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Oct  9 09:55:49 compute-1 nova_compute[162974]: 2025-10-09 09:55:49.310 2 DEBUG oslo_concurrency.processutils [None req-7d5aab9b-4357-46e8-9828-e19d56601951 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  9 09:55:49 compute-1 ceph-mon[9795]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct  9 09:55:49 compute-1 ceph-mon[9795]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2405746203' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  9 09:55:49 compute-1 nova_compute[162974]: 2025-10-09 09:55:49.468 2 DEBUG oslo_concurrency.processutils [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.338s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  9 09:55:49 compute-1 nova_compute[162974]: 2025-10-09 09:55:49.654 2 WARNING nova.virt.libvirt.driver [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  9 09:55:49 compute-1 nova_compute[162974]: 2025-10-09 09:55:49.655 2 DEBUG nova.compute.resource_tracker [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=5370MB free_disk=59.94271469116211GB free_vcpus=4 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_7", "address": "0000:00:02.7", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_02_01_0", "address": "0000:02:01.0", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_0", "address": "0000:00:1f.0", "product_id": "2918", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2918", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_1", "address": "0000:00:03.1", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_07_00_0", "address": "0000:07:00.0", "product_id": "1041", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1041", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_3", "address": "0000:00:1f.3", "product_id": "2930", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2930", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_2", "address": "0000:00:03.2", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_3", "address": "0000:00:03.3", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_3", "address": "0000:00:02.3", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "29c0", "vendor_id": "8086", "numa_node": null, "label": "label_8086_29c0", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_6", "address": "0000:00:02.6", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_4", "address": "0000:00:02.4", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_5", "address": "0000:00:03.5", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_2", "address": "0000:00:02.2", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_4", "address": "0000:00:03.4", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_05_00_0", "address": "0000:05:00.0", "product_id": "1045", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1045", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_06_00_0", "address": "0000:06:00.0", "product_id": "1044", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1044", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_03_00_0", "address": "0000:03:00.0", "product_id": "1041", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1041", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_6", "address": "0000:00:03.6", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_1", "address": "0000:00:02.1", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_7", "address": "0000:00:03.7", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_04_00_0", "address": "0000:04:00.0", "product_id": "1042", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1042", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_2", "address": "0000:00:1f.2", "product_id": "2922", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2922", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_01_00_0", "address": "0000:01:00.0", "product_id": "000e", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000e", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_5", "address": "0000:00:02.5", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  9 09:55:49 compute-1 nova_compute[162974]: 2025-10-09 09:55:49.655 2 DEBUG oslo_concurrency.lockutils [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  9 09:55:49 compute-1 ceph-mon[9795]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct  9 09:55:49 compute-1 ceph-mon[9795]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/292366459' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  9 09:55:49 compute-1 nova_compute[162974]: 2025-10-09 09:55:49.689 2 DEBUG oslo_concurrency.processutils [None req-7d5aab9b-4357-46e8-9828-e19d56601951 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.379s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  9 09:55:49 compute-1 nova_compute[162974]: 2025-10-09 09:55:49.693 2 DEBUG nova.compute.provider_tree [None req-7d5aab9b-4357-46e8-9828-e19d56601951 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Inventory has not changed in ProviderTree for provider: 79aa81b0-5a5d-4643-a355-ec5461cb321a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  9 09:55:49 compute-1 nova_compute[162974]: 2025-10-09 09:55:49.704 2 DEBUG nova.scheduler.client.report [None req-7d5aab9b-4357-46e8-9828-e19d56601951 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Inventory has not changed for provider 79aa81b0-5a5d-4643-a355-ec5461cb321a based on inventory data: {'VCPU': {'total': 4, 'reserved': 0, 'min_unit': 1, 'max_unit': 4, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  9 09:55:49 compute-1 nova_compute[162974]: 2025-10-09 09:55:49.715 2 DEBUG oslo_concurrency.lockutils [None req-7d5aab9b-4357-46e8-9828-e19d56601951 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.573s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  9 09:55:49 compute-1 nova_compute[162974]: 2025-10-09 09:55:49.716 2 DEBUG nova.compute.manager [None req-7d5aab9b-4357-46e8-9828-e19d56601951 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: 27831bd3-a756-4807-b9da-7be12d549265] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct  9 09:55:49 compute-1 nova_compute[162974]: 2025-10-09 09:55:49.718 2 DEBUG oslo_concurrency.lockutils [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.062s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  9 09:55:49 compute-1 nova_compute[162974]: 2025-10-09 09:55:49.758 2 DEBUG nova.compute.manager [None req-7d5aab9b-4357-46e8-9828-e19d56601951 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: 27831bd3-a756-4807-b9da-7be12d549265] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct  9 09:55:49 compute-1 nova_compute[162974]: 2025-10-09 09:55:49.759 2 DEBUG nova.network.neutron [None req-7d5aab9b-4357-46e8-9828-e19d56601951 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: 27831bd3-a756-4807-b9da-7be12d549265] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct  9 09:55:49 compute-1 nova_compute[162974]: 2025-10-09 09:55:49.793 2 INFO nova.virt.libvirt.driver [None req-7d5aab9b-4357-46e8-9828-e19d56601951 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: 27831bd3-a756-4807-b9da-7be12d549265] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct  9 09:55:49 compute-1 nova_compute[162974]: 2025-10-09 09:55:49.809 2 DEBUG nova.compute.resource_tracker [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Instance 27831bd3-a756-4807-b9da-7be12d549265 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct  9 09:55:49 compute-1 nova_compute[162974]: 2025-10-09 09:55:49.809 2 DEBUG nova.compute.resource_tracker [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Total usable vcpus: 4, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  9 09:55:49 compute-1 nova_compute[162974]: 2025-10-09 09:55:49.809 2 DEBUG nova.compute.resource_tracker [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7680MB used_ram=640MB phys_disk=59GB used_disk=1GB total_vcpus=4 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  9 09:55:49 compute-1 nova_compute[162974]: 2025-10-09 09:55:49.812 2 DEBUG nova.compute.manager [None req-7d5aab9b-4357-46e8-9828-e19d56601951 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: 27831bd3-a756-4807-b9da-7be12d549265] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct  9 09:55:49 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:55:49 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:55:49 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:55:49.815 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:55:49 compute-1 nova_compute[162974]: 2025-10-09 09:55:49.849 2 DEBUG oslo_concurrency.processutils [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  9 09:55:49 compute-1 nova_compute[162974]: 2025-10-09 09:55:49.878 2 DEBUG nova.compute.manager [None req-7d5aab9b-4357-46e8-9828-e19d56601951 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: 27831bd3-a756-4807-b9da-7be12d549265] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct  9 09:55:49 compute-1 nova_compute[162974]: 2025-10-09 09:55:49.879 2 DEBUG nova.virt.libvirt.driver [None req-7d5aab9b-4357-46e8-9828-e19d56601951 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: 27831bd3-a756-4807-b9da-7be12d549265] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct  9 09:55:49 compute-1 nova_compute[162974]: 2025-10-09 09:55:49.880 2 INFO nova.virt.libvirt.driver [None req-7d5aab9b-4357-46e8-9828-e19d56601951 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: 27831bd3-a756-4807-b9da-7be12d549265] Creating image(s)#033[00m
Oct  9 09:55:49 compute-1 nova_compute[162974]: 2025-10-09 09:55:49.899 2 DEBUG nova.storage.rbd_utils [None req-7d5aab9b-4357-46e8-9828-e19d56601951 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] rbd image 27831bd3-a756-4807-b9da-7be12d549265_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  9 09:55:49 compute-1 nova_compute[162974]: 2025-10-09 09:55:49.916 2 DEBUG nova.storage.rbd_utils [None req-7d5aab9b-4357-46e8-9828-e19d56601951 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] rbd image 27831bd3-a756-4807-b9da-7be12d549265_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  9 09:55:49 compute-1 nova_compute[162974]: 2025-10-09 09:55:49.935 2 DEBUG nova.storage.rbd_utils [None req-7d5aab9b-4357-46e8-9828-e19d56601951 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] rbd image 27831bd3-a756-4807-b9da-7be12d549265_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  9 09:55:49 compute-1 nova_compute[162974]: 2025-10-09 09:55:49.937 2 DEBUG oslo_concurrency.lockutils [None req-7d5aab9b-4357-46e8-9828-e19d56601951 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Acquiring lock "5c8d02c7691a8289e33d8b283b22550ff081dadb" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  9 09:55:49 compute-1 nova_compute[162974]: 2025-10-09 09:55:49.937 2 DEBUG oslo_concurrency.lockutils [None req-7d5aab9b-4357-46e8-9828-e19d56601951 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Lock "5c8d02c7691a8289e33d8b283b22550ff081dadb" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  9 09:55:50 compute-1 nova_compute[162974]: 2025-10-09 09:55:50.192 2 DEBUG oslo_concurrency.processutils [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.343s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  9 09:55:50 compute-1 nova_compute[162974]: 2025-10-09 09:55:50.196 2 DEBUG nova.compute.provider_tree [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Inventory has not changed in ProviderTree for provider: 79aa81b0-5a5d-4643-a355-ec5461cb321a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  9 09:55:50 compute-1 nova_compute[162974]: 2025-10-09 09:55:50.212 2 DEBUG nova.scheduler.client.report [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Inventory has not changed for provider 79aa81b0-5a5d-4643-a355-ec5461cb321a based on inventory data: {'VCPU': {'total': 4, 'reserved': 0, 'min_unit': 1, 'max_unit': 4, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  9 09:55:50 compute-1 nova_compute[162974]: 2025-10-09 09:55:50.224 2 DEBUG nova.compute.resource_tracker [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  9 09:55:50 compute-1 nova_compute[162974]: 2025-10-09 09:55:50.224 2 DEBUG oslo_concurrency.lockutils [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.507s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  9 09:55:50 compute-1 nova_compute[162974]: 2025-10-09 09:55:50.259 2 DEBUG nova.virt.libvirt.imagebackend [None req-7d5aab9b-4357-46e8-9828-e19d56601951 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Image locations are: [{'url': 'rbd://286f8bf0-da72-5823-9a4e-ac4457d9e609/images/9546778e-959c-466e-9bef-81ace5bd1cc5/snap', 'metadata': {'store': 'default_backend'}}, {'url': 'rbd://286f8bf0-da72-5823-9a4e-ac4457d9e609/images/9546778e-959c-466e-9bef-81ace5bd1cc5/snap', 'metadata': {}}] clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1085#033[00m
Oct  9 09:55:50 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:55:50 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000011s ======
Oct  9 09:55:50 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:55:50.645 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct  9 09:55:50 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:55:50 compute-1 nova_compute[162974]: 2025-10-09 09:55:50.766 2 DEBUG oslo_concurrency.processutils [None req-7d5aab9b-4357-46e8-9828-e19d56601951 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/5c8d02c7691a8289e33d8b283b22550ff081dadb.part --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  9 09:55:50 compute-1 nova_compute[162974]: 2025-10-09 09:55:50.812 2 DEBUG oslo_concurrency.processutils [None req-7d5aab9b-4357-46e8-9828-e19d56601951 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/5c8d02c7691a8289e33d8b283b22550ff081dadb.part --force-share --output=json" returned: 0 in 0.046s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  9 09:55:50 compute-1 nova_compute[162974]: 2025-10-09 09:55:50.813 2 DEBUG nova.virt.images [None req-7d5aab9b-4357-46e8-9828-e19d56601951 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] 9546778e-959c-466e-9bef-81ace5bd1cc5 was qcow2, converting to raw fetch_to_raw /usr/lib/python3.9/site-packages/nova/virt/images.py:242#033[00m
Oct  9 09:55:50 compute-1 nova_compute[162974]: 2025-10-09 09:55:50.814 2 DEBUG nova.privsep.utils [None req-7d5aab9b-4357-46e8-9828-e19d56601951 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Path '/var/lib/nova/instances' supports direct I/O supports_direct_io /usr/lib/python3.9/site-packages/nova/privsep/utils.py:63#033[00m
Oct  9 09:55:50 compute-1 nova_compute[162974]: 2025-10-09 09:55:50.814 2 DEBUG oslo_concurrency.processutils [None req-7d5aab9b-4357-46e8-9828-e19d56601951 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Running cmd (subprocess): qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/5c8d02c7691a8289e33d8b283b22550ff081dadb.part /var/lib/nova/instances/_base/5c8d02c7691a8289e33d8b283b22550ff081dadb.converted execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  9 09:55:50 compute-1 nova_compute[162974]: 2025-10-09 09:55:50.869 2 DEBUG oslo_concurrency.processutils [None req-7d5aab9b-4357-46e8-9828-e19d56601951 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] CMD "qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/5c8d02c7691a8289e33d8b283b22550ff081dadb.part /var/lib/nova/instances/_base/5c8d02c7691a8289e33d8b283b22550ff081dadb.converted" returned: 0 in 0.055s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  9 09:55:50 compute-1 nova_compute[162974]: 2025-10-09 09:55:50.872 2 DEBUG oslo_concurrency.processutils [None req-7d5aab9b-4357-46e8-9828-e19d56601951 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/5c8d02c7691a8289e33d8b283b22550ff081dadb.converted --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  9 09:55:50 compute-1 nova_compute[162974]: 2025-10-09 09:55:50.916 2 DEBUG oslo_concurrency.processutils [None req-7d5aab9b-4357-46e8-9828-e19d56601951 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/5c8d02c7691a8289e33d8b283b22550ff081dadb.converted --force-share --output=json" returned: 0 in 0.043s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  9 09:55:50 compute-1 nova_compute[162974]: 2025-10-09 09:55:50.917 2 DEBUG oslo_concurrency.lockutils [None req-7d5aab9b-4357-46e8-9828-e19d56601951 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Lock "5c8d02c7691a8289e33d8b283b22550ff081dadb" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.979s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  9 09:55:50 compute-1 nova_compute[162974]: 2025-10-09 09:55:50.933 2 DEBUG nova.storage.rbd_utils [None req-7d5aab9b-4357-46e8-9828-e19d56601951 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] rbd image 27831bd3-a756-4807-b9da-7be12d549265_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  9 09:55:50 compute-1 nova_compute[162974]: 2025-10-09 09:55:50.935 2 DEBUG oslo_concurrency.processutils [None req-7d5aab9b-4357-46e8-9828-e19d56601951 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/5c8d02c7691a8289e33d8b283b22550ff081dadb 27831bd3-a756-4807-b9da-7be12d549265_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  9 09:55:50 compute-1 nova_compute[162974]: 2025-10-09 09:55:50.956 2 WARNING oslo_policy.policy [None req-7d5aab9b-4357-46e8-9828-e19d56601951 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] JSON formatted policy_file support is deprecated since Victoria release. You need to use YAML format which will be default in future. You can use ``oslopolicy-convert-json-to-yaml`` tool to convert existing JSON-formatted policy file to YAML-formatted in backward compatible way: https://docs.openstack.org/oslo.policy/latest/cli/oslopolicy-convert-json-to-yaml.html.#033[00m
Oct  9 09:55:50 compute-1 nova_compute[162974]: 2025-10-09 09:55:50.956 2 WARNING oslo_policy.policy [None req-7d5aab9b-4357-46e8-9828-e19d56601951 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] JSON formatted policy_file support is deprecated since Victoria release. You need to use YAML format which will be default in future. You can use ``oslopolicy-convert-json-to-yaml`` tool to convert existing JSON-formatted policy file to YAML-formatted in backward compatible way: https://docs.openstack.org/oslo.policy/latest/cli/oslopolicy-convert-json-to-yaml.html.#033[00m
Oct  9 09:55:50 compute-1 nova_compute[162974]: 2025-10-09 09:55:50.958 2 DEBUG nova.policy [None req-7d5aab9b-4357-46e8-9828-e19d56601951 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '2351e05157514d1995a1ea4151d12fee', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'c69d102fb5504f48809f5fc47f1cb831', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct  9 09:55:51 compute-1 nova_compute[162974]: 2025-10-09 09:55:51.100 2 DEBUG oslo_concurrency.processutils [None req-7d5aab9b-4357-46e8-9828-e19d56601951 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/5c8d02c7691a8289e33d8b283b22550ff081dadb 27831bd3-a756-4807-b9da-7be12d549265_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.165s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  9 09:55:51 compute-1 nova_compute[162974]: 2025-10-09 09:55:51.140 2 DEBUG nova.storage.rbd_utils [None req-7d5aab9b-4357-46e8-9828-e19d56601951 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] resizing rbd image 27831bd3-a756-4807-b9da-7be12d549265_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct  9 09:55:51 compute-1 nova_compute[162974]: 2025-10-09 09:55:51.193 2 DEBUG nova.objects.instance [None req-7d5aab9b-4357-46e8-9828-e19d56601951 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Lazy-loading 'migration_context' on Instance uuid 27831bd3-a756-4807-b9da-7be12d549265 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  9 09:55:51 compute-1 nova_compute[162974]: 2025-10-09 09:55:51.203 2 DEBUG nova.virt.libvirt.driver [None req-7d5aab9b-4357-46e8-9828-e19d56601951 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: 27831bd3-a756-4807-b9da-7be12d549265] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct  9 09:55:51 compute-1 nova_compute[162974]: 2025-10-09 09:55:51.203 2 DEBUG nova.virt.libvirt.driver [None req-7d5aab9b-4357-46e8-9828-e19d56601951 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: 27831bd3-a756-4807-b9da-7be12d549265] Ensure instance console log exists: /var/lib/nova/instances/27831bd3-a756-4807-b9da-7be12d549265/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct  9 09:55:51 compute-1 nova_compute[162974]: 2025-10-09 09:55:51.204 2 DEBUG oslo_concurrency.lockutils [None req-7d5aab9b-4357-46e8-9828-e19d56601951 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  9 09:55:51 compute-1 nova_compute[162974]: 2025-10-09 09:55:51.204 2 DEBUG oslo_concurrency.lockutils [None req-7d5aab9b-4357-46e8-9828-e19d56601951 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  9 09:55:51 compute-1 nova_compute[162974]: 2025-10-09 09:55:51.204 2 DEBUG oslo_concurrency.lockutils [None req-7d5aab9b-4357-46e8-9828-e19d56601951 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  9 09:55:51 compute-1 nova_compute[162974]: 2025-10-09 09:55:51.225 2 DEBUG oslo_service.periodic_task [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  9 09:55:51 compute-1 nova_compute[162974]: 2025-10-09 09:55:51.225 2 DEBUG nova.compute.manager [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  9 09:55:51 compute-1 nova_compute[162974]: 2025-10-09 09:55:51.225 2 DEBUG nova.compute.manager [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct  9 09:55:51 compute-1 nova_compute[162974]: 2025-10-09 09:55:51.233 2 DEBUG nova.compute.manager [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] [instance: 27831bd3-a756-4807-b9da-7be12d549265] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871#033[00m
Oct  9 09:55:51 compute-1 nova_compute[162974]: 2025-10-09 09:55:51.233 2 DEBUG nova.compute.manager [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Oct  9 09:55:51 compute-1 nova_compute[162974]: 2025-10-09 09:55:51.234 2 DEBUG oslo_service.periodic_task [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  9 09:55:51 compute-1 nova_compute[162974]: 2025-10-09 09:55:51.234 2 DEBUG oslo_service.periodic_task [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  9 09:55:51 compute-1 nova_compute[162974]: 2025-10-09 09:55:51.234 2 DEBUG oslo_service.periodic_task [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  9 09:55:51 compute-1 nova_compute[162974]: 2025-10-09 09:55:51.234 2 DEBUG oslo_service.periodic_task [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  9 09:55:51 compute-1 nova_compute[162974]: 2025-10-09 09:55:51.235 2 DEBUG nova.compute.manager [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  9 09:55:51 compute-1 podman[165394]: 2025-10-09 09:55:51.532122718 +0000 UTC m=+0.039652588 container health_status 5e1150c93314d5b38815fc8130e03b03cc91771cd442ce724d63cd33a7373f75 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct  9 09:55:51 compute-1 ceph-mon[9795]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct  9 09:55:51 compute-1 ceph-mon[9795]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3208036889' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  9 09:55:51 compute-1 podman[165435]: 2025-10-09 09:55:51.591262679 +0000 UTC m=+0.040856940 container health_status a9976f6a2312783f72012e1ec9b07f14297aa62297711f47c0d257996be3427a (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true)
Oct  9 09:55:51 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:55:51 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:55:51 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:55:51.818 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:55:52 compute-1 nova_compute[162974]: 2025-10-09 09:55:52.054 2 DEBUG nova.network.neutron [None req-7d5aab9b-4357-46e8-9828-e19d56601951 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: 27831bd3-a756-4807-b9da-7be12d549265] Successfully created port: 89605073-2c16-4e83-a34b-96c0ad203677 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct  9 09:55:52 compute-1 nova_compute[162974]: 2025-10-09 09:55:52.114 2 DEBUG oslo_service.periodic_task [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  9 09:55:52 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:55:52 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:55:52 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:55:52.648 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:55:53 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:55:53.198 71059 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=4, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '86:53:6e', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '26:2f:47:35:f4:09'}, ipsec=False) old=SB_Global(nb_cfg=3) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  9 09:55:53 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:55:53.199 71059 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 2 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct  9 09:55:53 compute-1 nova_compute[162974]: 2025-10-09 09:55:53.432 2 DEBUG nova.network.neutron [None req-7d5aab9b-4357-46e8-9828-e19d56601951 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: 27831bd3-a756-4807-b9da-7be12d549265] Successfully updated port: 89605073-2c16-4e83-a34b-96c0ad203677 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct  9 09:55:53 compute-1 nova_compute[162974]: 2025-10-09 09:55:53.443 2 DEBUG oslo_concurrency.lockutils [None req-7d5aab9b-4357-46e8-9828-e19d56601951 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Acquiring lock "refresh_cache-27831bd3-a756-4807-b9da-7be12d549265" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  9 09:55:53 compute-1 nova_compute[162974]: 2025-10-09 09:55:53.444 2 DEBUG oslo_concurrency.lockutils [None req-7d5aab9b-4357-46e8-9828-e19d56601951 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Acquired lock "refresh_cache-27831bd3-a756-4807-b9da-7be12d549265" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  9 09:55:53 compute-1 nova_compute[162974]: 2025-10-09 09:55:53.444 2 DEBUG nova.network.neutron [None req-7d5aab9b-4357-46e8-9828-e19d56601951 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: 27831bd3-a756-4807-b9da-7be12d549265] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  9 09:55:53 compute-1 nova_compute[162974]: 2025-10-09 09:55:53.502 2 DEBUG nova.compute.manager [req-66cb1aac-c008-4e08-accc-efe0d0d56577 req-a9f4496a-8348-4b24-8b35-e8e76f8356ac b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: 27831bd3-a756-4807-b9da-7be12d549265] Received event network-changed-89605073-2c16-4e83-a34b-96c0ad203677 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  9 09:55:53 compute-1 nova_compute[162974]: 2025-10-09 09:55:53.502 2 DEBUG nova.compute.manager [req-66cb1aac-c008-4e08-accc-efe0d0d56577 req-a9f4496a-8348-4b24-8b35-e8e76f8356ac b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: 27831bd3-a756-4807-b9da-7be12d549265] Refreshing instance network info cache due to event network-changed-89605073-2c16-4e83-a34b-96c0ad203677. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  9 09:55:53 compute-1 nova_compute[162974]: 2025-10-09 09:55:53.502 2 DEBUG oslo_concurrency.lockutils [req-66cb1aac-c008-4e08-accc-efe0d0d56577 req-a9f4496a-8348-4b24-8b35-e8e76f8356ac b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] Acquiring lock "refresh_cache-27831bd3-a756-4807-b9da-7be12d549265" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  9 09:55:53 compute-1 nova_compute[162974]: 2025-10-09 09:55:53.565 2 DEBUG nova.network.neutron [None req-7d5aab9b-4357-46e8-9828-e19d56601951 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: 27831bd3-a756-4807-b9da-7be12d549265] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct  9 09:55:53 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:55:53 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:55:53 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:55:53.820 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:55:54 compute-1 nova_compute[162974]: 2025-10-09 09:55:54.063 2 DEBUG nova.network.neutron [None req-7d5aab9b-4357-46e8-9828-e19d56601951 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: 27831bd3-a756-4807-b9da-7be12d549265] Updating instance_info_cache with network_info: [{"id": "89605073-2c16-4e83-a34b-96c0ad203677", "address": "fa:16:3e:d8:82:c8", "network": {"id": "ca25ffbc-c518-421a-acbc-33327ba74e5f", "bridge": "br-int", "label": "tempest-network-smoke--826907634", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": "10.100.0.17", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.29", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c69d102fb5504f48809f5fc47f1cb831", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap89605073-2c", "ovs_interfaceid": "89605073-2c16-4e83-a34b-96c0ad203677", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  9 09:55:54 compute-1 nova_compute[162974]: 2025-10-09 09:55:54.075 2 DEBUG oslo_concurrency.lockutils [None req-7d5aab9b-4357-46e8-9828-e19d56601951 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Releasing lock "refresh_cache-27831bd3-a756-4807-b9da-7be12d549265" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  9 09:55:54 compute-1 nova_compute[162974]: 2025-10-09 09:55:54.075 2 DEBUG nova.compute.manager [None req-7d5aab9b-4357-46e8-9828-e19d56601951 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: 27831bd3-a756-4807-b9da-7be12d549265] Instance network_info: |[{"id": "89605073-2c16-4e83-a34b-96c0ad203677", "address": "fa:16:3e:d8:82:c8", "network": {"id": "ca25ffbc-c518-421a-acbc-33327ba74e5f", "bridge": "br-int", "label": "tempest-network-smoke--826907634", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": "10.100.0.17", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.29", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c69d102fb5504f48809f5fc47f1cb831", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap89605073-2c", "ovs_interfaceid": "89605073-2c16-4e83-a34b-96c0ad203677", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct  9 09:55:54 compute-1 nova_compute[162974]: 2025-10-09 09:55:54.076 2 DEBUG oslo_concurrency.lockutils [req-66cb1aac-c008-4e08-accc-efe0d0d56577 req-a9f4496a-8348-4b24-8b35-e8e76f8356ac b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] Acquired lock "refresh_cache-27831bd3-a756-4807-b9da-7be12d549265" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  9 09:55:54 compute-1 nova_compute[162974]: 2025-10-09 09:55:54.076 2 DEBUG nova.network.neutron [req-66cb1aac-c008-4e08-accc-efe0d0d56577 req-a9f4496a-8348-4b24-8b35-e8e76f8356ac b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: 27831bd3-a756-4807-b9da-7be12d549265] Refreshing network info cache for port 89605073-2c16-4e83-a34b-96c0ad203677 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  9 09:55:54 compute-1 nova_compute[162974]: 2025-10-09 09:55:54.078 2 DEBUG nova.virt.libvirt.driver [None req-7d5aab9b-4357-46e8-9828-e19d56601951 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: 27831bd3-a756-4807-b9da-7be12d549265] Start _get_guest_xml network_info=[{"id": "89605073-2c16-4e83-a34b-96c0ad203677", "address": "fa:16:3e:d8:82:c8", "network": {"id": "ca25ffbc-c518-421a-acbc-33327ba74e5f", "bridge": "br-int", "label": "tempest-network-smoke--826907634", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": "10.100.0.17", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.29", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c69d102fb5504f48809f5fc47f1cb831", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap89605073-2c", "ovs_interfaceid": "89605073-2c16-4e83-a34b-96c0ad203677", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-09T09:54:31Z,direct_url=<?>,disk_format='qcow2',id=9546778e-959c-466e-9bef-81ace5bd1cc5,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='a53d5690b6a54109990182326650a2b8',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-09T09:54:34Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_options': None, 'size': 0, 'device_type': 'disk', 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'guest_format': None, 'boot_index': 0, 'device_name': '/dev/vda', 'encrypted': False, 'encryption_format': None, 'image_id': '9546778e-959c-466e-9bef-81ace5bd1cc5'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct  9 09:55:54 compute-1 nova_compute[162974]: 2025-10-09 09:55:54.081 2 WARNING nova.virt.libvirt.driver [None req-7d5aab9b-4357-46e8-9828-e19d56601951 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  9 09:55:54 compute-1 nova_compute[162974]: 2025-10-09 09:55:54.087 2 DEBUG nova.virt.libvirt.host [None req-7d5aab9b-4357-46e8-9828-e19d56601951 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct  9 09:55:54 compute-1 nova_compute[162974]: 2025-10-09 09:55:54.087 2 DEBUG nova.virt.libvirt.host [None req-7d5aab9b-4357-46e8-9828-e19d56601951 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct  9 09:55:54 compute-1 nova_compute[162974]: 2025-10-09 09:55:54.089 2 DEBUG nova.virt.libvirt.host [None req-7d5aab9b-4357-46e8-9828-e19d56601951 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct  9 09:55:54 compute-1 nova_compute[162974]: 2025-10-09 09:55:54.090 2 DEBUG nova.virt.libvirt.host [None req-7d5aab9b-4357-46e8-9828-e19d56601951 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct  9 09:55:54 compute-1 nova_compute[162974]: 2025-10-09 09:55:54.090 2 DEBUG nova.virt.libvirt.driver [None req-7d5aab9b-4357-46e8-9828-e19d56601951 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct  9 09:55:54 compute-1 nova_compute[162974]: 2025-10-09 09:55:54.090 2 DEBUG nova.virt.hardware [None req-7d5aab9b-4357-46e8-9828-e19d56601951 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-09T09:54:30Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='6c4b2ce4-c9d2-467c-bac4-dc6a1184a891',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-09T09:54:31Z,direct_url=<?>,disk_format='qcow2',id=9546778e-959c-466e-9bef-81ace5bd1cc5,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='a53d5690b6a54109990182326650a2b8',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-09T09:54:34Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct  9 09:55:54 compute-1 nova_compute[162974]: 2025-10-09 09:55:54.091 2 DEBUG nova.virt.hardware [None req-7d5aab9b-4357-46e8-9828-e19d56601951 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct  9 09:55:54 compute-1 nova_compute[162974]: 2025-10-09 09:55:54.091 2 DEBUG nova.virt.hardware [None req-7d5aab9b-4357-46e8-9828-e19d56601951 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct  9 09:55:54 compute-1 nova_compute[162974]: 2025-10-09 09:55:54.091 2 DEBUG nova.virt.hardware [None req-7d5aab9b-4357-46e8-9828-e19d56601951 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct  9 09:55:54 compute-1 nova_compute[162974]: 2025-10-09 09:55:54.091 2 DEBUG nova.virt.hardware [None req-7d5aab9b-4357-46e8-9828-e19d56601951 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct  9 09:55:54 compute-1 nova_compute[162974]: 2025-10-09 09:55:54.092 2 DEBUG nova.virt.hardware [None req-7d5aab9b-4357-46e8-9828-e19d56601951 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct  9 09:55:54 compute-1 nova_compute[162974]: 2025-10-09 09:55:54.092 2 DEBUG nova.virt.hardware [None req-7d5aab9b-4357-46e8-9828-e19d56601951 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct  9 09:55:54 compute-1 nova_compute[162974]: 2025-10-09 09:55:54.092 2 DEBUG nova.virt.hardware [None req-7d5aab9b-4357-46e8-9828-e19d56601951 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct  9 09:55:54 compute-1 nova_compute[162974]: 2025-10-09 09:55:54.092 2 DEBUG nova.virt.hardware [None req-7d5aab9b-4357-46e8-9828-e19d56601951 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct  9 09:55:54 compute-1 nova_compute[162974]: 2025-10-09 09:55:54.092 2 DEBUG nova.virt.hardware [None req-7d5aab9b-4357-46e8-9828-e19d56601951 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct  9 09:55:54 compute-1 nova_compute[162974]: 2025-10-09 09:55:54.092 2 DEBUG nova.virt.hardware [None req-7d5aab9b-4357-46e8-9828-e19d56601951 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct  9 09:55:54 compute-1 nova_compute[162974]: 2025-10-09 09:55:54.096 2 DEBUG nova.privsep.utils [None req-7d5aab9b-4357-46e8-9828-e19d56601951 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Path '/var/lib/nova/instances' supports direct I/O supports_direct_io /usr/lib/python3.9/site-packages/nova/privsep/utils.py:63#033[00m
Oct  9 09:55:54 compute-1 nova_compute[162974]: 2025-10-09 09:55:54.096 2 DEBUG oslo_concurrency.processutils [None req-7d5aab9b-4357-46e8-9828-e19d56601951 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  9 09:55:54 compute-1 ceph-mon[9795]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Oct  9 09:55:54 compute-1 ceph-mon[9795]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3156609996' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  9 09:55:54 compute-1 nova_compute[162974]: 2025-10-09 09:55:54.437 2 DEBUG oslo_concurrency.processutils [None req-7d5aab9b-4357-46e8-9828-e19d56601951 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.341s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  9 09:55:54 compute-1 nova_compute[162974]: 2025-10-09 09:55:54.454 2 DEBUG nova.storage.rbd_utils [None req-7d5aab9b-4357-46e8-9828-e19d56601951 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] rbd image 27831bd3-a756-4807-b9da-7be12d549265_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  9 09:55:54 compute-1 nova_compute[162974]: 2025-10-09 09:55:54.456 2 DEBUG oslo_concurrency.processutils [None req-7d5aab9b-4357-46e8-9828-e19d56601951 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  9 09:55:54 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:55:54 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:55:54 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:55:54.651 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:55:54 compute-1 ceph-mon[9795]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Oct  9 09:55:54 compute-1 ceph-mon[9795]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2397916184' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  9 09:55:54 compute-1 nova_compute[162974]: 2025-10-09 09:55:54.794 2 DEBUG oslo_concurrency.processutils [None req-7d5aab9b-4357-46e8-9828-e19d56601951 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.338s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  9 09:55:54 compute-1 nova_compute[162974]: 2025-10-09 09:55:54.796 2 DEBUG nova.virt.libvirt.vif [None req-7d5aab9b-4357-46e8-9828-e19d56601951 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-09T09:55:48Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-452258331',display_name='tempest-TestNetworkBasicOps-server-452258331',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-452258331',id=2,image_ref='9546778e-959c-466e-9bef-81ace5bd1cc5',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBODLXk0rzffmHcNbUDYGLfUDc9LvP6gD0Cl2kTpN0VYCCLdQjTmH7i6AAWYqub8jT4Jlgu+DRbDcF0CjszX7mILwKGtZArFBrJ9e1Ud75exDORK7fEHNnUEihiwx6WpTPg==',key_name='tempest-TestNetworkBasicOps-68447822',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='c69d102fb5504f48809f5fc47f1cb831',ramdisk_id='',reservation_id='r-g89vp4u8',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='9546778e-959c-466e-9bef-81ace5bd1cc5',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-74406332',owner_user_name='tempest-TestNetworkBasicOps-74406332-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-09T09:55:49Z,user_data=None,user_id='2351e05157514d1995a1ea4151d12fee',uuid=27831bd3-a756-4807-b9da-7be12d549265,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "89605073-2c16-4e83-a34b-96c0ad203677", "address": "fa:16:3e:d8:82:c8", "network": {"id": "ca25ffbc-c518-421a-acbc-33327ba74e5f", "bridge": "br-int", "label": "tempest-network-smoke--826907634", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": "10.100.0.17", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.29", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": 
false, "tenant_id": "c69d102fb5504f48809f5fc47f1cb831", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap89605073-2c", "ovs_interfaceid": "89605073-2c16-4e83-a34b-96c0ad203677", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  9 09:55:54 compute-1 nova_compute[162974]: 2025-10-09 09:55:54.796 2 DEBUG nova.network.os_vif_util [None req-7d5aab9b-4357-46e8-9828-e19d56601951 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Converting VIF {"id": "89605073-2c16-4e83-a34b-96c0ad203677", "address": "fa:16:3e:d8:82:c8", "network": {"id": "ca25ffbc-c518-421a-acbc-33327ba74e5f", "bridge": "br-int", "label": "tempest-network-smoke--826907634", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": "10.100.0.17", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.29", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c69d102fb5504f48809f5fc47f1cb831", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap89605073-2c", "ovs_interfaceid": "89605073-2c16-4e83-a34b-96c0ad203677", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  9 09:55:54 compute-1 nova_compute[162974]: 2025-10-09 09:55:54.797 2 DEBUG nova.network.os_vif_util [None req-7d5aab9b-4357-46e8-9828-e19d56601951 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d8:82:c8,bridge_name='br-int',has_traffic_filtering=True,id=89605073-2c16-4e83-a34b-96c0ad203677,network=Network(ca25ffbc-c518-421a-acbc-33327ba74e5f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap89605073-2c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  9 09:55:54 compute-1 nova_compute[162974]: 2025-10-09 09:55:54.799 2 DEBUG nova.objects.instance [None req-7d5aab9b-4357-46e8-9828-e19d56601951 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Lazy-loading 'pci_devices' on Instance uuid 27831bd3-a756-4807-b9da-7be12d549265 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  9 09:55:54 compute-1 nova_compute[162974]: 2025-10-09 09:55:54.809 2 DEBUG nova.virt.libvirt.driver [None req-7d5aab9b-4357-46e8-9828-e19d56601951 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: 27831bd3-a756-4807-b9da-7be12d549265] End _get_guest_xml xml=<domain type="kvm">
Oct  9 09:55:54 compute-1 nova_compute[162974]:  <uuid>27831bd3-a756-4807-b9da-7be12d549265</uuid>
Oct  9 09:55:54 compute-1 nova_compute[162974]:  <name>instance-00000002</name>
Oct  9 09:55:54 compute-1 nova_compute[162974]:  <memory>131072</memory>
Oct  9 09:55:54 compute-1 nova_compute[162974]:  <vcpu>1</vcpu>
Oct  9 09:55:54 compute-1 nova_compute[162974]:  <metadata>
Oct  9 09:55:54 compute-1 nova_compute[162974]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  9 09:55:54 compute-1 nova_compute[162974]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  9 09:55:54 compute-1 nova_compute[162974]:      <nova:name>tempest-TestNetworkBasicOps-server-452258331</nova:name>
Oct  9 09:55:54 compute-1 nova_compute[162974]:      <nova:creationTime>2025-10-09 09:55:54</nova:creationTime>
Oct  9 09:55:54 compute-1 nova_compute[162974]:      <nova:flavor name="m1.nano">
Oct  9 09:55:54 compute-1 nova_compute[162974]:        <nova:memory>128</nova:memory>
Oct  9 09:55:54 compute-1 nova_compute[162974]:        <nova:disk>1</nova:disk>
Oct  9 09:55:54 compute-1 nova_compute[162974]:        <nova:swap>0</nova:swap>
Oct  9 09:55:54 compute-1 nova_compute[162974]:        <nova:ephemeral>0</nova:ephemeral>
Oct  9 09:55:54 compute-1 nova_compute[162974]:        <nova:vcpus>1</nova:vcpus>
Oct  9 09:55:54 compute-1 nova_compute[162974]:      </nova:flavor>
Oct  9 09:55:54 compute-1 nova_compute[162974]:      <nova:owner>
Oct  9 09:55:54 compute-1 nova_compute[162974]:        <nova:user uuid="2351e05157514d1995a1ea4151d12fee">tempest-TestNetworkBasicOps-74406332-project-member</nova:user>
Oct  9 09:55:54 compute-1 nova_compute[162974]:        <nova:project uuid="c69d102fb5504f48809f5fc47f1cb831">tempest-TestNetworkBasicOps-74406332</nova:project>
Oct  9 09:55:54 compute-1 nova_compute[162974]:      </nova:owner>
Oct  9 09:55:54 compute-1 nova_compute[162974]:      <nova:root type="image" uuid="9546778e-959c-466e-9bef-81ace5bd1cc5"/>
Oct  9 09:55:54 compute-1 nova_compute[162974]:      <nova:ports>
Oct  9 09:55:54 compute-1 nova_compute[162974]:        <nova:port uuid="89605073-2c16-4e83-a34b-96c0ad203677">
Oct  9 09:55:54 compute-1 nova_compute[162974]:          <nova:ip type="fixed" address="10.100.0.29" ipVersion="4"/>
Oct  9 09:55:54 compute-1 nova_compute[162974]:        </nova:port>
Oct  9 09:55:54 compute-1 nova_compute[162974]:      </nova:ports>
Oct  9 09:55:54 compute-1 nova_compute[162974]:    </nova:instance>
Oct  9 09:55:54 compute-1 nova_compute[162974]:  </metadata>
Oct  9 09:55:54 compute-1 nova_compute[162974]:  <sysinfo type="smbios">
Oct  9 09:55:54 compute-1 nova_compute[162974]:    <system>
Oct  9 09:55:54 compute-1 nova_compute[162974]:      <entry name="manufacturer">RDO</entry>
Oct  9 09:55:54 compute-1 nova_compute[162974]:      <entry name="product">OpenStack Compute</entry>
Oct  9 09:55:54 compute-1 nova_compute[162974]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  9 09:55:54 compute-1 nova_compute[162974]:      <entry name="serial">27831bd3-a756-4807-b9da-7be12d549265</entry>
Oct  9 09:55:54 compute-1 nova_compute[162974]:      <entry name="uuid">27831bd3-a756-4807-b9da-7be12d549265</entry>
Oct  9 09:55:54 compute-1 nova_compute[162974]:      <entry name="family">Virtual Machine</entry>
Oct  9 09:55:54 compute-1 nova_compute[162974]:    </system>
Oct  9 09:55:54 compute-1 nova_compute[162974]:  </sysinfo>
Oct  9 09:55:54 compute-1 nova_compute[162974]:  <os>
Oct  9 09:55:54 compute-1 nova_compute[162974]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  9 09:55:54 compute-1 nova_compute[162974]:    <boot dev="hd"/>
Oct  9 09:55:54 compute-1 nova_compute[162974]:    <smbios mode="sysinfo"/>
Oct  9 09:55:54 compute-1 nova_compute[162974]:  </os>
Oct  9 09:55:54 compute-1 nova_compute[162974]:  <features>
Oct  9 09:55:54 compute-1 nova_compute[162974]:    <acpi/>
Oct  9 09:55:54 compute-1 nova_compute[162974]:    <apic/>
Oct  9 09:55:54 compute-1 nova_compute[162974]:    <vmcoreinfo/>
Oct  9 09:55:54 compute-1 nova_compute[162974]:  </features>
Oct  9 09:55:54 compute-1 nova_compute[162974]:  <clock offset="utc">
Oct  9 09:55:54 compute-1 nova_compute[162974]:    <timer name="pit" tickpolicy="delay"/>
Oct  9 09:55:54 compute-1 nova_compute[162974]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  9 09:55:54 compute-1 nova_compute[162974]:    <timer name="hpet" present="no"/>
Oct  9 09:55:54 compute-1 nova_compute[162974]:  </clock>
Oct  9 09:55:54 compute-1 nova_compute[162974]:  <cpu mode="host-model" match="exact">
Oct  9 09:55:54 compute-1 nova_compute[162974]:    <topology sockets="1" cores="1" threads="1"/>
Oct  9 09:55:54 compute-1 nova_compute[162974]:  </cpu>
Oct  9 09:55:54 compute-1 nova_compute[162974]:  <devices>
Oct  9 09:55:54 compute-1 nova_compute[162974]:    <disk type="network" device="disk">
Oct  9 09:55:54 compute-1 nova_compute[162974]:      <driver type="raw" cache="none"/>
Oct  9 09:55:54 compute-1 nova_compute[162974]:      <source protocol="rbd" name="vms/27831bd3-a756-4807-b9da-7be12d549265_disk">
Oct  9 09:55:54 compute-1 nova_compute[162974]:        <host name="192.168.122.100" port="6789"/>
Oct  9 09:55:54 compute-1 nova_compute[162974]:        <host name="192.168.122.102" port="6789"/>
Oct  9 09:55:54 compute-1 nova_compute[162974]:        <host name="192.168.122.101" port="6789"/>
Oct  9 09:55:54 compute-1 nova_compute[162974]:      </source>
Oct  9 09:55:54 compute-1 nova_compute[162974]:      <auth username="openstack">
Oct  9 09:55:54 compute-1 nova_compute[162974]:        <secret type="ceph" uuid="286f8bf0-da72-5823-9a4e-ac4457d9e609"/>
Oct  9 09:55:54 compute-1 nova_compute[162974]:      </auth>
Oct  9 09:55:54 compute-1 nova_compute[162974]:      <target dev="vda" bus="virtio"/>
Oct  9 09:55:54 compute-1 nova_compute[162974]:    </disk>
Oct  9 09:55:54 compute-1 nova_compute[162974]:    <disk type="network" device="cdrom">
Oct  9 09:55:54 compute-1 nova_compute[162974]:      <driver type="raw" cache="none"/>
Oct  9 09:55:54 compute-1 nova_compute[162974]:      <source protocol="rbd" name="vms/27831bd3-a756-4807-b9da-7be12d549265_disk.config">
Oct  9 09:55:54 compute-1 nova_compute[162974]:        <host name="192.168.122.100" port="6789"/>
Oct  9 09:55:54 compute-1 nova_compute[162974]:        <host name="192.168.122.102" port="6789"/>
Oct  9 09:55:54 compute-1 nova_compute[162974]:        <host name="192.168.122.101" port="6789"/>
Oct  9 09:55:54 compute-1 nova_compute[162974]:      </source>
Oct  9 09:55:54 compute-1 nova_compute[162974]:      <auth username="openstack">
Oct  9 09:55:54 compute-1 nova_compute[162974]:        <secret type="ceph" uuid="286f8bf0-da72-5823-9a4e-ac4457d9e609"/>
Oct  9 09:55:54 compute-1 nova_compute[162974]:      </auth>
Oct  9 09:55:54 compute-1 nova_compute[162974]:      <target dev="sda" bus="sata"/>
Oct  9 09:55:54 compute-1 nova_compute[162974]:    </disk>
Oct  9 09:55:54 compute-1 nova_compute[162974]:    <interface type="ethernet">
Oct  9 09:55:54 compute-1 nova_compute[162974]:      <mac address="fa:16:3e:d8:82:c8"/>
Oct  9 09:55:54 compute-1 nova_compute[162974]:      <model type="virtio"/>
Oct  9 09:55:54 compute-1 nova_compute[162974]:      <driver name="vhost" rx_queue_size="512"/>
Oct  9 09:55:54 compute-1 nova_compute[162974]:      <mtu size="1442"/>
Oct  9 09:55:54 compute-1 nova_compute[162974]:      <target dev="tap89605073-2c"/>
Oct  9 09:55:54 compute-1 nova_compute[162974]:    </interface>
Oct  9 09:55:54 compute-1 nova_compute[162974]:    <serial type="pty">
Oct  9 09:55:54 compute-1 nova_compute[162974]:      <log file="/var/lib/nova/instances/27831bd3-a756-4807-b9da-7be12d549265/console.log" append="off"/>
Oct  9 09:55:54 compute-1 nova_compute[162974]:    </serial>
Oct  9 09:55:54 compute-1 nova_compute[162974]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  9 09:55:54 compute-1 nova_compute[162974]:    <video>
Oct  9 09:55:54 compute-1 nova_compute[162974]:      <model type="virtio"/>
Oct  9 09:55:54 compute-1 nova_compute[162974]:    </video>
Oct  9 09:55:54 compute-1 nova_compute[162974]:    <input type="tablet" bus="usb"/>
Oct  9 09:55:54 compute-1 nova_compute[162974]:    <rng model="virtio">
Oct  9 09:55:54 compute-1 nova_compute[162974]:      <backend model="random">/dev/urandom</backend>
Oct  9 09:55:54 compute-1 nova_compute[162974]:    </rng>
Oct  9 09:55:54 compute-1 nova_compute[162974]:    <controller type="pci" model="pcie-root"/>
Oct  9 09:55:54 compute-1 nova_compute[162974]:    <controller type="pci" model="pcie-root-port"/>
Oct  9 09:55:54 compute-1 nova_compute[162974]:    <controller type="pci" model="pcie-root-port"/>
Oct  9 09:55:54 compute-1 nova_compute[162974]:    <controller type="pci" model="pcie-root-port"/>
Oct  9 09:55:54 compute-1 nova_compute[162974]:    <controller type="pci" model="pcie-root-port"/>
Oct  9 09:55:54 compute-1 nova_compute[162974]:    <controller type="pci" model="pcie-root-port"/>
Oct  9 09:55:54 compute-1 nova_compute[162974]:    <controller type="pci" model="pcie-root-port"/>
Oct  9 09:55:54 compute-1 nova_compute[162974]:    <controller type="pci" model="pcie-root-port"/>
Oct  9 09:55:54 compute-1 nova_compute[162974]:    <controller type="pci" model="pcie-root-port"/>
Oct  9 09:55:54 compute-1 nova_compute[162974]:    <controller type="pci" model="pcie-root-port"/>
Oct  9 09:55:54 compute-1 nova_compute[162974]:    <controller type="pci" model="pcie-root-port"/>
Oct  9 09:55:54 compute-1 nova_compute[162974]:    <controller type="pci" model="pcie-root-port"/>
Oct  9 09:55:54 compute-1 nova_compute[162974]:    <controller type="pci" model="pcie-root-port"/>
Oct  9 09:55:54 compute-1 nova_compute[162974]:    <controller type="pci" model="pcie-root-port"/>
Oct  9 09:55:54 compute-1 nova_compute[162974]:    <controller type="pci" model="pcie-root-port"/>
Oct  9 09:55:54 compute-1 nova_compute[162974]:    <controller type="pci" model="pcie-root-port"/>
Oct  9 09:55:54 compute-1 nova_compute[162974]:    <controller type="pci" model="pcie-root-port"/>
Oct  9 09:55:54 compute-1 nova_compute[162974]:    <controller type="pci" model="pcie-root-port"/>
Oct  9 09:55:54 compute-1 nova_compute[162974]:    <controller type="pci" model="pcie-root-port"/>
Oct  9 09:55:54 compute-1 nova_compute[162974]:    <controller type="pci" model="pcie-root-port"/>
Oct  9 09:55:54 compute-1 nova_compute[162974]:    <controller type="pci" model="pcie-root-port"/>
Oct  9 09:55:54 compute-1 nova_compute[162974]:    <controller type="pci" model="pcie-root-port"/>
Oct  9 09:55:54 compute-1 nova_compute[162974]:    <controller type="pci" model="pcie-root-port"/>
Oct  9 09:55:54 compute-1 nova_compute[162974]:    <controller type="pci" model="pcie-root-port"/>
Oct  9 09:55:54 compute-1 nova_compute[162974]:    <controller type="pci" model="pcie-root-port"/>
Oct  9 09:55:54 compute-1 nova_compute[162974]:    <controller type="usb" index="0"/>
Oct  9 09:55:54 compute-1 nova_compute[162974]:    <memballoon model="virtio">
Oct  9 09:55:54 compute-1 nova_compute[162974]:      <stats period="10"/>
Oct  9 09:55:54 compute-1 nova_compute[162974]:    </memballoon>
Oct  9 09:55:54 compute-1 nova_compute[162974]:  </devices>
Oct  9 09:55:54 compute-1 nova_compute[162974]: </domain>
Oct  9 09:55:54 compute-1 nova_compute[162974]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  9 09:55:54 compute-1 nova_compute[162974]: 2025-10-09 09:55:54.810 2 DEBUG nova.compute.manager [None req-7d5aab9b-4357-46e8-9828-e19d56601951 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: 27831bd3-a756-4807-b9da-7be12d549265] Preparing to wait for external event network-vif-plugged-89605073-2c16-4e83-a34b-96c0ad203677 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct  9 09:55:54 compute-1 nova_compute[162974]: 2025-10-09 09:55:54.810 2 DEBUG oslo_concurrency.lockutils [None req-7d5aab9b-4357-46e8-9828-e19d56601951 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Acquiring lock "27831bd3-a756-4807-b9da-7be12d549265-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  9 09:55:54 compute-1 nova_compute[162974]: 2025-10-09 09:55:54.810 2 DEBUG oslo_concurrency.lockutils [None req-7d5aab9b-4357-46e8-9828-e19d56601951 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Lock "27831bd3-a756-4807-b9da-7be12d549265-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  9 09:55:54 compute-1 nova_compute[162974]: 2025-10-09 09:55:54.810 2 DEBUG oslo_concurrency.lockutils [None req-7d5aab9b-4357-46e8-9828-e19d56601951 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Lock "27831bd3-a756-4807-b9da-7be12d549265-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  9 09:55:54 compute-1 nova_compute[162974]: 2025-10-09 09:55:54.811 2 DEBUG nova.virt.libvirt.vif [None req-7d5aab9b-4357-46e8-9828-e19d56601951 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-09T09:55:48Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-452258331',display_name='tempest-TestNetworkBasicOps-server-452258331',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-452258331',id=2,image_ref='9546778e-959c-466e-9bef-81ace5bd1cc5',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBODLXk0rzffmHcNbUDYGLfUDc9LvP6gD0Cl2kTpN0VYCCLdQjTmH7i6AAWYqub8jT4Jlgu+DRbDcF0CjszX7mILwKGtZArFBrJ9e1Ud75exDORK7fEHNnUEihiwx6WpTPg==',key_name='tempest-TestNetworkBasicOps-68447822',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='c69d102fb5504f48809f5fc47f1cb831',ramdisk_id='',reservation_id='r-g89vp4u8',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='9546778e-959c-466e-9bef-81ace5bd1cc5',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-74406332',owner_user_name='tempest-TestNetworkBasicOps-74406332-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-09T09:55:49Z,user_data=None,user_id='2351e05157514d1995a1ea4151d12fee',uuid=27831bd3-a756-4807-b9da-7be12d549265,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "89605073-2c16-4e83-a34b-96c0ad203677", "address": "fa:16:3e:d8:82:c8", "network": {"id": "ca25ffbc-c518-421a-acbc-33327ba74e5f", "bridge": "br-int", "label": "tempest-network-smoke--826907634", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": "10.100.0.17", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.29", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "c69d102fb5504f48809f5fc47f1cb831", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap89605073-2c", "ovs_interfaceid": "89605073-2c16-4e83-a34b-96c0ad203677", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct  9 09:55:54 compute-1 nova_compute[162974]: 2025-10-09 09:55:54.811 2 DEBUG nova.network.os_vif_util [None req-7d5aab9b-4357-46e8-9828-e19d56601951 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Converting VIF {"id": "89605073-2c16-4e83-a34b-96c0ad203677", "address": "fa:16:3e:d8:82:c8", "network": {"id": "ca25ffbc-c518-421a-acbc-33327ba74e5f", "bridge": "br-int", "label": "tempest-network-smoke--826907634", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": "10.100.0.17", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.29", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c69d102fb5504f48809f5fc47f1cb831", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap89605073-2c", "ovs_interfaceid": "89605073-2c16-4e83-a34b-96c0ad203677", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct  9 09:55:54 compute-1 nova_compute[162974]: 2025-10-09 09:55:54.811 2 DEBUG nova.network.os_vif_util [None req-7d5aab9b-4357-46e8-9828-e19d56601951 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d8:82:c8,bridge_name='br-int',has_traffic_filtering=True,id=89605073-2c16-4e83-a34b-96c0ad203677,network=Network(ca25ffbc-c518-421a-acbc-33327ba74e5f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap89605073-2c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct  9 09:55:54 compute-1 nova_compute[162974]: 2025-10-09 09:55:54.812 2 DEBUG os_vif [None req-7d5aab9b-4357-46e8-9828-e19d56601951 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:d8:82:c8,bridge_name='br-int',has_traffic_filtering=True,id=89605073-2c16-4e83-a34b-96c0ad203677,network=Network(ca25ffbc-c518-421a-acbc-33327ba74e5f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap89605073-2c') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct  9 09:55:54 compute-1 nova_compute[162974]: 2025-10-09 09:55:54.842 2 DEBUG ovsdbapp.backend.ovs_idl [None req-7d5aab9b-4357-46e8-9828-e19d56601951 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Created schema index Interface.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Oct  9 09:55:54 compute-1 nova_compute[162974]: 2025-10-09 09:55:54.843 2 DEBUG ovsdbapp.backend.ovs_idl [None req-7d5aab9b-4357-46e8-9828-e19d56601951 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Created schema index Port.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Oct  9 09:55:54 compute-1 nova_compute[162974]: 2025-10-09 09:55:54.843 2 DEBUG ovsdbapp.backend.ovs_idl [None req-7d5aab9b-4357-46e8-9828-e19d56601951 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Created schema index Bridge.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Oct  9 09:55:54 compute-1 nova_compute[162974]: 2025-10-09 09:55:54.843 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-7d5aab9b-4357-46e8-9828-e19d56601951 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] tcp:127.0.0.1:6640: entering CONNECTING _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Oct  9 09:55:54 compute-1 nova_compute[162974]: 2025-10-09 09:55:54.844 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-7d5aab9b-4357-46e8-9828-e19d56601951 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [POLLOUT] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  9 09:55:54 compute-1 nova_compute[162974]: 2025-10-09 09:55:54.844 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-7d5aab9b-4357-46e8-9828-e19d56601951 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Oct  9 09:55:54 compute-1 nova_compute[162974]: 2025-10-09 09:55:54.845 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-7d5aab9b-4357-46e8-9828-e19d56601951 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  9 09:55:54 compute-1 nova_compute[162974]: 2025-10-09 09:55:54.846 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-7d5aab9b-4357-46e8-9828-e19d56601951 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  9 09:55:54 compute-1 nova_compute[162974]: 2025-10-09 09:55:54.848 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-7d5aab9b-4357-46e8-9828-e19d56601951 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  9 09:55:54 compute-1 nova_compute[162974]: 2025-10-09 09:55:54.855 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  9 09:55:54 compute-1 nova_compute[162974]: 2025-10-09 09:55:54.855 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct  9 09:55:54 compute-1 nova_compute[162974]: 2025-10-09 09:55:54.856 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct  9 09:55:54 compute-1 nova_compute[162974]: 2025-10-09 09:55:54.857 2 INFO oslo.privsep.daemon [None req-7d5aab9b-4357-46e8-9828-e19d56601951 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Running privsep helper: ['sudo', 'nova-rootwrap', '/etc/nova/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/nova/nova.conf', '--config-file', '/etc/nova/nova-compute.conf', '--config-dir', '/etc/nova/nova.conf.d', '--privsep_context', 'vif_plug_ovs.privsep.vif_plug', '--privsep_sock_path', '/tmp/tmp8ct_gpgp/privsep.sock']
Oct  9 09:55:55 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:55:55.201 71059 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=1479fb1d-afaa-427a-bdce-40294d3573d2, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '4'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct  9 09:55:55 compute-1 nova_compute[162974]: 2025-10-09 09:55:55.387 2 INFO oslo.privsep.daemon [None req-7d5aab9b-4357-46e8-9828-e19d56601951 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Spawned new privsep daemon via rootwrap
Oct  9 09:55:55 compute-1 nova_compute[162974]: 2025-10-09 09:55:55.309 564 INFO oslo.privsep.daemon [-] privsep daemon starting
Oct  9 09:55:55 compute-1 nova_compute[162974]: 2025-10-09 09:55:55.313 564 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Oct  9 09:55:55 compute-1 nova_compute[162974]: 2025-10-09 09:55:55.314 564 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_DAC_OVERRIDE|CAP_NET_ADMIN/CAP_DAC_OVERRIDE|CAP_NET_ADMIN/none
Oct  9 09:55:55 compute-1 nova_compute[162974]: 2025-10-09 09:55:55.315 564 INFO oslo.privsep.daemon [-] privsep daemon running as pid 564
Oct  9 09:55:55 compute-1 nova_compute[162974]: 2025-10-09 09:55:55.637 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  9 09:55:55 compute-1 nova_compute[162974]: 2025-10-09 09:55:55.638 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap89605073-2c, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct  9 09:55:55 compute-1 nova_compute[162974]: 2025-10-09 09:55:55.638 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap89605073-2c, col_values=(('external_ids', {'iface-id': '89605073-2c16-4e83-a34b-96c0ad203677', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:d8:82:c8', 'vm-uuid': '27831bd3-a756-4807-b9da-7be12d549265'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct  9 09:55:55 compute-1 nova_compute[162974]: 2025-10-09 09:55:55.640 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  9 09:55:55 compute-1 NetworkManager[982]: <info>  [1760003755.6413] manager: (tap89605073-2c): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/25)
Oct  9 09:55:55 compute-1 nova_compute[162974]: 2025-10-09 09:55:55.643 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct  9 09:55:55 compute-1 nova_compute[162974]: 2025-10-09 09:55:55.645 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  9 09:55:55 compute-1 nova_compute[162974]: 2025-10-09 09:55:55.646 2 INFO os_vif [None req-7d5aab9b-4357-46e8-9828-e19d56601951 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:d8:82:c8,bridge_name='br-int',has_traffic_filtering=True,id=89605073-2c16-4e83-a34b-96c0ad203677,network=Network(ca25ffbc-c518-421a-acbc-33327ba74e5f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap89605073-2c')
Oct  9 09:55:55 compute-1 nova_compute[162974]: 2025-10-09 09:55:55.676 2 DEBUG nova.virt.libvirt.driver [None req-7d5aab9b-4357-46e8-9828-e19d56601951 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct  9 09:55:55 compute-1 nova_compute[162974]: 2025-10-09 09:55:55.677 2 DEBUG nova.virt.libvirt.driver [None req-7d5aab9b-4357-46e8-9828-e19d56601951 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct  9 09:55:55 compute-1 nova_compute[162974]: 2025-10-09 09:55:55.677 2 DEBUG nova.virt.libvirt.driver [None req-7d5aab9b-4357-46e8-9828-e19d56601951 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] No VIF found with MAC fa:16:3e:d8:82:c8, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct  9 09:55:55 compute-1 nova_compute[162974]: 2025-10-09 09:55:55.677 2 INFO nova.virt.libvirt.driver [None req-7d5aab9b-4357-46e8-9828-e19d56601951 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: 27831bd3-a756-4807-b9da-7be12d549265] Using config drive
Oct  9 09:55:55 compute-1 nova_compute[162974]: 2025-10-09 09:55:55.696 2 DEBUG nova.storage.rbd_utils [None req-7d5aab9b-4357-46e8-9828-e19d56601951 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] rbd image 27831bd3-a756-4807-b9da-7be12d549265_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  9 09:55:55 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:55:55 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:55:55 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:55:55 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:55:55.822 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:55:56 compute-1 nova_compute[162974]: 2025-10-09 09:55:56.166 2 INFO nova.virt.libvirt.driver [None req-7d5aab9b-4357-46e8-9828-e19d56601951 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: 27831bd3-a756-4807-b9da-7be12d549265] Creating config drive at /var/lib/nova/instances/27831bd3-a756-4807-b9da-7be12d549265/disk.config
Oct  9 09:55:56 compute-1 nova_compute[162974]: 2025-10-09 09:55:56.170 2 DEBUG oslo_concurrency.processutils [None req-7d5aab9b-4357-46e8-9828-e19d56601951 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/27831bd3-a756-4807-b9da-7be12d549265/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpqwfl_n4v execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  9 09:55:56 compute-1 nova_compute[162974]: 2025-10-09 09:55:56.294 2 DEBUG oslo_concurrency.processutils [None req-7d5aab9b-4357-46e8-9828-e19d56601951 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/27831bd3-a756-4807-b9da-7be12d549265/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpqwfl_n4v" returned: 0 in 0.124s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  9 09:55:56 compute-1 nova_compute[162974]: 2025-10-09 09:55:56.320 2 DEBUG nova.storage.rbd_utils [None req-7d5aab9b-4357-46e8-9828-e19d56601951 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] rbd image 27831bd3-a756-4807-b9da-7be12d549265_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  9 09:55:56 compute-1 nova_compute[162974]: 2025-10-09 09:55:56.324 2 DEBUG oslo_concurrency.processutils [None req-7d5aab9b-4357-46e8-9828-e19d56601951 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/27831bd3-a756-4807-b9da-7be12d549265/disk.config 27831bd3-a756-4807-b9da-7be12d549265_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  9 09:55:56 compute-1 nova_compute[162974]: 2025-10-09 09:55:56.417 2 DEBUG oslo_concurrency.processutils [None req-7d5aab9b-4357-46e8-9828-e19d56601951 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/27831bd3-a756-4807-b9da-7be12d549265/disk.config 27831bd3-a756-4807-b9da-7be12d549265_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.093s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  9 09:55:56 compute-1 nova_compute[162974]: 2025-10-09 09:55:56.418 2 INFO nova.virt.libvirt.driver [None req-7d5aab9b-4357-46e8-9828-e19d56601951 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: 27831bd3-a756-4807-b9da-7be12d549265] Deleting local config drive /var/lib/nova/instances/27831bd3-a756-4807-b9da-7be12d549265/disk.config because it was imported into RBD.
Oct  9 09:55:56 compute-1 systemd[1]: Starting libvirt secret daemon...
Oct  9 09:55:56 compute-1 systemd[1]: Started libvirt secret daemon.
Oct  9 09:55:56 compute-1 kernel: tun: Universal TUN/TAP device driver, 1.6
Oct  9 09:55:56 compute-1 NetworkManager[982]: <info>  [1760003756.4946] manager: (tap89605073-2c): new Tun device (/org/freedesktop/NetworkManager/Devices/26)
Oct  9 09:55:56 compute-1 kernel: tap89605073-2c: entered promiscuous mode
Oct  9 09:55:56 compute-1 nova_compute[162974]: 2025-10-09 09:55:56.497 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  9 09:55:56 compute-1 ovn_controller[62080]: 2025-10-09T09:55:56Z|00027|binding|INFO|Claiming lport 89605073-2c16-4e83-a34b-96c0ad203677 for this chassis.
Oct  9 09:55:56 compute-1 ovn_controller[62080]: 2025-10-09T09:55:56Z|00028|binding|INFO|89605073-2c16-4e83-a34b-96c0ad203677: Claiming fa:16:3e:d8:82:c8 10.100.0.29
Oct  9 09:55:56 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:55:56.504 71059 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d8:82:c8 10.100.0.29'], port_security=['fa:16:3e:d8:82:c8 10.100.0.29'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.29/28', 'neutron:device_id': '27831bd3-a756-4807-b9da-7be12d549265', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ca25ffbc-c518-421a-acbc-33327ba74e5f', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c69d102fb5504f48809f5fc47f1cb831', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'ff7a1970-9c22-4d50-af6e-95dd0d807999', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e2077437-af43-496a-b32b-28fd39fcc898, chassis=[<ovs.db.idl.Row object at 0x7fcc797b4850>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcc797b4850>], logical_port=89605073-2c16-4e83-a34b-96c0ad203677) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct  9 09:55:56 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:55:56.505 71059 INFO neutron.agent.ovn.metadata.agent [-] Port 89605073-2c16-4e83-a34b-96c0ad203677 in datapath ca25ffbc-c518-421a-acbc-33327ba74e5f bound to our chassis
Oct  9 09:55:56 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:55:56.506 71059 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network ca25ffbc-c518-421a-acbc-33327ba74e5f
Oct  9 09:55:56 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:55:56.507 71059 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/neutron/neutron.conf', '--config-dir', '/etc/neutron.conf.d', '--privsep_context', 'neutron.privileged.default', '--privsep_sock_path', '/tmp/tmpyq1hdpfk/privsep.sock']
Oct  9 09:55:56 compute-1 systemd-udevd[165617]: Network interface NamePolicy= disabled on kernel command line.
Oct  9 09:55:56 compute-1 systemd-machined[120683]: New machine qemu-1-instance-00000002.
Oct  9 09:55:56 compute-1 NetworkManager[982]: <info>  [1760003756.5446] device (tap89605073-2c): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  9 09:55:56 compute-1 NetworkManager[982]: <info>  [1760003756.5455] device (tap89605073-2c): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  9 09:55:56 compute-1 systemd[1]: Started Virtual Machine qemu-1-instance-00000002.
Oct  9 09:55:56 compute-1 nova_compute[162974]: 2025-10-09 09:55:56.553 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  9 09:55:56 compute-1 ovn_controller[62080]: 2025-10-09T09:55:56Z|00029|binding|INFO|Setting lport 89605073-2c16-4e83-a34b-96c0ad203677 ovn-installed in OVS
Oct  9 09:55:56 compute-1 ovn_controller[62080]: 2025-10-09T09:55:56Z|00030|binding|INFO|Setting lport 89605073-2c16-4e83-a34b-96c0ad203677 up in Southbound
Oct  9 09:55:56 compute-1 nova_compute[162974]: 2025-10-09 09:55:56.560 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  9 09:55:56 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:55:56 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 09:55:56 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:55:56.653 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 09:55:56 compute-1 nova_compute[162974]: 2025-10-09 09:55:56.970 2 DEBUG nova.network.neutron [req-66cb1aac-c008-4e08-accc-efe0d0d56577 req-a9f4496a-8348-4b24-8b35-e8e76f8356ac b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: 27831bd3-a756-4807-b9da-7be12d549265] Updated VIF entry in instance network info cache for port 89605073-2c16-4e83-a34b-96c0ad203677. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct  9 09:55:56 compute-1 nova_compute[162974]: 2025-10-09 09:55:56.971 2 DEBUG nova.network.neutron [req-66cb1aac-c008-4e08-accc-efe0d0d56577 req-a9f4496a-8348-4b24-8b35-e8e76f8356ac b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: 27831bd3-a756-4807-b9da-7be12d549265] Updating instance_info_cache with network_info: [{"id": "89605073-2c16-4e83-a34b-96c0ad203677", "address": "fa:16:3e:d8:82:c8", "network": {"id": "ca25ffbc-c518-421a-acbc-33327ba74e5f", "bridge": "br-int", "label": "tempest-network-smoke--826907634", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": "10.100.0.17", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.29", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c69d102fb5504f48809f5fc47f1cb831", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap89605073-2c", "ovs_interfaceid": "89605073-2c16-4e83-a34b-96c0ad203677", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct  9 09:55:56 compute-1 nova_compute[162974]: 2025-10-09 09:55:56.984 2 DEBUG oslo_concurrency.lockutils [req-66cb1aac-c008-4e08-accc-efe0d0d56577 req-a9f4496a-8348-4b24-8b35-e8e76f8356ac b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] Releasing lock "refresh_cache-27831bd3-a756-4807-b9da-7be12d549265" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct  9 09:55:57 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:55:57.047 71059 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap
Oct  9 09:55:57 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:55:57.048 71059 DEBUG oslo.privsep.daemon [-] Accepted privsep connection to /tmp/tmpyq1hdpfk/privsep.sock __init__ /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:362
Oct  9 09:55:57 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:55:56.970 165637 INFO oslo.privsep.daemon [-] privsep daemon starting
Oct  9 09:55:57 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:55:56.973 165637 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Oct  9 09:55:57 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:55:56.975 165637 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_NET_ADMIN|CAP_SYS_ADMIN|CAP_SYS_PTRACE/CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_NET_ADMIN|CAP_SYS_ADMIN|CAP_SYS_PTRACE/none
Oct  9 09:55:57 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:55:56.975 165637 INFO oslo.privsep.daemon [-] privsep daemon running as pid 165637
Oct  9 09:55:57 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:55:57.051 165637 DEBUG oslo.privsep.daemon [-] privsep: reply[b608eb11-924c-4f67-9cd4-ca63053b0b6b]: (2,) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct  9 09:55:57 compute-1 nova_compute[162974]: 2025-10-09 09:55:57.084 2 DEBUG nova.compute.manager [req-ec3807ff-860d-4080-b0f9-fe2fa1a61221 req-e79b9d21-041e-47bd-9fc6-ea952f14cf04 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: 27831bd3-a756-4807-b9da-7be12d549265] Received event network-vif-plugged-89605073-2c16-4e83-a34b-96c0ad203677 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct  9 09:55:57 compute-1 nova_compute[162974]: 2025-10-09 09:55:57.085 2 DEBUG oslo_concurrency.lockutils [req-ec3807ff-860d-4080-b0f9-fe2fa1a61221 req-e79b9d21-041e-47bd-9fc6-ea952f14cf04 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] Acquiring lock "27831bd3-a756-4807-b9da-7be12d549265-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  9 09:55:57 compute-1 nova_compute[162974]: 2025-10-09 09:55:57.085 2 DEBUG oslo_concurrency.lockutils [req-ec3807ff-860d-4080-b0f9-fe2fa1a61221 req-e79b9d21-041e-47bd-9fc6-ea952f14cf04 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] Lock "27831bd3-a756-4807-b9da-7be12d549265-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  9 09:55:57 compute-1 nova_compute[162974]: 2025-10-09 09:55:57.085 2 DEBUG oslo_concurrency.lockutils [req-ec3807ff-860d-4080-b0f9-fe2fa1a61221 req-e79b9d21-041e-47bd-9fc6-ea952f14cf04 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] Lock "27831bd3-a756-4807-b9da-7be12d549265-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  9 09:55:57 compute-1 nova_compute[162974]: 2025-10-09 09:55:57.086 2 DEBUG nova.compute.manager [req-ec3807ff-860d-4080-b0f9-fe2fa1a61221 req-e79b9d21-041e-47bd-9fc6-ea952f14cf04 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: 27831bd3-a756-4807-b9da-7be12d549265] Processing event network-vif-plugged-89605073-2c16-4e83-a34b-96c0ad203677 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct  9 09:55:57 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:55:57.512 165637 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "context-manager" by "neutron_lib.db.api._create_context_manager" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  9 09:55:57 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:55:57.513 165637 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" acquired by "neutron_lib.db.api._create_context_manager" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  9 09:55:57 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:55:57.513 165637 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" "released" by "neutron_lib.db.api._create_context_manager" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  9 09:55:57 compute-1 nova_compute[162974]: 2025-10-09 09:55:57.659 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  9 09:55:57 compute-1 nova_compute[162974]: 2025-10-09 09:55:57.734 2 DEBUG nova.virt.driver [None req-433ad811-a058-4508-a429-c5f40f506b5b - - - - - -] Emitting event <LifecycleEvent: 1760003757.7333243, 27831bd3-a756-4807-b9da-7be12d549265 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct  9 09:55:57 compute-1 nova_compute[162974]: 2025-10-09 09:55:57.734 2 INFO nova.compute.manager [None req-433ad811-a058-4508-a429-c5f40f506b5b - - - - - -] [instance: 27831bd3-a756-4807-b9da-7be12d549265] VM Started (Lifecycle Event)
Oct  9 09:55:57 compute-1 nova_compute[162974]: 2025-10-09 09:55:57.736 2 DEBUG nova.compute.manager [None req-7d5aab9b-4357-46e8-9828-e19d56601951 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: 27831bd3-a756-4807-b9da-7be12d549265] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct  9 09:55:57 compute-1 nova_compute[162974]: 2025-10-09 09:55:57.746 2 DEBUG nova.virt.libvirt.driver [None req-7d5aab9b-4357-46e8-9828-e19d56601951 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: 27831bd3-a756-4807-b9da-7be12d549265] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct  9 09:55:57 compute-1 nova_compute[162974]: 2025-10-09 09:55:57.750 2 INFO nova.virt.libvirt.driver [-] [instance: 27831bd3-a756-4807-b9da-7be12d549265] Instance spawned successfully.
Oct  9 09:55:57 compute-1 nova_compute[162974]: 2025-10-09 09:55:57.750 2 DEBUG nova.virt.libvirt.driver [None req-7d5aab9b-4357-46e8-9828-e19d56601951 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: 27831bd3-a756-4807-b9da-7be12d549265] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct  9 09:55:57 compute-1 nova_compute[162974]: 2025-10-09 09:55:57.764 2 DEBUG nova.compute.manager [None req-433ad811-a058-4508-a429-c5f40f506b5b - - - - - -] [instance: 27831bd3-a756-4807-b9da-7be12d549265] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct  9 09:55:57 compute-1 nova_compute[162974]: 2025-10-09 09:55:57.769 2 DEBUG nova.compute.manager [None req-433ad811-a058-4508-a429-c5f40f506b5b - - - - - -] [instance: 27831bd3-a756-4807-b9da-7be12d549265] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct  9 09:55:57 compute-1 nova_compute[162974]: 2025-10-09 09:55:57.775 2 DEBUG nova.virt.libvirt.driver [None req-7d5aab9b-4357-46e8-9828-e19d56601951 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: 27831bd3-a756-4807-b9da-7be12d549265] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct  9 09:55:57 compute-1 nova_compute[162974]: 2025-10-09 09:55:57.775 2 DEBUG nova.virt.libvirt.driver [None req-7d5aab9b-4357-46e8-9828-e19d56601951 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: 27831bd3-a756-4807-b9da-7be12d549265] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct  9 09:55:57 compute-1 nova_compute[162974]: 2025-10-09 09:55:57.776 2 DEBUG nova.virt.libvirt.driver [None req-7d5aab9b-4357-46e8-9828-e19d56601951 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: 27831bd3-a756-4807-b9da-7be12d549265] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct  9 09:55:57 compute-1 nova_compute[162974]: 2025-10-09 09:55:57.776 2 DEBUG nova.virt.libvirt.driver [None req-7d5aab9b-4357-46e8-9828-e19d56601951 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: 27831bd3-a756-4807-b9da-7be12d549265] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  9 09:55:57 compute-1 nova_compute[162974]: 2025-10-09 09:55:57.777 2 DEBUG nova.virt.libvirt.driver [None req-7d5aab9b-4357-46e8-9828-e19d56601951 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: 27831bd3-a756-4807-b9da-7be12d549265] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  9 09:55:57 compute-1 nova_compute[162974]: 2025-10-09 09:55:57.777 2 DEBUG nova.virt.libvirt.driver [None req-7d5aab9b-4357-46e8-9828-e19d56601951 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: 27831bd3-a756-4807-b9da-7be12d549265] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  9 09:55:57 compute-1 nova_compute[162974]: 2025-10-09 09:55:57.783 2 INFO nova.compute.manager [None req-433ad811-a058-4508-a429-c5f40f506b5b - - - - - -] [instance: 27831bd3-a756-4807-b9da-7be12d549265] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  9 09:55:57 compute-1 nova_compute[162974]: 2025-10-09 09:55:57.784 2 DEBUG nova.virt.driver [None req-433ad811-a058-4508-a429-c5f40f506b5b - - - - - -] Emitting event <LifecycleEvent: 1760003757.733423, 27831bd3-a756-4807-b9da-7be12d549265 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  9 09:55:57 compute-1 nova_compute[162974]: 2025-10-09 09:55:57.784 2 INFO nova.compute.manager [None req-433ad811-a058-4508-a429-c5f40f506b5b - - - - - -] [instance: 27831bd3-a756-4807-b9da-7be12d549265] VM Paused (Lifecycle Event)#033[00m
Oct  9 09:55:57 compute-1 nova_compute[162974]: 2025-10-09 09:55:57.801 2 DEBUG nova.compute.manager [None req-433ad811-a058-4508-a429-c5f40f506b5b - - - - - -] [instance: 27831bd3-a756-4807-b9da-7be12d549265] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  9 09:55:57 compute-1 nova_compute[162974]: 2025-10-09 09:55:57.804 2 DEBUG nova.virt.driver [None req-433ad811-a058-4508-a429-c5f40f506b5b - - - - - -] Emitting event <LifecycleEvent: 1760003757.7395015, 27831bd3-a756-4807-b9da-7be12d549265 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  9 09:55:57 compute-1 nova_compute[162974]: 2025-10-09 09:55:57.804 2 INFO nova.compute.manager [None req-433ad811-a058-4508-a429-c5f40f506b5b - - - - - -] [instance: 27831bd3-a756-4807-b9da-7be12d549265] VM Resumed (Lifecycle Event)#033[00m
Oct  9 09:55:57 compute-1 nova_compute[162974]: 2025-10-09 09:55:57.819 2 DEBUG nova.compute.manager [None req-433ad811-a058-4508-a429-c5f40f506b5b - - - - - -] [instance: 27831bd3-a756-4807-b9da-7be12d549265] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  9 09:55:57 compute-1 nova_compute[162974]: 2025-10-09 09:55:57.821 2 DEBUG nova.compute.manager [None req-433ad811-a058-4508-a429-c5f40f506b5b - - - - - -] [instance: 27831bd3-a756-4807-b9da-7be12d549265] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  9 09:55:57 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:55:57 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:55:57 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:55:57.824 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:55:57 compute-1 nova_compute[162974]: 2025-10-09 09:55:57.827 2 INFO nova.compute.manager [None req-7d5aab9b-4357-46e8-9828-e19d56601951 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: 27831bd3-a756-4807-b9da-7be12d549265] Took 7.95 seconds to spawn the instance on the hypervisor.#033[00m
Oct  9 09:55:57 compute-1 nova_compute[162974]: 2025-10-09 09:55:57.828 2 DEBUG nova.compute.manager [None req-7d5aab9b-4357-46e8-9828-e19d56601951 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: 27831bd3-a756-4807-b9da-7be12d549265] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  9 09:55:57 compute-1 nova_compute[162974]: 2025-10-09 09:55:57.835 2 INFO nova.compute.manager [None req-433ad811-a058-4508-a429-c5f40f506b5b - - - - - -] [instance: 27831bd3-a756-4807-b9da-7be12d549265] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  9 09:55:57 compute-1 nova_compute[162974]: 2025-10-09 09:55:57.870 2 INFO nova.compute.manager [None req-7d5aab9b-4357-46e8-9828-e19d56601951 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: 27831bd3-a756-4807-b9da-7be12d549265] Took 8.79 seconds to build instance.#033[00m
Oct  9 09:55:57 compute-1 nova_compute[162974]: 2025-10-09 09:55:57.878 2 DEBUG oslo_concurrency.lockutils [None req-7d5aab9b-4357-46e8-9828-e19d56601951 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Lock "27831bd3-a756-4807-b9da-7be12d549265" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.843s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  9 09:55:58 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:55:58.117 165637 DEBUG oslo.privsep.daemon [-] privsep: reply[6038448c-c0dc-4d63-81e7-0641a4c2c7f5]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  9 09:55:58 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:55:58.118 71059 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapca25ffbc-c1 in ovnmeta-ca25ffbc-c518-421a-acbc-33327ba74e5f namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct  9 09:55:58 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:55:58.122 165637 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapca25ffbc-c0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct  9 09:55:58 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:55:58.122 165637 DEBUG oslo.privsep.daemon [-] privsep: reply[59aa6bab-e59b-4dc2-a612-2a9b6045ff45]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  9 09:55:58 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:55:58.126 165637 DEBUG oslo.privsep.daemon [-] privsep: reply[f7771151-1424-49c4-8b84-02639c631507]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  9 09:55:58 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:55:58.154 71273 DEBUG oslo.privsep.daemon [-] privsep: reply[72a0a63c-3643-421d-951a-325e0f7656bb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  9 09:55:58 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:55:58.178 165637 DEBUG oslo.privsep.daemon [-] privsep: reply[be0e10f7-6963-49f9-a487-fe7588dcab2a]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  9 09:55:58 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:55:58.181 71059 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/neutron/neutron.conf', '--config-dir', '/etc/neutron.conf.d', '--privsep_context', 'neutron.privileged.link_cmd', '--privsep_sock_path', '/tmp/tmpasadcvnl/privsep.sock']#033[00m
Oct  9 09:55:58 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:55:58 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:55:58 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:55:58.657 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:55:58 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:55:58.823 71059 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap#033[00m
Oct  9 09:55:58 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:55:58.825 71059 DEBUG oslo.privsep.daemon [-] Accepted privsep connection to /tmp/tmpasadcvnl/privsep.sock __init__ /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:362#033[00m
Oct  9 09:55:58 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:55:58.720 165694 INFO oslo.privsep.daemon [-] privsep daemon starting#033[00m
Oct  9 09:55:58 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:55:58.727 165694 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0#033[00m
Oct  9 09:55:58 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:55:58.730 165694 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_NET_ADMIN|CAP_SYS_ADMIN/CAP_NET_ADMIN|CAP_SYS_ADMIN/none#033[00m
Oct  9 09:55:58 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:55:58.731 165694 INFO oslo.privsep.daemon [-] privsep daemon running as pid 165694#033[00m
Oct  9 09:55:58 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:55:58.830 165694 DEBUG oslo.privsep.daemon [-] privsep: reply[dbc2dd00-b999-41b8-ae57-68101534b43f]: (2,) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  9 09:55:59 compute-1 nova_compute[162974]: 2025-10-09 09:55:59.149 2 DEBUG nova.compute.manager [req-93c08c18-729a-4e80-9e06-d173bbb19ee3 req-5aef1bff-64bf-4738-b597-1d7a02e4e4b1 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: 27831bd3-a756-4807-b9da-7be12d549265] Received event network-vif-plugged-89605073-2c16-4e83-a34b-96c0ad203677 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  9 09:55:59 compute-1 nova_compute[162974]: 2025-10-09 09:55:59.150 2 DEBUG oslo_concurrency.lockutils [req-93c08c18-729a-4e80-9e06-d173bbb19ee3 req-5aef1bff-64bf-4738-b597-1d7a02e4e4b1 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] Acquiring lock "27831bd3-a756-4807-b9da-7be12d549265-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  9 09:55:59 compute-1 nova_compute[162974]: 2025-10-09 09:55:59.150 2 DEBUG oslo_concurrency.lockutils [req-93c08c18-729a-4e80-9e06-d173bbb19ee3 req-5aef1bff-64bf-4738-b597-1d7a02e4e4b1 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] Lock "27831bd3-a756-4807-b9da-7be12d549265-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  9 09:55:59 compute-1 nova_compute[162974]: 2025-10-09 09:55:59.150 2 DEBUG oslo_concurrency.lockutils [req-93c08c18-729a-4e80-9e06-d173bbb19ee3 req-5aef1bff-64bf-4738-b597-1d7a02e4e4b1 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] Lock "27831bd3-a756-4807-b9da-7be12d549265-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  9 09:55:59 compute-1 nova_compute[162974]: 2025-10-09 09:55:59.150 2 DEBUG nova.compute.manager [req-93c08c18-729a-4e80-9e06-d173bbb19ee3 req-5aef1bff-64bf-4738-b597-1d7a02e4e4b1 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: 27831bd3-a756-4807-b9da-7be12d549265] No waiting events found dispatching network-vif-plugged-89605073-2c16-4e83-a34b-96c0ad203677 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  9 09:55:59 compute-1 nova_compute[162974]: 2025-10-09 09:55:59.151 2 WARNING nova.compute.manager [req-93c08c18-729a-4e80-9e06-d173bbb19ee3 req-5aef1bff-64bf-4738-b597-1d7a02e4e4b1 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: 27831bd3-a756-4807-b9da-7be12d549265] Received unexpected event network-vif-plugged-89605073-2c16-4e83-a34b-96c0ad203677 for instance with vm_state active and task_state None.#033[00m
Oct  9 09:55:59 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:55:59.323 165694 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "context-manager" by "neutron_lib.db.api._create_context_manager" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  9 09:55:59 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:55:59.323 165694 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" acquired by "neutron_lib.db.api._create_context_manager" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  9 09:55:59 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:55:59.323 165694 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" "released" by "neutron_lib.db.api._create_context_manager" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  9 09:55:59 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:55:59 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:55:59 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:55:59.826 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:55:59 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:55:59.861 165694 DEBUG oslo.privsep.daemon [-] privsep: reply[dd794a77-095a-41c6-b5a3-5cf07d4eb739]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  9 09:55:59 compute-1 NetworkManager[982]: <info>  [1760003759.8673] manager: (tapca25ffbc-c0): new Veth device (/org/freedesktop/NetworkManager/Devices/27)
Oct  9 09:55:59 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:55:59.869 165637 DEBUG oslo.privsep.daemon [-] privsep: reply[b945c2a8-02a9-4cc0-a685-3b227c6c0c88]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  9 09:55:59 compute-1 systemd-udevd[165705]: Network interface NamePolicy= disabled on kernel command line.
Oct  9 09:55:59 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:55:59.908 165694 DEBUG oslo.privsep.daemon [-] privsep: reply[74fae352-bf2e-4096-bd1e-65fabc639c0e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  9 09:55:59 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:55:59.910 165694 DEBUG oslo.privsep.daemon [-] privsep: reply[4bb0d761-0805-469b-885c-1afb9d2212bd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  9 09:55:59 compute-1 NetworkManager[982]: <info>  [1760003759.9294] device (tapca25ffbc-c0): carrier: link connected
Oct  9 09:55:59 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:55:59.934 165694 DEBUG oslo.privsep.daemon [-] privsep: reply[8357530c-2606-4d49-8011-7173822c86cf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  9 09:55:59 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:55:59.948 165637 DEBUG oslo.privsep.daemon [-] privsep: reply[379439f4-fc3f-4fb6-bd54-094a26831853]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapca25ffbc-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 4], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 4], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:98:52:f5'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 20], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 144349, 'reachable_time': 41774, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 165716, 'error': None, 'target': 'ovnmeta-ca25ffbc-c518-421a-acbc-33327ba74e5f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  9 09:55:59 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:55:59.961 165637 DEBUG oslo.privsep.daemon [-] privsep: reply[dcaac7a2-e2b9-4a73-a619-6142a316c0d8]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe98:52f5'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 144349, 'tstamp': 144349}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 165718, 'error': None, 'target': 'ovnmeta-ca25ffbc-c518-421a-acbc-33327ba74e5f', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  9 09:55:59 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:55:59.975 165637 DEBUG oslo.privsep.daemon [-] privsep: reply[26097517-2e32-458b-929a-8b09bde036b4]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapca25ffbc-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 4], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 4], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:98:52:f5'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 20], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 144349, 'reachable_time': 41774, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 165719, 'error': None, 'target': 'ovnmeta-ca25ffbc-c518-421a-acbc-33327ba74e5f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  9 09:56:00 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:56:00.002 165637 DEBUG oslo.privsep.daemon [-] privsep: reply[a7634eef-8631-4250-93c1-094893461b21]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  9 09:56:00 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:56:00.055 165637 DEBUG oslo.privsep.daemon [-] privsep: reply[7aa4f935-1e2b-48e6-ab6a-acf9a9acc21d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  9 09:56:00 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:56:00.059 71059 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapca25ffbc-c0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  9 09:56:00 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:56:00.059 71059 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  9 09:56:00 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:56:00.060 71059 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapca25ffbc-c0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  9 09:56:00 compute-1 nova_compute[162974]: 2025-10-09 09:56:00.063 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 09:56:00 compute-1 NetworkManager[982]: <info>  [1760003760.0641] manager: (tapca25ffbc-c0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/28)
Oct  9 09:56:00 compute-1 kernel: tapca25ffbc-c0: entered promiscuous mode
Oct  9 09:56:00 compute-1 nova_compute[162974]: 2025-10-09 09:56:00.067 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 09:56:00 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:56:00.070 71059 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapca25ffbc-c0, col_values=(('external_ids', {'iface-id': 'b963e480-a7bb-4169-89b3-7559ce9e7e8a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  9 09:56:00 compute-1 nova_compute[162974]: 2025-10-09 09:56:00.071 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 09:56:00 compute-1 ovn_controller[62080]: 2025-10-09T09:56:00Z|00031|binding|INFO|Releasing lport b963e480-a7bb-4169-89b3-7559ce9e7e8a from this chassis (sb_readonly=0)
Oct  9 09:56:00 compute-1 nova_compute[162974]: 2025-10-09 09:56:00.073 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 09:56:00 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:56:00.075 71059 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/ca25ffbc-c518-421a-acbc-33327ba74e5f.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/ca25ffbc-c518-421a-acbc-33327ba74e5f.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct  9 09:56:00 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:56:00.075 165637 DEBUG oslo.privsep.daemon [-] privsep: reply[b783ce31-0228-4f6b-9ff2-b4415a51539e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  9 09:56:00 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:56:00.077 71059 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct  9 09:56:00 compute-1 ovn_metadata_agent[71054]: global
Oct  9 09:56:00 compute-1 ovn_metadata_agent[71054]:    log         /dev/log local0 debug
Oct  9 09:56:00 compute-1 ovn_metadata_agent[71054]:    log-tag     haproxy-metadata-proxy-ca25ffbc-c518-421a-acbc-33327ba74e5f
Oct  9 09:56:00 compute-1 ovn_metadata_agent[71054]:    user        root
Oct  9 09:56:00 compute-1 ovn_metadata_agent[71054]:    group       root
Oct  9 09:56:00 compute-1 ovn_metadata_agent[71054]:    maxconn     1024
Oct  9 09:56:00 compute-1 ovn_metadata_agent[71054]:    pidfile     /var/lib/neutron/external/pids/ca25ffbc-c518-421a-acbc-33327ba74e5f.pid.haproxy
Oct  9 09:56:00 compute-1 ovn_metadata_agent[71054]:    daemon
Oct  9 09:56:00 compute-1 ovn_metadata_agent[71054]: 
Oct  9 09:56:00 compute-1 ovn_metadata_agent[71054]: defaults
Oct  9 09:56:00 compute-1 ovn_metadata_agent[71054]:    log global
Oct  9 09:56:00 compute-1 ovn_metadata_agent[71054]:    mode http
Oct  9 09:56:00 compute-1 ovn_metadata_agent[71054]:    option httplog
Oct  9 09:56:00 compute-1 ovn_metadata_agent[71054]:    option dontlognull
Oct  9 09:56:00 compute-1 ovn_metadata_agent[71054]:    option http-server-close
Oct  9 09:56:00 compute-1 ovn_metadata_agent[71054]:    option forwardfor
Oct  9 09:56:00 compute-1 ovn_metadata_agent[71054]:    retries                 3
Oct  9 09:56:00 compute-1 ovn_metadata_agent[71054]:    timeout http-request    30s
Oct  9 09:56:00 compute-1 ovn_metadata_agent[71054]:    timeout connect         30s
Oct  9 09:56:00 compute-1 ovn_metadata_agent[71054]:    timeout client          32s
Oct  9 09:56:00 compute-1 ovn_metadata_agent[71054]:    timeout server          32s
Oct  9 09:56:00 compute-1 ovn_metadata_agent[71054]:    timeout http-keep-alive 30s
Oct  9 09:56:00 compute-1 ovn_metadata_agent[71054]: 
Oct  9 09:56:00 compute-1 ovn_metadata_agent[71054]: 
Oct  9 09:56:00 compute-1 ovn_metadata_agent[71054]: listen listener
Oct  9 09:56:00 compute-1 ovn_metadata_agent[71054]:    bind 169.254.169.254:80
Oct  9 09:56:00 compute-1 ovn_metadata_agent[71054]:    server metadata /var/lib/neutron/metadata_proxy
Oct  9 09:56:00 compute-1 ovn_metadata_agent[71054]:    http-request add-header X-OVN-Network-ID ca25ffbc-c518-421a-acbc-33327ba74e5f
Oct  9 09:56:00 compute-1 ovn_metadata_agent[71054]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct  9 09:56:00 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:56:00.079 71059 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-ca25ffbc-c518-421a-acbc-33327ba74e5f', 'env', 'PROCESS_TAG=haproxy-ca25ffbc-c518-421a-acbc-33327ba74e5f', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/ca25ffbc-c518-421a-acbc-33327ba74e5f.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct  9 09:56:00 compute-1 nova_compute[162974]: 2025-10-09 09:56:00.085 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 09:56:00 compute-1 podman[165748]: 2025-10-09 09:56:00.390184935 +0000 UTC m=+0.046313239 container create 926673072c2de5c9c8c6b857d76c9108e40ad320e9be960ee13dc44528ac57e6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-ca25ffbc-c518-421a-acbc-33327ba74e5f, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct  9 09:56:00 compute-1 systemd[1]: Started libpod-conmon-926673072c2de5c9c8c6b857d76c9108e40ad320e9be960ee13dc44528ac57e6.scope.
Oct  9 09:56:00 compute-1 systemd[1]: Started libcrun container.
Oct  9 09:56:00 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0ad978998028dec93cc819757167a683773bfe6787359ca9c4afc87724a5a521/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct  9 09:56:00 compute-1 podman[165748]: 2025-10-09 09:56:00.444577499 +0000 UTC m=+0.100705792 container init 926673072c2de5c9c8c6b857d76c9108e40ad320e9be960ee13dc44528ac57e6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-ca25ffbc-c518-421a-acbc-33327ba74e5f, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Oct  9 09:56:00 compute-1 podman[165748]: 2025-10-09 09:56:00.448962698 +0000 UTC m=+0.105090993 container start 926673072c2de5c9c8c6b857d76c9108e40ad320e9be960ee13dc44528ac57e6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-ca25ffbc-c518-421a-acbc-33327ba74e5f, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct  9 09:56:00 compute-1 podman[165748]: 2025-10-09 09:56:00.372272175 +0000 UTC m=+0.028400490 image pull 26280da617d52ac64ac1fa9a18a315d65ac237c1373028f8064008a821dbfd8d quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7
Oct  9 09:56:00 compute-1 neutron-haproxy-ovnmeta-ca25ffbc-c518-421a-acbc-33327ba74e5f[165761]: [NOTICE]   (165765) : New worker (165767) forked
Oct  9 09:56:00 compute-1 neutron-haproxy-ovnmeta-ca25ffbc-c518-421a-acbc-33327ba74e5f[165761]: [NOTICE]   (165765) : Loading success.
Oct  9 09:56:00 compute-1 nova_compute[162974]: 2025-10-09 09:56:00.641 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 09:56:00 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:56:00 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:56:00 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:56:00.660 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:56:00 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:56:01 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:56:01 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:56:01 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:56:01.828 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:56:02 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:56:02 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:56:02 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:56:02 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:56:02 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  9 09:56:02 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:56:02 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:56:02 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  9 09:56:02 compute-1 nova_compute[162974]: 2025-10-09 09:56:02.660 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 09:56:02 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:56:02 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:56:02 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:56:02.663 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:56:03 compute-1 podman[165921]: 2025-10-09 09:56:03.553123908 +0000 UTC m=+0.062909866 container health_status 36bf385830b85e085dc3ff9729118ef141f0f1a0fdc2513f7947ab6ab795421e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_controller, io.buildah.version=1.41.3)
Oct  9 09:56:03 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:56:03 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:56:03 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:56:03.830 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:56:04 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:56:04 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:56:04 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:56:04.666 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:56:05 compute-1 nova_compute[162974]: 2025-10-09 09:56:05.162 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 09:56:05 compute-1 NetworkManager[982]: <info>  [1760003765.1636] manager: (patch-provnet-ceb5df48-9471-46cc-b494-923d3260d7ae-to-br-int): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/29)
Oct  9 09:56:05 compute-1 NetworkManager[982]: <info>  [1760003765.1642] device (patch-provnet-ceb5df48-9471-46cc-b494-923d3260d7ae-to-br-int)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct  9 09:56:05 compute-1 NetworkManager[982]: <info>  [1760003765.1651] manager: (patch-br-int-to-provnet-ceb5df48-9471-46cc-b494-923d3260d7ae): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/30)
Oct  9 09:56:05 compute-1 NetworkManager[982]: <info>  [1760003765.1654] device (patch-br-int-to-provnet-ceb5df48-9471-46cc-b494-923d3260d7ae)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct  9 09:56:05 compute-1 NetworkManager[982]: <info>  [1760003765.1661] manager: (patch-br-int-to-provnet-ceb5df48-9471-46cc-b494-923d3260d7ae): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/31)
Oct  9 09:56:05 compute-1 NetworkManager[982]: <info>  [1760003765.1677] manager: (patch-provnet-ceb5df48-9471-46cc-b494-923d3260d7ae-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/32)
Oct  9 09:56:05 compute-1 NetworkManager[982]: <info>  [1760003765.1682] device (patch-provnet-ceb5df48-9471-46cc-b494-923d3260d7ae-to-br-int)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'none', managed-type: 'full')
Oct  9 09:56:05 compute-1 NetworkManager[982]: <info>  [1760003765.1686] device (patch-br-int-to-provnet-ceb5df48-9471-46cc-b494-923d3260d7ae)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'none', managed-type: 'full')
Oct  9 09:56:05 compute-1 nova_compute[162974]: 2025-10-09 09:56:05.243 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 09:56:05 compute-1 ovn_controller[62080]: 2025-10-09T09:56:05Z|00032|binding|INFO|Releasing lport b963e480-a7bb-4169-89b3-7559ce9e7e8a from this chassis (sb_readonly=0)
Oct  9 09:56:05 compute-1 nova_compute[162974]: 2025-10-09 09:56:05.250 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 09:56:05 compute-1 nova_compute[162974]: 2025-10-09 09:56:05.644 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 09:56:05 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:56:05 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:56:05 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:56:05 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:56:05.832 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:56:06 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:56:06 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:56:06 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:56:06 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:56:06 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:56:06.669 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:56:07 compute-1 nova_compute[162974]: 2025-10-09 09:56:07.661 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 09:56:07 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:56:07 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:56:07 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:56:07.834 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:56:08 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:56:08 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:56:08 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:56:08.671 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:56:09 compute-1 ovn_controller[62080]: 2025-10-09T09:56:09Z|00004|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:d8:82:c8 10.100.0.29
Oct  9 09:56:09 compute-1 ovn_controller[62080]: 2025-10-09T09:56:09Z|00005|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:d8:82:c8 10.100.0.29
Oct  9 09:56:09 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:56:09 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:56:09 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:56:09.836 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:56:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:56:10.034 71059 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  9 09:56:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:56:10.035 71059 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  9 09:56:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:56:10.035 71059 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  9 09:56:10 compute-1 nova_compute[162974]: 2025-10-09 09:56:10.646 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 09:56:10 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:56:10 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000011s ======
Oct  9 09:56:10 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:56:10.673 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct  9 09:56:10 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:56:11 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:56:11 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:56:11 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:56:11.838 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:56:12 compute-1 nova_compute[162974]: 2025-10-09 09:56:12.662 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 09:56:12 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:56:12 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:56:12 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:56:12.676 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:56:13 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:56:13 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 09:56:13 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:56:13.840 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 09:56:14 compute-1 podman[166002]: 2025-10-09 09:56:14.530249008 +0000 UTC m=+0.038624750 container health_status dd7a5da700b8ba72ceba21d69aecf00384a339e317089fce3f8212a1183599aa (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, container_name=iscsid, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']})
Oct  9 09:56:14 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:56:14 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:56:14 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:56:14.679 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:56:15 compute-1 nova_compute[162974]: 2025-10-09 09:56:15.647 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 09:56:15 compute-1 radosgw[13231]: INFO: RGWReshardLock::lock found lock on reshard.0000000002 to be held by another RGW process; skipping for now
Oct  9 09:56:15 compute-1 radosgw[13231]: INFO: RGWReshardLock::lock found lock on reshard.0000000004 to be held by another RGW process; skipping for now
Oct  9 09:56:15 compute-1 radosgw[13231]: INFO: RGWReshardLock::lock found lock on reshard.0000000006 to be held by another RGW process; skipping for now
Oct  9 09:56:15 compute-1 radosgw[13231]: INFO: RGWReshardLock::lock found lock on reshard.0000000013 to be held by another RGW process; skipping for now
Oct  9 09:56:15 compute-1 radosgw[13231]: INFO: RGWReshardLock::lock found lock on reshard.0000000015 to be held by another RGW process; skipping for now
Oct  9 09:56:15 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:56:15 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:56:15 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:56:15 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:56:15.842 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:56:16 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:56:16 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:56:16 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:56:16.682 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:56:17 compute-1 nova_compute[162974]: 2025-10-09 09:56:17.664 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 09:56:17 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:56:17 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000011s ======
Oct  9 09:56:17 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:56:17.844 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct  9 09:56:18 compute-1 nova_compute[162974]: 2025-10-09 09:56:18.223 2 DEBUG oslo_concurrency.lockutils [None req-2e3932cd-c0d4-4120-aa46-027f3d40f89d 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Acquiring lock "27831bd3-a756-4807-b9da-7be12d549265" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  9 09:56:18 compute-1 nova_compute[162974]: 2025-10-09 09:56:18.224 2 DEBUG oslo_concurrency.lockutils [None req-2e3932cd-c0d4-4120-aa46-027f3d40f89d 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Lock "27831bd3-a756-4807-b9da-7be12d549265" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  9 09:56:18 compute-1 nova_compute[162974]: 2025-10-09 09:56:18.224 2 DEBUG oslo_concurrency.lockutils [None req-2e3932cd-c0d4-4120-aa46-027f3d40f89d 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Acquiring lock "27831bd3-a756-4807-b9da-7be12d549265-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  9 09:56:18 compute-1 nova_compute[162974]: 2025-10-09 09:56:18.224 2 DEBUG oslo_concurrency.lockutils [None req-2e3932cd-c0d4-4120-aa46-027f3d40f89d 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Lock "27831bd3-a756-4807-b9da-7be12d549265-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  9 09:56:18 compute-1 nova_compute[162974]: 2025-10-09 09:56:18.224 2 DEBUG oslo_concurrency.lockutils [None req-2e3932cd-c0d4-4120-aa46-027f3d40f89d 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Lock "27831bd3-a756-4807-b9da-7be12d549265-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  9 09:56:18 compute-1 nova_compute[162974]: 2025-10-09 09:56:18.225 2 INFO nova.compute.manager [None req-2e3932cd-c0d4-4120-aa46-027f3d40f89d 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: 27831bd3-a756-4807-b9da-7be12d549265] Terminating instance#033[00m
Oct  9 09:56:18 compute-1 nova_compute[162974]: 2025-10-09 09:56:18.231 2 DEBUG nova.compute.manager [None req-2e3932cd-c0d4-4120-aa46-027f3d40f89d 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: 27831bd3-a756-4807-b9da-7be12d549265] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct  9 09:56:18 compute-1 kernel: tap89605073-2c (unregistering): left promiscuous mode
Oct  9 09:56:18 compute-1 NetworkManager[982]: <info>  [1760003778.2726] device (tap89605073-2c): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  9 09:56:18 compute-1 ovn_controller[62080]: 2025-10-09T09:56:18Z|00033|binding|INFO|Releasing lport 89605073-2c16-4e83-a34b-96c0ad203677 from this chassis (sb_readonly=0)
Oct  9 09:56:18 compute-1 ovn_controller[62080]: 2025-10-09T09:56:18Z|00034|binding|INFO|Setting lport 89605073-2c16-4e83-a34b-96c0ad203677 down in Southbound
Oct  9 09:56:18 compute-1 nova_compute[162974]: 2025-10-09 09:56:18.279 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 09:56:18 compute-1 ovn_controller[62080]: 2025-10-09T09:56:18Z|00035|binding|INFO|Removing iface tap89605073-2c ovn-installed in OVS
Oct  9 09:56:18 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:56:18.287 71059 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d8:82:c8 10.100.0.29'], port_security=['fa:16:3e:d8:82:c8 10.100.0.29'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.29/28', 'neutron:device_id': '27831bd3-a756-4807-b9da-7be12d549265', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ca25ffbc-c518-421a-acbc-33327ba74e5f', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c69d102fb5504f48809f5fc47f1cb831', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'ff7a1970-9c22-4d50-af6e-95dd0d807999', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e2077437-af43-496a-b32b-28fd39fcc898, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcc797b4850>], logical_port=89605073-2c16-4e83-a34b-96c0ad203677) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fcc797b4850>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  9 09:56:18 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:56:18.289 71059 INFO neutron.agent.ovn.metadata.agent [-] Port 89605073-2c16-4e83-a34b-96c0ad203677 in datapath ca25ffbc-c518-421a-acbc-33327ba74e5f unbound from our chassis#033[00m
Oct  9 09:56:18 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:56:18.290 71059 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network ca25ffbc-c518-421a-acbc-33327ba74e5f, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct  9 09:56:18 compute-1 nova_compute[162974]: 2025-10-09 09:56:18.291 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 09:56:18 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:56:18.291 165637 DEBUG oslo.privsep.daemon [-] privsep: reply[30e07514-f643-495b-a2e4-af36b2bef7d0]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  9 09:56:18 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:56:18.292 71059 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-ca25ffbc-c518-421a-acbc-33327ba74e5f namespace which is not needed anymore#033[00m
Oct  9 09:56:18 compute-1 nova_compute[162974]: 2025-10-09 09:56:18.297 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 09:56:18 compute-1 systemd[1]: machine-qemu\x2d1\x2dinstance\x2d00000002.scope: Deactivated successfully.
Oct  9 09:56:18 compute-1 systemd[1]: machine-qemu\x2d1\x2dinstance\x2d00000002.scope: Consumed 12.014s CPU time.
Oct  9 09:56:18 compute-1 systemd-machined[120683]: Machine qemu-1-instance-00000002 terminated.
Oct  9 09:56:18 compute-1 neutron-haproxy-ovnmeta-ca25ffbc-c518-421a-acbc-33327ba74e5f[165761]: [NOTICE]   (165765) : haproxy version is 2.8.14-c23fe91
Oct  9 09:56:18 compute-1 neutron-haproxy-ovnmeta-ca25ffbc-c518-421a-acbc-33327ba74e5f[165761]: [NOTICE]   (165765) : path to executable is /usr/sbin/haproxy
Oct  9 09:56:18 compute-1 neutron-haproxy-ovnmeta-ca25ffbc-c518-421a-acbc-33327ba74e5f[165761]: [ALERT]    (165765) : Current worker (165767) exited with code 143 (Terminated)
Oct  9 09:56:18 compute-1 neutron-haproxy-ovnmeta-ca25ffbc-c518-421a-acbc-33327ba74e5f[165761]: [WARNING]  (165765) : All workers exited. Exiting... (0)
Oct  9 09:56:18 compute-1 systemd[1]: libpod-926673072c2de5c9c8c6b857d76c9108e40ad320e9be960ee13dc44528ac57e6.scope: Deactivated successfully.
Oct  9 09:56:18 compute-1 conmon[165761]: conmon 926673072c2de5c9c8c6 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-926673072c2de5c9c8c6b857d76c9108e40ad320e9be960ee13dc44528ac57e6.scope/container/memory.events
Oct  9 09:56:18 compute-1 podman[166043]: 2025-10-09 09:56:18.395290384 +0000 UTC m=+0.035716796 container died 926673072c2de5c9c8c6b857d76c9108e40ad320e9be960ee13dc44528ac57e6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-ca25ffbc-c518-421a-acbc-33327ba74e5f, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  9 09:56:18 compute-1 systemd[1]: var-lib-containers-storage-overlay-0ad978998028dec93cc819757167a683773bfe6787359ca9c4afc87724a5a521-merged.mount: Deactivated successfully.
Oct  9 09:56:18 compute-1 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-926673072c2de5c9c8c6b857d76c9108e40ad320e9be960ee13dc44528ac57e6-userdata-shm.mount: Deactivated successfully.
Oct  9 09:56:18 compute-1 podman[166043]: 2025-10-09 09:56:18.419191296 +0000 UTC m=+0.059617708 container cleanup 926673072c2de5c9c8c6b857d76c9108e40ad320e9be960ee13dc44528ac57e6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-ca25ffbc-c518-421a-acbc-33327ba74e5f, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true)
Oct  9 09:56:18 compute-1 systemd[1]: libpod-conmon-926673072c2de5c9c8c6b857d76c9108e40ad320e9be960ee13dc44528ac57e6.scope: Deactivated successfully.
Oct  9 09:56:18 compute-1 nova_compute[162974]: 2025-10-09 09:56:18.458 2 INFO nova.virt.libvirt.driver [-] [instance: 27831bd3-a756-4807-b9da-7be12d549265] Instance destroyed successfully.#033[00m
Oct  9 09:56:18 compute-1 nova_compute[162974]: 2025-10-09 09:56:18.459 2 DEBUG nova.objects.instance [None req-2e3932cd-c0d4-4120-aa46-027f3d40f89d 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Lazy-loading 'resources' on Instance uuid 27831bd3-a756-4807-b9da-7be12d549265 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  9 09:56:18 compute-1 nova_compute[162974]: 2025-10-09 09:56:18.469 2 DEBUG nova.virt.libvirt.vif [None req-2e3932cd-c0d4-4120-aa46-027f3d40f89d 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-09T09:55:48Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-452258331',display_name='tempest-TestNetworkBasicOps-server-452258331',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-452258331',id=2,image_ref='9546778e-959c-466e-9bef-81ace5bd1cc5',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBODLXk0rzffmHcNbUDYGLfUDc9LvP6gD0Cl2kTpN0VYCCLdQjTmH7i6AAWYqub8jT4Jlgu+DRbDcF0CjszX7mILwKGtZArFBrJ9e1Ud75exDORK7fEHNnUEihiwx6WpTPg==',key_name='tempest-TestNetworkBasicOps-68447822',keypairs=<?>,launch_index=0,launched_at=2025-10-09T09:55:57Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='c69d102fb5504f48809f5fc47f1cb831',ramdisk_id='',reservation_id='r-g89vp4u8',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='9546778e-959c-466e-9bef-81ace5bd1cc5',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-74406332',owner_user_name='tempest-TestNetworkBasicOps-74406332-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-09T09:55:57Z,user_data=None,user_id='2351e05157514d1995a1ea4151d12fee',uuid=27831bd3-a756-4807-b9da-7be12d549265,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "89605073-2c16-4e83-a34b-96c0ad203677", "address": "fa:16:3e:d8:82:c8", "network": {"id": "ca25ffbc-c518-421a-acbc-33327ba74e5f", "bridge": "br-int", "label": "tempest-network-smoke--826907634", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": "10.100.0.17", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.29", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c69d102fb5504f48809f5fc47f1cb831", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap89605073-2c", "ovs_interfaceid": "89605073-2c16-4e83-a34b-96c0ad203677", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  9 09:56:18 compute-1 nova_compute[162974]: 2025-10-09 09:56:18.470 2 DEBUG nova.network.os_vif_util [None req-2e3932cd-c0d4-4120-aa46-027f3d40f89d 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Converting VIF {"id": "89605073-2c16-4e83-a34b-96c0ad203677", "address": "fa:16:3e:d8:82:c8", "network": {"id": "ca25ffbc-c518-421a-acbc-33327ba74e5f", "bridge": "br-int", "label": "tempest-network-smoke--826907634", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": "10.100.0.17", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.29", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c69d102fb5504f48809f5fc47f1cb831", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap89605073-2c", "ovs_interfaceid": "89605073-2c16-4e83-a34b-96c0ad203677", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  9 09:56:18 compute-1 nova_compute[162974]: 2025-10-09 09:56:18.471 2 DEBUG nova.network.os_vif_util [None req-2e3932cd-c0d4-4120-aa46-027f3d40f89d 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d8:82:c8,bridge_name='br-int',has_traffic_filtering=True,id=89605073-2c16-4e83-a34b-96c0ad203677,network=Network(ca25ffbc-c518-421a-acbc-33327ba74e5f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap89605073-2c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  9 09:56:18 compute-1 nova_compute[162974]: 2025-10-09 09:56:18.471 2 DEBUG os_vif [None req-2e3932cd-c0d4-4120-aa46-027f3d40f89d 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:d8:82:c8,bridge_name='br-int',has_traffic_filtering=True,id=89605073-2c16-4e83-a34b-96c0ad203677,network=Network(ca25ffbc-c518-421a-acbc-33327ba74e5f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap89605073-2c') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  9 09:56:18 compute-1 nova_compute[162974]: 2025-10-09 09:56:18.473 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 09:56:18 compute-1 nova_compute[162974]: 2025-10-09 09:56:18.473 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap89605073-2c, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  9 09:56:18 compute-1 podman[166075]: 2025-10-09 09:56:18.474194191 +0000 UTC m=+0.039165117 container remove 926673072c2de5c9c8c6b857d76c9108e40ad320e9be960ee13dc44528ac57e6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-ca25ffbc-c518-421a-acbc-33327ba74e5f, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.license=GPLv2)
Oct  9 09:56:18 compute-1 nova_compute[162974]: 2025-10-09 09:56:18.475 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 09:56:18 compute-1 nova_compute[162974]: 2025-10-09 09:56:18.477 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  9 09:56:18 compute-1 nova_compute[162974]: 2025-10-09 09:56:18.479 2 INFO os_vif [None req-2e3932cd-c0d4-4120-aa46-027f3d40f89d 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:d8:82:c8,bridge_name='br-int',has_traffic_filtering=True,id=89605073-2c16-4e83-a34b-96c0ad203677,network=Network(ca25ffbc-c518-421a-acbc-33327ba74e5f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap89605073-2c')#033[00m
Oct  9 09:56:18 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:56:18.482 165637 DEBUG oslo.privsep.daemon [-] privsep: reply[9af83b47-069f-4db0-b60d-518806193a4f]: (4, ('Thu Oct  9 09:56:18 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-ca25ffbc-c518-421a-acbc-33327ba74e5f (926673072c2de5c9c8c6b857d76c9108e40ad320e9be960ee13dc44528ac57e6)\n926673072c2de5c9c8c6b857d76c9108e40ad320e9be960ee13dc44528ac57e6\nThu Oct  9 09:56:18 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-ca25ffbc-c518-421a-acbc-33327ba74e5f (926673072c2de5c9c8c6b857d76c9108e40ad320e9be960ee13dc44528ac57e6)\n926673072c2de5c9c8c6b857d76c9108e40ad320e9be960ee13dc44528ac57e6\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  9 09:56:18 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:56:18.483 165637 DEBUG oslo.privsep.daemon [-] privsep: reply[efb526eb-7744-4a1b-955b-52d12bdfc968]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  9 09:56:18 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:56:18.484 71059 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapca25ffbc-c0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  9 09:56:18 compute-1 kernel: tapca25ffbc-c0: left promiscuous mode
Oct  9 09:56:18 compute-1 nova_compute[162974]: 2025-10-09 09:56:18.499 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 09:56:18 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:56:18.503 165637 DEBUG oslo.privsep.daemon [-] privsep: reply[cdf34723-dbe9-40b7-900a-03cb55a78d3f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  9 09:56:18 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:56:18.518 165637 DEBUG oslo.privsep.daemon [-] privsep: reply[5e00e52f-d53f-4502-8729-66a69e3f8699]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  9 09:56:18 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:56:18.520 165637 DEBUG oslo.privsep.daemon [-] privsep: reply[c6e0a431-0d37-4d91-b7ad-a93e3aae09ba]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  9 09:56:18 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:56:18.532 165637 DEBUG oslo.privsep.daemon [-] privsep: reply[be64e426-651d-4c68-9002-cebd5369989c]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 144342, 'reachable_time': 20258, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 166117, 'error': None, 'target': 'ovnmeta-ca25ffbc-c518-421a-acbc-33327ba74e5f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  9 09:56:18 compute-1 systemd[1]: run-netns-ovnmeta\x2dca25ffbc\x2dc518\x2d421a\x2dacbc\x2d33327ba74e5f.mount: Deactivated successfully.
Oct  9 09:56:18 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:56:18.541 71273 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-ca25ffbc-c518-421a-acbc-33327ba74e5f deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct  9 09:56:18 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:56:18.541 71273 DEBUG oslo.privsep.daemon [-] privsep: reply[6c573294-4fc3-4605-914d-4ae534e7eb90]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  9 09:56:18 compute-1 nova_compute[162974]: 2025-10-09 09:56:18.600 2 DEBUG nova.compute.manager [req-8926c1d5-5a45-4def-9a8e-ca47171677d0 req-56f1e27e-7b29-4834-b971-a4ba2b30adf3 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: 27831bd3-a756-4807-b9da-7be12d549265] Received event network-vif-unplugged-89605073-2c16-4e83-a34b-96c0ad203677 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  9 09:56:18 compute-1 nova_compute[162974]: 2025-10-09 09:56:18.600 2 DEBUG oslo_concurrency.lockutils [req-8926c1d5-5a45-4def-9a8e-ca47171677d0 req-56f1e27e-7b29-4834-b971-a4ba2b30adf3 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] Acquiring lock "27831bd3-a756-4807-b9da-7be12d549265-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  9 09:56:18 compute-1 nova_compute[162974]: 2025-10-09 09:56:18.600 2 DEBUG oslo_concurrency.lockutils [req-8926c1d5-5a45-4def-9a8e-ca47171677d0 req-56f1e27e-7b29-4834-b971-a4ba2b30adf3 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] Lock "27831bd3-a756-4807-b9da-7be12d549265-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  9 09:56:18 compute-1 nova_compute[162974]: 2025-10-09 09:56:18.601 2 DEBUG oslo_concurrency.lockutils [req-8926c1d5-5a45-4def-9a8e-ca47171677d0 req-56f1e27e-7b29-4834-b971-a4ba2b30adf3 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] Lock "27831bd3-a756-4807-b9da-7be12d549265-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  9 09:56:18 compute-1 nova_compute[162974]: 2025-10-09 09:56:18.601 2 DEBUG nova.compute.manager [req-8926c1d5-5a45-4def-9a8e-ca47171677d0 req-56f1e27e-7b29-4834-b971-a4ba2b30adf3 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: 27831bd3-a756-4807-b9da-7be12d549265] No waiting events found dispatching network-vif-unplugged-89605073-2c16-4e83-a34b-96c0ad203677 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  9 09:56:18 compute-1 nova_compute[162974]: 2025-10-09 09:56:18.601 2 DEBUG nova.compute.manager [req-8926c1d5-5a45-4def-9a8e-ca47171677d0 req-56f1e27e-7b29-4834-b971-a4ba2b30adf3 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: 27831bd3-a756-4807-b9da-7be12d549265] Received event network-vif-unplugged-89605073-2c16-4e83-a34b-96c0ad203677 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct  9 09:56:18 compute-1 nova_compute[162974]: 2025-10-09 09:56:18.660 2 INFO nova.virt.libvirt.driver [None req-2e3932cd-c0d4-4120-aa46-027f3d40f89d 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: 27831bd3-a756-4807-b9da-7be12d549265] Deleting instance files /var/lib/nova/instances/27831bd3-a756-4807-b9da-7be12d549265_del#033[00m
Oct  9 09:56:18 compute-1 nova_compute[162974]: 2025-10-09 09:56:18.660 2 INFO nova.virt.libvirt.driver [None req-2e3932cd-c0d4-4120-aa46-027f3d40f89d 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: 27831bd3-a756-4807-b9da-7be12d549265] Deletion of /var/lib/nova/instances/27831bd3-a756-4807-b9da-7be12d549265_del complete#033[00m
Oct  9 09:56:18 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:56:18 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 09:56:18 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:56:18.684 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 09:56:18 compute-1 nova_compute[162974]: 2025-10-09 09:56:18.710 2 DEBUG nova.virt.libvirt.host [None req-2e3932cd-c0d4-4120-aa46-027f3d40f89d 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Checking UEFI support for host arch (x86_64) supports_uefi /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1754#033[00m
Oct  9 09:56:18 compute-1 nova_compute[162974]: 2025-10-09 09:56:18.711 2 INFO nova.virt.libvirt.host [None req-2e3932cd-c0d4-4120-aa46-027f3d40f89d 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] UEFI support detected#033[00m
Oct  9 09:56:18 compute-1 nova_compute[162974]: 2025-10-09 09:56:18.712 2 INFO nova.compute.manager [None req-2e3932cd-c0d4-4120-aa46-027f3d40f89d 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: 27831bd3-a756-4807-b9da-7be12d549265] Took 0.48 seconds to destroy the instance on the hypervisor.#033[00m
Oct  9 09:56:18 compute-1 nova_compute[162974]: 2025-10-09 09:56:18.713 2 DEBUG oslo.service.loopingcall [None req-2e3932cd-c0d4-4120-aa46-027f3d40f89d 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct  9 09:56:18 compute-1 nova_compute[162974]: 2025-10-09 09:56:18.713 2 DEBUG nova.compute.manager [-] [instance: 27831bd3-a756-4807-b9da-7be12d549265] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct  9 09:56:18 compute-1 nova_compute[162974]: 2025-10-09 09:56:18.713 2 DEBUG nova.network.neutron [-] [instance: 27831bd3-a756-4807-b9da-7be12d549265] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct  9 09:56:19 compute-1 nova_compute[162974]: 2025-10-09 09:56:19.109 2 DEBUG nova.network.neutron [-] [instance: 27831bd3-a756-4807-b9da-7be12d549265] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  9 09:56:19 compute-1 nova_compute[162974]: 2025-10-09 09:56:19.220 2 INFO nova.compute.manager [-] [instance: 27831bd3-a756-4807-b9da-7be12d549265] Took 0.51 seconds to deallocate network for instance.#033[00m
Oct  9 09:56:19 compute-1 nova_compute[162974]: 2025-10-09 09:56:19.253 2 DEBUG oslo_concurrency.lockutils [None req-2e3932cd-c0d4-4120-aa46-027f3d40f89d 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  9 09:56:19 compute-1 nova_compute[162974]: 2025-10-09 09:56:19.253 2 DEBUG oslo_concurrency.lockutils [None req-2e3932cd-c0d4-4120-aa46-027f3d40f89d 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  9 09:56:19 compute-1 nova_compute[162974]: 2025-10-09 09:56:19.288 2 DEBUG oslo_concurrency.processutils [None req-2e3932cd-c0d4-4120-aa46-027f3d40f89d 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  9 09:56:19 compute-1 ceph-mon[9795]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct  9 09:56:19 compute-1 ceph-mon[9795]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/12413591' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  9 09:56:19 compute-1 nova_compute[162974]: 2025-10-09 09:56:19.629 2 DEBUG oslo_concurrency.processutils [None req-2e3932cd-c0d4-4120-aa46-027f3d40f89d 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.340s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  9 09:56:19 compute-1 nova_compute[162974]: 2025-10-09 09:56:19.634 2 DEBUG nova.compute.provider_tree [None req-2e3932cd-c0d4-4120-aa46-027f3d40f89d 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Updating inventory in ProviderTree for provider 79aa81b0-5a5d-4643-a355-ec5461cb321a with inventory: {'MEMORY_MB': {'total': 7680, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 4, 'min_unit': 1, 'max_unit': 4, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 59, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 1}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Oct  9 09:56:19 compute-1 nova_compute[162974]: 2025-10-09 09:56:19.670 2 DEBUG nova.scheduler.client.report [None req-2e3932cd-c0d4-4120-aa46-027f3d40f89d 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Updated inventory for provider 79aa81b0-5a5d-4643-a355-ec5461cb321a with generation 3 in Placement from set_inventory_for_provider using data: {'MEMORY_MB': {'total': 7680, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 4, 'min_unit': 1, 'max_unit': 4, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 59, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 1}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:957#033[00m
Oct  9 09:56:19 compute-1 nova_compute[162974]: 2025-10-09 09:56:19.670 2 DEBUG nova.compute.provider_tree [None req-2e3932cd-c0d4-4120-aa46-027f3d40f89d 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Updating resource provider 79aa81b0-5a5d-4643-a355-ec5461cb321a generation from 3 to 4 during operation: update_inventory _update_generation /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:164#033[00m
Oct  9 09:56:19 compute-1 nova_compute[162974]: 2025-10-09 09:56:19.671 2 DEBUG nova.compute.provider_tree [None req-2e3932cd-c0d4-4120-aa46-027f3d40f89d 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Updating inventory in ProviderTree for provider 79aa81b0-5a5d-4643-a355-ec5461cb321a with inventory: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 4, 'reserved': 0, 'min_unit': 1, 'max_unit': 4, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Oct  9 09:56:19 compute-1 nova_compute[162974]: 2025-10-09 09:56:19.683 2 DEBUG oslo_concurrency.lockutils [None req-2e3932cd-c0d4-4120-aa46-027f3d40f89d 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.430s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  9 09:56:19 compute-1 nova_compute[162974]: 2025-10-09 09:56:19.702 2 INFO nova.scheduler.client.report [None req-2e3932cd-c0d4-4120-aa46-027f3d40f89d 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Deleted allocations for instance 27831bd3-a756-4807-b9da-7be12d549265#033[00m
Oct  9 09:56:19 compute-1 nova_compute[162974]: 2025-10-09 09:56:19.747 2 DEBUG oslo_concurrency.lockutils [None req-2e3932cd-c0d4-4120-aa46-027f3d40f89d 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Lock "27831bd3-a756-4807-b9da-7be12d549265" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 1.523s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  9 09:56:19 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:56:19 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:56:19 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:56:19.847 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:56:20 compute-1 nova_compute[162974]: 2025-10-09 09:56:20.673 2 DEBUG nova.compute.manager [req-638a2aec-aadc-4c81-a020-e97d25b39d39 req-eb81dec0-3174-4e7d-a447-c8ed3e81885f b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: 27831bd3-a756-4807-b9da-7be12d549265] Received event network-vif-plugged-89605073-2c16-4e83-a34b-96c0ad203677 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  9 09:56:20 compute-1 nova_compute[162974]: 2025-10-09 09:56:20.674 2 DEBUG oslo_concurrency.lockutils [req-638a2aec-aadc-4c81-a020-e97d25b39d39 req-eb81dec0-3174-4e7d-a447-c8ed3e81885f b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] Acquiring lock "27831bd3-a756-4807-b9da-7be12d549265-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  9 09:56:20 compute-1 nova_compute[162974]: 2025-10-09 09:56:20.674 2 DEBUG oslo_concurrency.lockutils [req-638a2aec-aadc-4c81-a020-e97d25b39d39 req-eb81dec0-3174-4e7d-a447-c8ed3e81885f b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] Lock "27831bd3-a756-4807-b9da-7be12d549265-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  9 09:56:20 compute-1 nova_compute[162974]: 2025-10-09 09:56:20.674 2 DEBUG oslo_concurrency.lockutils [req-638a2aec-aadc-4c81-a020-e97d25b39d39 req-eb81dec0-3174-4e7d-a447-c8ed3e81885f b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] Lock "27831bd3-a756-4807-b9da-7be12d549265-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  9 09:56:20 compute-1 nova_compute[162974]: 2025-10-09 09:56:20.674 2 DEBUG nova.compute.manager [req-638a2aec-aadc-4c81-a020-e97d25b39d39 req-eb81dec0-3174-4e7d-a447-c8ed3e81885f b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: 27831bd3-a756-4807-b9da-7be12d549265] No waiting events found dispatching network-vif-plugged-89605073-2c16-4e83-a34b-96c0ad203677 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  9 09:56:20 compute-1 nova_compute[162974]: 2025-10-09 09:56:20.675 2 WARNING nova.compute.manager [req-638a2aec-aadc-4c81-a020-e97d25b39d39 req-eb81dec0-3174-4e7d-a447-c8ed3e81885f b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: 27831bd3-a756-4807-b9da-7be12d549265] Received unexpected event network-vif-plugged-89605073-2c16-4e83-a34b-96c0ad203677 for instance with vm_state deleted and task_state None.#033[00m
Oct  9 09:56:20 compute-1 nova_compute[162974]: 2025-10-09 09:56:20.675 2 DEBUG nova.compute.manager [req-638a2aec-aadc-4c81-a020-e97d25b39d39 req-eb81dec0-3174-4e7d-a447-c8ed3e81885f b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: 27831bd3-a756-4807-b9da-7be12d549265] Received event network-vif-deleted-89605073-2c16-4e83-a34b-96c0ad203677 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  9 09:56:20 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:56:20 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:56:20 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:56:20.687 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:56:20 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:56:21 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:56:21 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:56:21 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:56:21.849 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:56:22 compute-1 podman[166145]: 2025-10-09 09:56:22.533216367 +0000 UTC m=+0.043376791 container health_status a9976f6a2312783f72012e1ec9b07f14297aa62297711f47c0d257996be3427a (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2)
Oct  9 09:56:22 compute-1 podman[166144]: 2025-10-09 09:56:22.554177916 +0000 UTC m=+0.065428757 container health_status 5e1150c93314d5b38815fc8130e03b03cc91771cd442ce724d63cd33a7373f75 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Oct  9 09:56:22 compute-1 nova_compute[162974]: 2025-10-09 09:56:22.665 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 09:56:22 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:56:22 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000011s ======
Oct  9 09:56:22 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:56:22.688 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct  9 09:56:23 compute-1 nova_compute[162974]: 2025-10-09 09:56:23.476 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 09:56:23 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:56:23 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:56:23 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:56:23.851 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:56:24 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:56:24 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:56:24 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:56:24.691 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:56:25 compute-1 nova_compute[162974]: 2025-10-09 09:56:25.151 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 09:56:25 compute-1 nova_compute[162974]: 2025-10-09 09:56:25.255 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 09:56:25 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:56:25 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:56:25 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:56:25 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:56:25.853 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:56:26 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:56:26 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:56:26 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:56:26.694 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:56:27 compute-1 nova_compute[162974]: 2025-10-09 09:56:27.667 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 09:56:27 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:56:27 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:56:27 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:56:27.856 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:56:28 compute-1 nova_compute[162974]: 2025-10-09 09:56:28.478 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 09:56:28 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:56:28 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:56:28 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:56:28.697 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:56:29 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:56:29 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:56:29 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:56:29.859 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:56:30 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:56:30 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:56:30 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:56:30.700 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:56:30 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:56:31 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:56:31 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:56:31 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:56:31.861 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:56:32 compute-1 nova_compute[162974]: 2025-10-09 09:56:32.669 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 09:56:32 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:56:32 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:56:32 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:56:32.703 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:56:33 compute-1 nova_compute[162974]: 2025-10-09 09:56:33.457 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760003778.4563627, 27831bd3-a756-4807-b9da-7be12d549265 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  9 09:56:33 compute-1 nova_compute[162974]: 2025-10-09 09:56:33.457 2 INFO nova.compute.manager [-] [instance: 27831bd3-a756-4807-b9da-7be12d549265] VM Stopped (Lifecycle Event)#033[00m
Oct  9 09:56:33 compute-1 nova_compute[162974]: 2025-10-09 09:56:33.472 2 DEBUG nova.compute.manager [None req-b441c2c0-af09-4be8-94a1-a785b8f3abda - - - - - -] [instance: 27831bd3-a756-4807-b9da-7be12d549265] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  9 09:56:33 compute-1 nova_compute[162974]: 2025-10-09 09:56:33.478 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 09:56:33 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:56:33 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 09:56:33 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:56:33.862 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 09:56:34 compute-1 podman[166212]: 2025-10-09 09:56:34.568301446 +0000 UTC m=+0.079803405 container health_status 36bf385830b85e085dc3ff9729118ef141f0f1a0fdc2513f7947ab6ab795421e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, container_name=ovn_controller, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller)
Oct  9 09:56:34 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:56:34 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:56:34 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:56:34.706 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:56:35 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:56:35 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:56:35 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:56:35 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:56:35.865 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:56:36 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:56:36 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:56:36 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:56:36.708 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:56:37 compute-1 nova_compute[162974]: 2025-10-09 09:56:37.670 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 09:56:37 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:56:37 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:56:37 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:56:37.867 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:56:38 compute-1 nova_compute[162974]: 2025-10-09 09:56:38.480 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 09:56:38 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:56:38 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 09:56:38 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:56:38.710 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 09:56:39 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:56:39 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:56:39 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:56:39.869 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:56:40 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:56:40 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:56:40 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:56:40.713 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:56:40 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:56:41 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:56:41 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:56:41 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:56:41.871 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:56:42 compute-1 nova_compute[162974]: 2025-10-09 09:56:42.136 2 DEBUG oslo_concurrency.lockutils [None req-2660e3a3-377b-48ae-9b5c-74b1b3359a71 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Acquiring lock "e9c8bd36-8fc8-4412-a6ea-b5b3e5b79f01" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  9 09:56:42 compute-1 nova_compute[162974]: 2025-10-09 09:56:42.137 2 DEBUG oslo_concurrency.lockutils [None req-2660e3a3-377b-48ae-9b5c-74b1b3359a71 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Lock "e9c8bd36-8fc8-4412-a6ea-b5b3e5b79f01" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  9 09:56:42 compute-1 nova_compute[162974]: 2025-10-09 09:56:42.147 2 DEBUG nova.compute.manager [None req-2660e3a3-377b-48ae-9b5c-74b1b3359a71 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: e9c8bd36-8fc8-4412-a6ea-b5b3e5b79f01] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct  9 09:56:42 compute-1 nova_compute[162974]: 2025-10-09 09:56:42.200 2 DEBUG oslo_concurrency.lockutils [None req-2660e3a3-377b-48ae-9b5c-74b1b3359a71 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  9 09:56:42 compute-1 nova_compute[162974]: 2025-10-09 09:56:42.200 2 DEBUG oslo_concurrency.lockutils [None req-2660e3a3-377b-48ae-9b5c-74b1b3359a71 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  9 09:56:42 compute-1 nova_compute[162974]: 2025-10-09 09:56:42.204 2 DEBUG nova.virt.hardware [None req-2660e3a3-377b-48ae-9b5c-74b1b3359a71 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct  9 09:56:42 compute-1 nova_compute[162974]: 2025-10-09 09:56:42.204 2 INFO nova.compute.claims [None req-2660e3a3-377b-48ae-9b5c-74b1b3359a71 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: e9c8bd36-8fc8-4412-a6ea-b5b3e5b79f01] Claim successful on node compute-1.ctlplane.example.com#033[00m
Oct  9 09:56:42 compute-1 nova_compute[162974]: 2025-10-09 09:56:42.265 2 DEBUG oslo_concurrency.processutils [None req-2660e3a3-377b-48ae-9b5c-74b1b3359a71 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  9 09:56:42 compute-1 ceph-mon[9795]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct  9 09:56:42 compute-1 ceph-mon[9795]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1320803762' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  9 09:56:42 compute-1 nova_compute[162974]: 2025-10-09 09:56:42.606 2 DEBUG oslo_concurrency.processutils [None req-2660e3a3-377b-48ae-9b5c-74b1b3359a71 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.341s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  9 09:56:42 compute-1 nova_compute[162974]: 2025-10-09 09:56:42.610 2 DEBUG nova.compute.provider_tree [None req-2660e3a3-377b-48ae-9b5c-74b1b3359a71 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Inventory has not changed in ProviderTree for provider: 79aa81b0-5a5d-4643-a355-ec5461cb321a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  9 09:56:42 compute-1 nova_compute[162974]: 2025-10-09 09:56:42.622 2 DEBUG nova.scheduler.client.report [None req-2660e3a3-377b-48ae-9b5c-74b1b3359a71 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Inventory has not changed for provider 79aa81b0-5a5d-4643-a355-ec5461cb321a based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 4, 'reserved': 0, 'min_unit': 1, 'max_unit': 4, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  9 09:56:42 compute-1 nova_compute[162974]: 2025-10-09 09:56:42.634 2 DEBUG oslo_concurrency.lockutils [None req-2660e3a3-377b-48ae-9b5c-74b1b3359a71 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.434s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  9 09:56:42 compute-1 nova_compute[162974]: 2025-10-09 09:56:42.635 2 DEBUG nova.compute.manager [None req-2660e3a3-377b-48ae-9b5c-74b1b3359a71 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: e9c8bd36-8fc8-4412-a6ea-b5b3e5b79f01] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct  9 09:56:42 compute-1 nova_compute[162974]: 2025-10-09 09:56:42.664 2 DEBUG nova.compute.manager [None req-2660e3a3-377b-48ae-9b5c-74b1b3359a71 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: e9c8bd36-8fc8-4412-a6ea-b5b3e5b79f01] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct  9 09:56:42 compute-1 nova_compute[162974]: 2025-10-09 09:56:42.665 2 DEBUG nova.network.neutron [None req-2660e3a3-377b-48ae-9b5c-74b1b3359a71 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: e9c8bd36-8fc8-4412-a6ea-b5b3e5b79f01] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct  9 09:56:42 compute-1 nova_compute[162974]: 2025-10-09 09:56:42.672 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 09:56:42 compute-1 nova_compute[162974]: 2025-10-09 09:56:42.678 2 INFO nova.virt.libvirt.driver [None req-2660e3a3-377b-48ae-9b5c-74b1b3359a71 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: e9c8bd36-8fc8-4412-a6ea-b5b3e5b79f01] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct  9 09:56:42 compute-1 nova_compute[162974]: 2025-10-09 09:56:42.690 2 DEBUG nova.compute.manager [None req-2660e3a3-377b-48ae-9b5c-74b1b3359a71 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: e9c8bd36-8fc8-4412-a6ea-b5b3e5b79f01] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct  9 09:56:42 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:56:42 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:56:42 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:56:42.716 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:56:42 compute-1 nova_compute[162974]: 2025-10-09 09:56:42.747 2 DEBUG nova.compute.manager [None req-2660e3a3-377b-48ae-9b5c-74b1b3359a71 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: e9c8bd36-8fc8-4412-a6ea-b5b3e5b79f01] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct  9 09:56:42 compute-1 nova_compute[162974]: 2025-10-09 09:56:42.748 2 DEBUG nova.virt.libvirt.driver [None req-2660e3a3-377b-48ae-9b5c-74b1b3359a71 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: e9c8bd36-8fc8-4412-a6ea-b5b3e5b79f01] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct  9 09:56:42 compute-1 nova_compute[162974]: 2025-10-09 09:56:42.748 2 INFO nova.virt.libvirt.driver [None req-2660e3a3-377b-48ae-9b5c-74b1b3359a71 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: e9c8bd36-8fc8-4412-a6ea-b5b3e5b79f01] Creating image(s)#033[00m
Oct  9 09:56:42 compute-1 nova_compute[162974]: 2025-10-09 09:56:42.767 2 DEBUG nova.storage.rbd_utils [None req-2660e3a3-377b-48ae-9b5c-74b1b3359a71 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] rbd image e9c8bd36-8fc8-4412-a6ea-b5b3e5b79f01_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  9 09:56:42 compute-1 nova_compute[162974]: 2025-10-09 09:56:42.785 2 DEBUG nova.storage.rbd_utils [None req-2660e3a3-377b-48ae-9b5c-74b1b3359a71 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] rbd image e9c8bd36-8fc8-4412-a6ea-b5b3e5b79f01_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  9 09:56:42 compute-1 nova_compute[162974]: 2025-10-09 09:56:42.807 2 DEBUG nova.storage.rbd_utils [None req-2660e3a3-377b-48ae-9b5c-74b1b3359a71 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] rbd image e9c8bd36-8fc8-4412-a6ea-b5b3e5b79f01_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  9 09:56:42 compute-1 nova_compute[162974]: 2025-10-09 09:56:42.809 2 DEBUG oslo_concurrency.processutils [None req-2660e3a3-377b-48ae-9b5c-74b1b3359a71 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/5c8d02c7691a8289e33d8b283b22550ff081dadb --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  9 09:56:42 compute-1 nova_compute[162974]: 2025-10-09 09:56:42.854 2 DEBUG oslo_concurrency.processutils [None req-2660e3a3-377b-48ae-9b5c-74b1b3359a71 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/5c8d02c7691a8289e33d8b283b22550ff081dadb --force-share --output=json" returned: 0 in 0.045s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  9 09:56:42 compute-1 nova_compute[162974]: 2025-10-09 09:56:42.855 2 DEBUG oslo_concurrency.lockutils [None req-2660e3a3-377b-48ae-9b5c-74b1b3359a71 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Acquiring lock "5c8d02c7691a8289e33d8b283b22550ff081dadb" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  9 09:56:42 compute-1 nova_compute[162974]: 2025-10-09 09:56:42.856 2 DEBUG oslo_concurrency.lockutils [None req-2660e3a3-377b-48ae-9b5c-74b1b3359a71 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Lock "5c8d02c7691a8289e33d8b283b22550ff081dadb" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  9 09:56:42 compute-1 nova_compute[162974]: 2025-10-09 09:56:42.856 2 DEBUG oslo_concurrency.lockutils [None req-2660e3a3-377b-48ae-9b5c-74b1b3359a71 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Lock "5c8d02c7691a8289e33d8b283b22550ff081dadb" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  9 09:56:42 compute-1 nova_compute[162974]: 2025-10-09 09:56:42.873 2 DEBUG nova.storage.rbd_utils [None req-2660e3a3-377b-48ae-9b5c-74b1b3359a71 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] rbd image e9c8bd36-8fc8-4412-a6ea-b5b3e5b79f01_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  9 09:56:42 compute-1 nova_compute[162974]: 2025-10-09 09:56:42.875 2 DEBUG oslo_concurrency.processutils [None req-2660e3a3-377b-48ae-9b5c-74b1b3359a71 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/5c8d02c7691a8289e33d8b283b22550ff081dadb e9c8bd36-8fc8-4412-a6ea-b5b3e5b79f01_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  9 09:56:42 compute-1 nova_compute[162974]: 2025-10-09 09:56:42.952 2 DEBUG nova.policy [None req-2660e3a3-377b-48ae-9b5c-74b1b3359a71 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '2351e05157514d1995a1ea4151d12fee', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'c69d102fb5504f48809f5fc47f1cb831', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct  9 09:56:43 compute-1 nova_compute[162974]: 2025-10-09 09:56:43.016 2 DEBUG oslo_concurrency.processutils [None req-2660e3a3-377b-48ae-9b5c-74b1b3359a71 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/5c8d02c7691a8289e33d8b283b22550ff081dadb e9c8bd36-8fc8-4412-a6ea-b5b3e5b79f01_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.141s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  9 09:56:43 compute-1 nova_compute[162974]: 2025-10-09 09:56:43.064 2 DEBUG nova.storage.rbd_utils [None req-2660e3a3-377b-48ae-9b5c-74b1b3359a71 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] resizing rbd image e9c8bd36-8fc8-4412-a6ea-b5b3e5b79f01_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct  9 09:56:43 compute-1 nova_compute[162974]: 2025-10-09 09:56:43.122 2 DEBUG nova.objects.instance [None req-2660e3a3-377b-48ae-9b5c-74b1b3359a71 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Lazy-loading 'migration_context' on Instance uuid e9c8bd36-8fc8-4412-a6ea-b5b3e5b79f01 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  9 09:56:43 compute-1 nova_compute[162974]: 2025-10-09 09:56:43.132 2 DEBUG nova.virt.libvirt.driver [None req-2660e3a3-377b-48ae-9b5c-74b1b3359a71 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: e9c8bd36-8fc8-4412-a6ea-b5b3e5b79f01] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct  9 09:56:43 compute-1 nova_compute[162974]: 2025-10-09 09:56:43.132 2 DEBUG nova.virt.libvirt.driver [None req-2660e3a3-377b-48ae-9b5c-74b1b3359a71 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: e9c8bd36-8fc8-4412-a6ea-b5b3e5b79f01] Ensure instance console log exists: /var/lib/nova/instances/e9c8bd36-8fc8-4412-a6ea-b5b3e5b79f01/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct  9 09:56:43 compute-1 nova_compute[162974]: 2025-10-09 09:56:43.133 2 DEBUG oslo_concurrency.lockutils [None req-2660e3a3-377b-48ae-9b5c-74b1b3359a71 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  9 09:56:43 compute-1 nova_compute[162974]: 2025-10-09 09:56:43.133 2 DEBUG oslo_concurrency.lockutils [None req-2660e3a3-377b-48ae-9b5c-74b1b3359a71 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  9 09:56:43 compute-1 nova_compute[162974]: 2025-10-09 09:56:43.133 2 DEBUG oslo_concurrency.lockutils [None req-2660e3a3-377b-48ae-9b5c-74b1b3359a71 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  9 09:56:43 compute-1 nova_compute[162974]: 2025-10-09 09:56:43.481 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 09:56:43 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:56:43 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:56:43 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:56:43.873 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:56:44 compute-1 nova_compute[162974]: 2025-10-09 09:56:44.138 2 DEBUG nova.network.neutron [None req-2660e3a3-377b-48ae-9b5c-74b1b3359a71 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: e9c8bd36-8fc8-4412-a6ea-b5b3e5b79f01] Successfully created port: 8d2d29b3-6509-4de5-a0a2-f7ef942f9a7f _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct  9 09:56:44 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:56:44 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:56:44 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:56:44.719 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:56:44 compute-1 nova_compute[162974]: 2025-10-09 09:56:44.728 2 DEBUG nova.network.neutron [None req-2660e3a3-377b-48ae-9b5c-74b1b3359a71 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: e9c8bd36-8fc8-4412-a6ea-b5b3e5b79f01] Successfully updated port: 8d2d29b3-6509-4de5-a0a2-f7ef942f9a7f _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct  9 09:56:44 compute-1 nova_compute[162974]: 2025-10-09 09:56:44.740 2 DEBUG oslo_concurrency.lockutils [None req-2660e3a3-377b-48ae-9b5c-74b1b3359a71 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Acquiring lock "refresh_cache-e9c8bd36-8fc8-4412-a6ea-b5b3e5b79f01" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  9 09:56:44 compute-1 nova_compute[162974]: 2025-10-09 09:56:44.741 2 DEBUG oslo_concurrency.lockutils [None req-2660e3a3-377b-48ae-9b5c-74b1b3359a71 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Acquired lock "refresh_cache-e9c8bd36-8fc8-4412-a6ea-b5b3e5b79f01" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  9 09:56:44 compute-1 nova_compute[162974]: 2025-10-09 09:56:44.741 2 DEBUG nova.network.neutron [None req-2660e3a3-377b-48ae-9b5c-74b1b3359a71 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: e9c8bd36-8fc8-4412-a6ea-b5b3e5b79f01] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  9 09:56:44 compute-1 nova_compute[162974]: 2025-10-09 09:56:44.794 2 DEBUG nova.compute.manager [req-f91278ac-f3cb-4c91-bfcc-3e08b837cbf9 req-e8437694-f9bd-44eb-b070-c83785ece4da b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: e9c8bd36-8fc8-4412-a6ea-b5b3e5b79f01] Received event network-changed-8d2d29b3-6509-4de5-a0a2-f7ef942f9a7f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  9 09:56:44 compute-1 nova_compute[162974]: 2025-10-09 09:56:44.794 2 DEBUG nova.compute.manager [req-f91278ac-f3cb-4c91-bfcc-3e08b837cbf9 req-e8437694-f9bd-44eb-b070-c83785ece4da b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: e9c8bd36-8fc8-4412-a6ea-b5b3e5b79f01] Refreshing instance network info cache due to event network-changed-8d2d29b3-6509-4de5-a0a2-f7ef942f9a7f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  9 09:56:44 compute-1 nova_compute[162974]: 2025-10-09 09:56:44.794 2 DEBUG oslo_concurrency.lockutils [req-f91278ac-f3cb-4c91-bfcc-3e08b837cbf9 req-e8437694-f9bd-44eb-b070-c83785ece4da b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] Acquiring lock "refresh_cache-e9c8bd36-8fc8-4412-a6ea-b5b3e5b79f01" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  9 09:56:44 compute-1 nova_compute[162974]: 2025-10-09 09:56:44.855 2 DEBUG nova.network.neutron [None req-2660e3a3-377b-48ae-9b5c-74b1b3359a71 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: e9c8bd36-8fc8-4412-a6ea-b5b3e5b79f01] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct  9 09:56:45 compute-1 nova_compute[162974]: 2025-10-09 09:56:45.346 2 DEBUG nova.network.neutron [None req-2660e3a3-377b-48ae-9b5c-74b1b3359a71 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: e9c8bd36-8fc8-4412-a6ea-b5b3e5b79f01] Updating instance_info_cache with network_info: [{"id": "8d2d29b3-6509-4de5-a0a2-f7ef942f9a7f", "address": "fa:16:3e:4d:30:c8", "network": {"id": "26c660ed-37e9-4f44-b603-3901342edf9b", "bridge": "br-int", "label": "tempest-network-smoke--903239616", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c69d102fb5504f48809f5fc47f1cb831", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8d2d29b3-65", "ovs_interfaceid": "8d2d29b3-6509-4de5-a0a2-f7ef942f9a7f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  9 09:56:45 compute-1 nova_compute[162974]: 2025-10-09 09:56:45.358 2 DEBUG oslo_concurrency.lockutils [None req-2660e3a3-377b-48ae-9b5c-74b1b3359a71 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Releasing lock "refresh_cache-e9c8bd36-8fc8-4412-a6ea-b5b3e5b79f01" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  9 09:56:45 compute-1 nova_compute[162974]: 2025-10-09 09:56:45.358 2 DEBUG nova.compute.manager [None req-2660e3a3-377b-48ae-9b5c-74b1b3359a71 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: e9c8bd36-8fc8-4412-a6ea-b5b3e5b79f01] Instance network_info: |[{"id": "8d2d29b3-6509-4de5-a0a2-f7ef942f9a7f", "address": "fa:16:3e:4d:30:c8", "network": {"id": "26c660ed-37e9-4f44-b603-3901342edf9b", "bridge": "br-int", "label": "tempest-network-smoke--903239616", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c69d102fb5504f48809f5fc47f1cb831", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8d2d29b3-65", "ovs_interfaceid": "8d2d29b3-6509-4de5-a0a2-f7ef942f9a7f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct  9 09:56:45 compute-1 nova_compute[162974]: 2025-10-09 09:56:45.358 2 DEBUG oslo_concurrency.lockutils [req-f91278ac-f3cb-4c91-bfcc-3e08b837cbf9 req-e8437694-f9bd-44eb-b070-c83785ece4da b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] Acquired lock "refresh_cache-e9c8bd36-8fc8-4412-a6ea-b5b3e5b79f01" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  9 09:56:45 compute-1 nova_compute[162974]: 2025-10-09 09:56:45.359 2 DEBUG nova.network.neutron [req-f91278ac-f3cb-4c91-bfcc-3e08b837cbf9 req-e8437694-f9bd-44eb-b070-c83785ece4da b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: e9c8bd36-8fc8-4412-a6ea-b5b3e5b79f01] Refreshing network info cache for port 8d2d29b3-6509-4de5-a0a2-f7ef942f9a7f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  9 09:56:45 compute-1 nova_compute[162974]: 2025-10-09 09:56:45.360 2 DEBUG nova.virt.libvirt.driver [None req-2660e3a3-377b-48ae-9b5c-74b1b3359a71 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: e9c8bd36-8fc8-4412-a6ea-b5b3e5b79f01] Start _get_guest_xml network_info=[{"id": "8d2d29b3-6509-4de5-a0a2-f7ef942f9a7f", "address": "fa:16:3e:4d:30:c8", "network": {"id": "26c660ed-37e9-4f44-b603-3901342edf9b", "bridge": "br-int", "label": "tempest-network-smoke--903239616", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c69d102fb5504f48809f5fc47f1cb831", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8d2d29b3-65", "ovs_interfaceid": "8d2d29b3-6509-4de5-a0a2-f7ef942f9a7f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-09T09:54:31Z,direct_url=<?>,disk_format='qcow2',id=9546778e-959c-466e-9bef-81ace5bd1cc5,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='a53d5690b6a54109990182326650a2b8',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-09T09:54:34Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_options': None, 'size': 0, 'device_type': 'disk', 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'guest_format': None, 'boot_index': 0, 'device_name': '/dev/vda', 'encrypted': False, 'encryption_format': None, 'image_id': '9546778e-959c-466e-9bef-81ace5bd1cc5'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct  9 09:56:45 compute-1 nova_compute[162974]: 2025-10-09 09:56:45.364 2 WARNING nova.virt.libvirt.driver [None req-2660e3a3-377b-48ae-9b5c-74b1b3359a71 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  9 09:56:45 compute-1 nova_compute[162974]: 2025-10-09 09:56:45.367 2 DEBUG nova.virt.libvirt.host [None req-2660e3a3-377b-48ae-9b5c-74b1b3359a71 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct  9 09:56:45 compute-1 nova_compute[162974]: 2025-10-09 09:56:45.367 2 DEBUG nova.virt.libvirt.host [None req-2660e3a3-377b-48ae-9b5c-74b1b3359a71 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct  9 09:56:45 compute-1 nova_compute[162974]: 2025-10-09 09:56:45.372 2 DEBUG nova.virt.libvirt.host [None req-2660e3a3-377b-48ae-9b5c-74b1b3359a71 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct  9 09:56:45 compute-1 nova_compute[162974]: 2025-10-09 09:56:45.372 2 DEBUG nova.virt.libvirt.host [None req-2660e3a3-377b-48ae-9b5c-74b1b3359a71 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct  9 09:56:45 compute-1 nova_compute[162974]: 2025-10-09 09:56:45.372 2 DEBUG nova.virt.libvirt.driver [None req-2660e3a3-377b-48ae-9b5c-74b1b3359a71 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct  9 09:56:45 compute-1 nova_compute[162974]: 2025-10-09 09:56:45.372 2 DEBUG nova.virt.hardware [None req-2660e3a3-377b-48ae-9b5c-74b1b3359a71 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-09T09:54:30Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='6c4b2ce4-c9d2-467c-bac4-dc6a1184a891',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-09T09:54:31Z,direct_url=<?>,disk_format='qcow2',id=9546778e-959c-466e-9bef-81ace5bd1cc5,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='a53d5690b6a54109990182326650a2b8',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-09T09:54:34Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct  9 09:56:45 compute-1 nova_compute[162974]: 2025-10-09 09:56:45.373 2 DEBUG nova.virt.hardware [None req-2660e3a3-377b-48ae-9b5c-74b1b3359a71 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct  9 09:56:45 compute-1 nova_compute[162974]: 2025-10-09 09:56:45.373 2 DEBUG nova.virt.hardware [None req-2660e3a3-377b-48ae-9b5c-74b1b3359a71 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct  9 09:56:45 compute-1 nova_compute[162974]: 2025-10-09 09:56:45.373 2 DEBUG nova.virt.hardware [None req-2660e3a3-377b-48ae-9b5c-74b1b3359a71 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct  9 09:56:45 compute-1 nova_compute[162974]: 2025-10-09 09:56:45.373 2 DEBUG nova.virt.hardware [None req-2660e3a3-377b-48ae-9b5c-74b1b3359a71 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct  9 09:56:45 compute-1 nova_compute[162974]: 2025-10-09 09:56:45.373 2 DEBUG nova.virt.hardware [None req-2660e3a3-377b-48ae-9b5c-74b1b3359a71 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct  9 09:56:45 compute-1 nova_compute[162974]: 2025-10-09 09:56:45.373 2 DEBUG nova.virt.hardware [None req-2660e3a3-377b-48ae-9b5c-74b1b3359a71 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct  9 09:56:45 compute-1 nova_compute[162974]: 2025-10-09 09:56:45.373 2 DEBUG nova.virt.hardware [None req-2660e3a3-377b-48ae-9b5c-74b1b3359a71 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct  9 09:56:45 compute-1 nova_compute[162974]: 2025-10-09 09:56:45.374 2 DEBUG nova.virt.hardware [None req-2660e3a3-377b-48ae-9b5c-74b1b3359a71 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct  9 09:56:45 compute-1 nova_compute[162974]: 2025-10-09 09:56:45.374 2 DEBUG nova.virt.hardware [None req-2660e3a3-377b-48ae-9b5c-74b1b3359a71 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct  9 09:56:45 compute-1 nova_compute[162974]: 2025-10-09 09:56:45.374 2 DEBUG nova.virt.hardware [None req-2660e3a3-377b-48ae-9b5c-74b1b3359a71 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct  9 09:56:45 compute-1 nova_compute[162974]: 2025-10-09 09:56:45.376 2 DEBUG oslo_concurrency.processutils [None req-2660e3a3-377b-48ae-9b5c-74b1b3359a71 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  9 09:56:45 compute-1 podman[166449]: 2025-10-09 09:56:45.530274391 +0000 UTC m=+0.044885776 container health_status dd7a5da700b8ba72ceba21d69aecf00384a339e317089fce3f8212a1183599aa (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, container_name=iscsid, org.label-schema.license=GPLv2, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, io.buildah.version=1.41.3, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct  9 09:56:45 compute-1 ceph-mon[9795]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Oct  9 09:56:45 compute-1 ceph-mon[9795]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1755900932' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  9 09:56:45 compute-1 nova_compute[162974]: 2025-10-09 09:56:45.716 2 DEBUG oslo_concurrency.processutils [None req-2660e3a3-377b-48ae-9b5c-74b1b3359a71 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.341s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  9 09:56:45 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:56:45 compute-1 nova_compute[162974]: 2025-10-09 09:56:45.734 2 DEBUG nova.storage.rbd_utils [None req-2660e3a3-377b-48ae-9b5c-74b1b3359a71 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] rbd image e9c8bd36-8fc8-4412-a6ea-b5b3e5b79f01_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  9 09:56:45 compute-1 nova_compute[162974]: 2025-10-09 09:56:45.737 2 DEBUG oslo_concurrency.processutils [None req-2660e3a3-377b-48ae-9b5c-74b1b3359a71 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  9 09:56:45 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:56:45 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:56:45 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:56:45.874 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:56:46 compute-1 ceph-mon[9795]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Oct  9 09:56:46 compute-1 ceph-mon[9795]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1438079082' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  9 09:56:46 compute-1 nova_compute[162974]: 2025-10-09 09:56:46.089 2 DEBUG oslo_concurrency.processutils [None req-2660e3a3-377b-48ae-9b5c-74b1b3359a71 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.352s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  9 09:56:46 compute-1 nova_compute[162974]: 2025-10-09 09:56:46.091 2 DEBUG nova.virt.libvirt.vif [None req-2660e3a3-377b-48ae-9b5c-74b1b3359a71 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-09T09:56:40Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-61543066',display_name='tempest-TestNetworkBasicOps-server-61543066',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-61543066',id=3,image_ref='9546778e-959c-466e-9bef-81ace5bd1cc5',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBOnE/dh71/I//FMlnppXDYKeeVJI2AqRfz3zTsFDUtMRPxSA9tfNCqu4Aqk04nGOjV/84C+cdkyXsPC0ZVfjXVfqYm026xBvCeeUUr4XUs/4snX/KNbtJXkvo3sUoZJ5aQ==',key_name='tempest-TestNetworkBasicOps-764715585',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='c69d102fb5504f48809f5fc47f1cb831',ramdisk_id='',reservation_id='r-r33pvc0t',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='9546778e-959c-466e-9bef-81ace5bd1cc5',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-74406332',owner_user_name='tempest-TestNetworkBasicOps-74406332-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-09T09:56:42Z,user_data=None,user_id='2351e05157514d1995a1ea4151d12fee',uuid=e9c8bd36-8fc8-4412-a6ea-b5b3e5b79f01,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "8d2d29b3-6509-4de5-a0a2-f7ef942f9a7f", "address": "fa:16:3e:4d:30:c8", "network": {"id": "26c660ed-37e9-4f44-b603-3901342edf9b", "bridge": "br-int", "label": "tempest-network-smoke--903239616", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": 
false, "tenant_id": "c69d102fb5504f48809f5fc47f1cb831", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8d2d29b3-65", "ovs_interfaceid": "8d2d29b3-6509-4de5-a0a2-f7ef942f9a7f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  9 09:56:46 compute-1 nova_compute[162974]: 2025-10-09 09:56:46.091 2 DEBUG nova.network.os_vif_util [None req-2660e3a3-377b-48ae-9b5c-74b1b3359a71 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Converting VIF {"id": "8d2d29b3-6509-4de5-a0a2-f7ef942f9a7f", "address": "fa:16:3e:4d:30:c8", "network": {"id": "26c660ed-37e9-4f44-b603-3901342edf9b", "bridge": "br-int", "label": "tempest-network-smoke--903239616", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c69d102fb5504f48809f5fc47f1cb831", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8d2d29b3-65", "ovs_interfaceid": "8d2d29b3-6509-4de5-a0a2-f7ef942f9a7f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  9 09:56:46 compute-1 nova_compute[162974]: 2025-10-09 09:56:46.092 2 DEBUG nova.network.os_vif_util [None req-2660e3a3-377b-48ae-9b5c-74b1b3359a71 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:4d:30:c8,bridge_name='br-int',has_traffic_filtering=True,id=8d2d29b3-6509-4de5-a0a2-f7ef942f9a7f,network=Network(26c660ed-37e9-4f44-b603-3901342edf9b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8d2d29b3-65') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  9 09:56:46 compute-1 nova_compute[162974]: 2025-10-09 09:56:46.092 2 DEBUG nova.objects.instance [None req-2660e3a3-377b-48ae-9b5c-74b1b3359a71 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Lazy-loading 'pci_devices' on Instance uuid e9c8bd36-8fc8-4412-a6ea-b5b3e5b79f01 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  9 09:56:46 compute-1 nova_compute[162974]: 2025-10-09 09:56:46.107 2 DEBUG nova.virt.libvirt.driver [None req-2660e3a3-377b-48ae-9b5c-74b1b3359a71 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: e9c8bd36-8fc8-4412-a6ea-b5b3e5b79f01] End _get_guest_xml xml=<domain type="kvm">
Oct  9 09:56:46 compute-1 nova_compute[162974]:  <uuid>e9c8bd36-8fc8-4412-a6ea-b5b3e5b79f01</uuid>
Oct  9 09:56:46 compute-1 nova_compute[162974]:  <name>instance-00000003</name>
Oct  9 09:56:46 compute-1 nova_compute[162974]:  <memory>131072</memory>
Oct  9 09:56:46 compute-1 nova_compute[162974]:  <vcpu>1</vcpu>
Oct  9 09:56:46 compute-1 nova_compute[162974]:  <metadata>
Oct  9 09:56:46 compute-1 nova_compute[162974]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  9 09:56:46 compute-1 nova_compute[162974]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  9 09:56:46 compute-1 nova_compute[162974]:      <nova:name>tempest-TestNetworkBasicOps-server-61543066</nova:name>
Oct  9 09:56:46 compute-1 nova_compute[162974]:      <nova:creationTime>2025-10-09 09:56:45</nova:creationTime>
Oct  9 09:56:46 compute-1 nova_compute[162974]:      <nova:flavor name="m1.nano">
Oct  9 09:56:46 compute-1 nova_compute[162974]:        <nova:memory>128</nova:memory>
Oct  9 09:56:46 compute-1 nova_compute[162974]:        <nova:disk>1</nova:disk>
Oct  9 09:56:46 compute-1 nova_compute[162974]:        <nova:swap>0</nova:swap>
Oct  9 09:56:46 compute-1 nova_compute[162974]:        <nova:ephemeral>0</nova:ephemeral>
Oct  9 09:56:46 compute-1 nova_compute[162974]:        <nova:vcpus>1</nova:vcpus>
Oct  9 09:56:46 compute-1 nova_compute[162974]:      </nova:flavor>
Oct  9 09:56:46 compute-1 nova_compute[162974]:      <nova:owner>
Oct  9 09:56:46 compute-1 nova_compute[162974]:        <nova:user uuid="2351e05157514d1995a1ea4151d12fee">tempest-TestNetworkBasicOps-74406332-project-member</nova:user>
Oct  9 09:56:46 compute-1 nova_compute[162974]:        <nova:project uuid="c69d102fb5504f48809f5fc47f1cb831">tempest-TestNetworkBasicOps-74406332</nova:project>
Oct  9 09:56:46 compute-1 nova_compute[162974]:      </nova:owner>
Oct  9 09:56:46 compute-1 nova_compute[162974]:      <nova:root type="image" uuid="9546778e-959c-466e-9bef-81ace5bd1cc5"/>
Oct  9 09:56:46 compute-1 nova_compute[162974]:      <nova:ports>
Oct  9 09:56:46 compute-1 nova_compute[162974]:        <nova:port uuid="8d2d29b3-6509-4de5-a0a2-f7ef942f9a7f">
Oct  9 09:56:46 compute-1 nova_compute[162974]:          <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Oct  9 09:56:46 compute-1 nova_compute[162974]:        </nova:port>
Oct  9 09:56:46 compute-1 nova_compute[162974]:      </nova:ports>
Oct  9 09:56:46 compute-1 nova_compute[162974]:    </nova:instance>
Oct  9 09:56:46 compute-1 nova_compute[162974]:  </metadata>
Oct  9 09:56:46 compute-1 nova_compute[162974]:  <sysinfo type="smbios">
Oct  9 09:56:46 compute-1 nova_compute[162974]:    <system>
Oct  9 09:56:46 compute-1 nova_compute[162974]:      <entry name="manufacturer">RDO</entry>
Oct  9 09:56:46 compute-1 nova_compute[162974]:      <entry name="product">OpenStack Compute</entry>
Oct  9 09:56:46 compute-1 nova_compute[162974]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  9 09:56:46 compute-1 nova_compute[162974]:      <entry name="serial">e9c8bd36-8fc8-4412-a6ea-b5b3e5b79f01</entry>
Oct  9 09:56:46 compute-1 nova_compute[162974]:      <entry name="uuid">e9c8bd36-8fc8-4412-a6ea-b5b3e5b79f01</entry>
Oct  9 09:56:46 compute-1 nova_compute[162974]:      <entry name="family">Virtual Machine</entry>
Oct  9 09:56:46 compute-1 nova_compute[162974]:    </system>
Oct  9 09:56:46 compute-1 nova_compute[162974]:  </sysinfo>
Oct  9 09:56:46 compute-1 nova_compute[162974]:  <os>
Oct  9 09:56:46 compute-1 nova_compute[162974]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  9 09:56:46 compute-1 nova_compute[162974]:    <boot dev="hd"/>
Oct  9 09:56:46 compute-1 nova_compute[162974]:    <smbios mode="sysinfo"/>
Oct  9 09:56:46 compute-1 nova_compute[162974]:  </os>
Oct  9 09:56:46 compute-1 nova_compute[162974]:  <features>
Oct  9 09:56:46 compute-1 nova_compute[162974]:    <acpi/>
Oct  9 09:56:46 compute-1 nova_compute[162974]:    <apic/>
Oct  9 09:56:46 compute-1 nova_compute[162974]:    <vmcoreinfo/>
Oct  9 09:56:46 compute-1 nova_compute[162974]:  </features>
Oct  9 09:56:46 compute-1 nova_compute[162974]:  <clock offset="utc">
Oct  9 09:56:46 compute-1 nova_compute[162974]:    <timer name="pit" tickpolicy="delay"/>
Oct  9 09:56:46 compute-1 nova_compute[162974]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  9 09:56:46 compute-1 nova_compute[162974]:    <timer name="hpet" present="no"/>
Oct  9 09:56:46 compute-1 nova_compute[162974]:  </clock>
Oct  9 09:56:46 compute-1 nova_compute[162974]:  <cpu mode="host-model" match="exact">
Oct  9 09:56:46 compute-1 nova_compute[162974]:    <topology sockets="1" cores="1" threads="1"/>
Oct  9 09:56:46 compute-1 nova_compute[162974]:  </cpu>
Oct  9 09:56:46 compute-1 nova_compute[162974]:  <devices>
Oct  9 09:56:46 compute-1 nova_compute[162974]:    <disk type="network" device="disk">
Oct  9 09:56:46 compute-1 nova_compute[162974]:      <driver type="raw" cache="none"/>
Oct  9 09:56:46 compute-1 nova_compute[162974]:      <source protocol="rbd" name="vms/e9c8bd36-8fc8-4412-a6ea-b5b3e5b79f01_disk">
Oct  9 09:56:46 compute-1 nova_compute[162974]:        <host name="192.168.122.100" port="6789"/>
Oct  9 09:56:46 compute-1 nova_compute[162974]:        <host name="192.168.122.102" port="6789"/>
Oct  9 09:56:46 compute-1 nova_compute[162974]:        <host name="192.168.122.101" port="6789"/>
Oct  9 09:56:46 compute-1 nova_compute[162974]:      </source>
Oct  9 09:56:46 compute-1 nova_compute[162974]:      <auth username="openstack">
Oct  9 09:56:46 compute-1 nova_compute[162974]:        <secret type="ceph" uuid="286f8bf0-da72-5823-9a4e-ac4457d9e609"/>
Oct  9 09:56:46 compute-1 nova_compute[162974]:      </auth>
Oct  9 09:56:46 compute-1 nova_compute[162974]:      <target dev="vda" bus="virtio"/>
Oct  9 09:56:46 compute-1 nova_compute[162974]:    </disk>
Oct  9 09:56:46 compute-1 nova_compute[162974]:    <disk type="network" device="cdrom">
Oct  9 09:56:46 compute-1 nova_compute[162974]:      <driver type="raw" cache="none"/>
Oct  9 09:56:46 compute-1 nova_compute[162974]:      <source protocol="rbd" name="vms/e9c8bd36-8fc8-4412-a6ea-b5b3e5b79f01_disk.config">
Oct  9 09:56:46 compute-1 nova_compute[162974]:        <host name="192.168.122.100" port="6789"/>
Oct  9 09:56:46 compute-1 nova_compute[162974]:        <host name="192.168.122.102" port="6789"/>
Oct  9 09:56:46 compute-1 nova_compute[162974]:        <host name="192.168.122.101" port="6789"/>
Oct  9 09:56:46 compute-1 nova_compute[162974]:      </source>
Oct  9 09:56:46 compute-1 nova_compute[162974]:      <auth username="openstack">
Oct  9 09:56:46 compute-1 nova_compute[162974]:        <secret type="ceph" uuid="286f8bf0-da72-5823-9a4e-ac4457d9e609"/>
Oct  9 09:56:46 compute-1 nova_compute[162974]:      </auth>
Oct  9 09:56:46 compute-1 nova_compute[162974]:      <target dev="sda" bus="sata"/>
Oct  9 09:56:46 compute-1 nova_compute[162974]:    </disk>
Oct  9 09:56:46 compute-1 nova_compute[162974]:    <interface type="ethernet">
Oct  9 09:56:46 compute-1 nova_compute[162974]:      <mac address="fa:16:3e:4d:30:c8"/>
Oct  9 09:56:46 compute-1 nova_compute[162974]:      <model type="virtio"/>
Oct  9 09:56:46 compute-1 nova_compute[162974]:      <driver name="vhost" rx_queue_size="512"/>
Oct  9 09:56:46 compute-1 nova_compute[162974]:      <mtu size="1442"/>
Oct  9 09:56:46 compute-1 nova_compute[162974]:      <target dev="tap8d2d29b3-65"/>
Oct  9 09:56:46 compute-1 nova_compute[162974]:    </interface>
Oct  9 09:56:46 compute-1 nova_compute[162974]:    <serial type="pty">
Oct  9 09:56:46 compute-1 nova_compute[162974]:      <log file="/var/lib/nova/instances/e9c8bd36-8fc8-4412-a6ea-b5b3e5b79f01/console.log" append="off"/>
Oct  9 09:56:46 compute-1 nova_compute[162974]:    </serial>
Oct  9 09:56:46 compute-1 nova_compute[162974]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  9 09:56:46 compute-1 nova_compute[162974]:    <video>
Oct  9 09:56:46 compute-1 nova_compute[162974]:      <model type="virtio"/>
Oct  9 09:56:46 compute-1 nova_compute[162974]:    </video>
Oct  9 09:56:46 compute-1 nova_compute[162974]:    <input type="tablet" bus="usb"/>
Oct  9 09:56:46 compute-1 nova_compute[162974]:    <rng model="virtio">
Oct  9 09:56:46 compute-1 nova_compute[162974]:      <backend model="random">/dev/urandom</backend>
Oct  9 09:56:46 compute-1 nova_compute[162974]:    </rng>
Oct  9 09:56:46 compute-1 nova_compute[162974]:    <controller type="pci" model="pcie-root"/>
Oct  9 09:56:46 compute-1 nova_compute[162974]:    <controller type="pci" model="pcie-root-port"/>
Oct  9 09:56:46 compute-1 nova_compute[162974]:    <controller type="pci" model="pcie-root-port"/>
Oct  9 09:56:46 compute-1 nova_compute[162974]:    <controller type="pci" model="pcie-root-port"/>
Oct  9 09:56:46 compute-1 nova_compute[162974]:    <controller type="pci" model="pcie-root-port"/>
Oct  9 09:56:46 compute-1 nova_compute[162974]:    <controller type="pci" model="pcie-root-port"/>
Oct  9 09:56:46 compute-1 nova_compute[162974]:    <controller type="pci" model="pcie-root-port"/>
Oct  9 09:56:46 compute-1 nova_compute[162974]:    <controller type="pci" model="pcie-root-port"/>
Oct  9 09:56:46 compute-1 nova_compute[162974]:    <controller type="pci" model="pcie-root-port"/>
Oct  9 09:56:46 compute-1 nova_compute[162974]:    <controller type="pci" model="pcie-root-port"/>
Oct  9 09:56:46 compute-1 nova_compute[162974]:    <controller type="pci" model="pcie-root-port"/>
Oct  9 09:56:46 compute-1 nova_compute[162974]:    <controller type="pci" model="pcie-root-port"/>
Oct  9 09:56:46 compute-1 nova_compute[162974]:    <controller type="pci" model="pcie-root-port"/>
Oct  9 09:56:46 compute-1 nova_compute[162974]:    <controller type="pci" model="pcie-root-port"/>
Oct  9 09:56:46 compute-1 nova_compute[162974]:    <controller type="pci" model="pcie-root-port"/>
Oct  9 09:56:46 compute-1 nova_compute[162974]:    <controller type="pci" model="pcie-root-port"/>
Oct  9 09:56:46 compute-1 nova_compute[162974]:    <controller type="pci" model="pcie-root-port"/>
Oct  9 09:56:46 compute-1 nova_compute[162974]:    <controller type="pci" model="pcie-root-port"/>
Oct  9 09:56:46 compute-1 nova_compute[162974]:    <controller type="pci" model="pcie-root-port"/>
Oct  9 09:56:46 compute-1 nova_compute[162974]:    <controller type="pci" model="pcie-root-port"/>
Oct  9 09:56:46 compute-1 nova_compute[162974]:    <controller type="pci" model="pcie-root-port"/>
Oct  9 09:56:46 compute-1 nova_compute[162974]:    <controller type="pci" model="pcie-root-port"/>
Oct  9 09:56:46 compute-1 nova_compute[162974]:    <controller type="pci" model="pcie-root-port"/>
Oct  9 09:56:46 compute-1 nova_compute[162974]:    <controller type="pci" model="pcie-root-port"/>
Oct  9 09:56:46 compute-1 nova_compute[162974]:    <controller type="pci" model="pcie-root-port"/>
Oct  9 09:56:46 compute-1 nova_compute[162974]:    <controller type="usb" index="0"/>
Oct  9 09:56:46 compute-1 nova_compute[162974]:    <memballoon model="virtio">
Oct  9 09:56:46 compute-1 nova_compute[162974]:      <stats period="10"/>
Oct  9 09:56:46 compute-1 nova_compute[162974]:    </memballoon>
Oct  9 09:56:46 compute-1 nova_compute[162974]:  </devices>
Oct  9 09:56:46 compute-1 nova_compute[162974]: </domain>
Oct  9 09:56:46 compute-1 nova_compute[162974]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  9 09:56:46 compute-1 nova_compute[162974]: 2025-10-09 09:56:46.108 2 DEBUG nova.compute.manager [None req-2660e3a3-377b-48ae-9b5c-74b1b3359a71 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: e9c8bd36-8fc8-4412-a6ea-b5b3e5b79f01] Preparing to wait for external event network-vif-plugged-8d2d29b3-6509-4de5-a0a2-f7ef942f9a7f prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct  9 09:56:46 compute-1 nova_compute[162974]: 2025-10-09 09:56:46.108 2 DEBUG oslo_concurrency.lockutils [None req-2660e3a3-377b-48ae-9b5c-74b1b3359a71 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Acquiring lock "e9c8bd36-8fc8-4412-a6ea-b5b3e5b79f01-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  9 09:56:46 compute-1 nova_compute[162974]: 2025-10-09 09:56:46.109 2 DEBUG oslo_concurrency.lockutils [None req-2660e3a3-377b-48ae-9b5c-74b1b3359a71 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Lock "e9c8bd36-8fc8-4412-a6ea-b5b3e5b79f01-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  9 09:56:46 compute-1 nova_compute[162974]: 2025-10-09 09:56:46.109 2 DEBUG oslo_concurrency.lockutils [None req-2660e3a3-377b-48ae-9b5c-74b1b3359a71 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Lock "e9c8bd36-8fc8-4412-a6ea-b5b3e5b79f01-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  9 09:56:46 compute-1 nova_compute[162974]: 2025-10-09 09:56:46.109 2 DEBUG nova.virt.libvirt.vif [None req-2660e3a3-377b-48ae-9b5c-74b1b3359a71 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-09T09:56:40Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-61543066',display_name='tempest-TestNetworkBasicOps-server-61543066',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-61543066',id=3,image_ref='9546778e-959c-466e-9bef-81ace5bd1cc5',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBOnE/dh71/I//FMlnppXDYKeeVJI2AqRfz3zTsFDUtMRPxSA9tfNCqu4Aqk04nGOjV/84C+cdkyXsPC0ZVfjXVfqYm026xBvCeeUUr4XUs/4snX/KNbtJXkvo3sUoZJ5aQ==',key_name='tempest-TestNetworkBasicOps-764715585',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='c69d102fb5504f48809f5fc47f1cb831',ramdisk_id='',reservation_id='r-r33pvc0t',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='9546778e-959c-466e-9bef-81ace5bd1cc5',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-74406332',owner_user_name='tempest-TestNetworkBasicOps-74406332-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-09T09:56:42Z,user_data=None,user_id='2351e05157514d1995a1ea4151d12fee',uuid=e9c8bd36-8fc8-4412-a6ea-b5b3e5b79f01,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "8d2d29b3-6509-4de5-a0a2-f7ef942f9a7f", "address": "fa:16:3e:4d:30:c8", "network": {"id": "26c660ed-37e9-4f44-b603-3901342edf9b", "bridge": "br-int", "label": "tempest-network-smoke--903239616", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "c69d102fb5504f48809f5fc47f1cb831", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8d2d29b3-65", "ovs_interfaceid": "8d2d29b3-6509-4de5-a0a2-f7ef942f9a7f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  9 09:56:46 compute-1 nova_compute[162974]: 2025-10-09 09:56:46.109 2 DEBUG nova.network.os_vif_util [None req-2660e3a3-377b-48ae-9b5c-74b1b3359a71 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Converting VIF {"id": "8d2d29b3-6509-4de5-a0a2-f7ef942f9a7f", "address": "fa:16:3e:4d:30:c8", "network": {"id": "26c660ed-37e9-4f44-b603-3901342edf9b", "bridge": "br-int", "label": "tempest-network-smoke--903239616", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c69d102fb5504f48809f5fc47f1cb831", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8d2d29b3-65", "ovs_interfaceid": "8d2d29b3-6509-4de5-a0a2-f7ef942f9a7f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  9 09:56:46 compute-1 nova_compute[162974]: 2025-10-09 09:56:46.110 2 DEBUG nova.network.os_vif_util [None req-2660e3a3-377b-48ae-9b5c-74b1b3359a71 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:4d:30:c8,bridge_name='br-int',has_traffic_filtering=True,id=8d2d29b3-6509-4de5-a0a2-f7ef942f9a7f,network=Network(26c660ed-37e9-4f44-b603-3901342edf9b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8d2d29b3-65') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  9 09:56:46 compute-1 nova_compute[162974]: 2025-10-09 09:56:46.110 2 DEBUG os_vif [None req-2660e3a3-377b-48ae-9b5c-74b1b3359a71 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:4d:30:c8,bridge_name='br-int',has_traffic_filtering=True,id=8d2d29b3-6509-4de5-a0a2-f7ef942f9a7f,network=Network(26c660ed-37e9-4f44-b603-3901342edf9b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8d2d29b3-65') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  9 09:56:46 compute-1 nova_compute[162974]: 2025-10-09 09:56:46.111 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 09:56:46 compute-1 nova_compute[162974]: 2025-10-09 09:56:46.111 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  9 09:56:46 compute-1 nova_compute[162974]: 2025-10-09 09:56:46.111 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  9 09:56:46 compute-1 nova_compute[162974]: 2025-10-09 09:56:46.114 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 09:56:46 compute-1 nova_compute[162974]: 2025-10-09 09:56:46.114 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap8d2d29b3-65, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  9 09:56:46 compute-1 nova_compute[162974]: 2025-10-09 09:56:46.114 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap8d2d29b3-65, col_values=(('external_ids', {'iface-id': '8d2d29b3-6509-4de5-a0a2-f7ef942f9a7f', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:4d:30:c8', 'vm-uuid': 'e9c8bd36-8fc8-4412-a6ea-b5b3e5b79f01'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  9 09:56:46 compute-1 NetworkManager[982]: <info>  [1760003806.1165] manager: (tap8d2d29b3-65): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/33)
Oct  9 09:56:46 compute-1 nova_compute[162974]: 2025-10-09 09:56:46.121 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 09:56:46 compute-1 nova_compute[162974]: 2025-10-09 09:56:46.122 2 INFO os_vif [None req-2660e3a3-377b-48ae-9b5c-74b1b3359a71 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:4d:30:c8,bridge_name='br-int',has_traffic_filtering=True,id=8d2d29b3-6509-4de5-a0a2-f7ef942f9a7f,network=Network(26c660ed-37e9-4f44-b603-3901342edf9b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8d2d29b3-65')#033[00m
Oct  9 09:56:46 compute-1 nova_compute[162974]: 2025-10-09 09:56:46.155 2 DEBUG nova.virt.libvirt.driver [None req-2660e3a3-377b-48ae-9b5c-74b1b3359a71 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  9 09:56:46 compute-1 nova_compute[162974]: 2025-10-09 09:56:46.156 2 DEBUG nova.virt.libvirt.driver [None req-2660e3a3-377b-48ae-9b5c-74b1b3359a71 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  9 09:56:46 compute-1 nova_compute[162974]: 2025-10-09 09:56:46.156 2 DEBUG nova.virt.libvirt.driver [None req-2660e3a3-377b-48ae-9b5c-74b1b3359a71 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] No VIF found with MAC fa:16:3e:4d:30:c8, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  9 09:56:46 compute-1 nova_compute[162974]: 2025-10-09 09:56:46.157 2 INFO nova.virt.libvirt.driver [None req-2660e3a3-377b-48ae-9b5c-74b1b3359a71 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: e9c8bd36-8fc8-4412-a6ea-b5b3e5b79f01] Using config drive#033[00m
Oct  9 09:56:46 compute-1 nova_compute[162974]: 2025-10-09 09:56:46.178 2 DEBUG nova.storage.rbd_utils [None req-2660e3a3-377b-48ae-9b5c-74b1b3359a71 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] rbd image e9c8bd36-8fc8-4412-a6ea-b5b3e5b79f01_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  9 09:56:46 compute-1 nova_compute[162974]: 2025-10-09 09:56:46.437 2 INFO nova.virt.libvirt.driver [None req-2660e3a3-377b-48ae-9b5c-74b1b3359a71 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: e9c8bd36-8fc8-4412-a6ea-b5b3e5b79f01] Creating config drive at /var/lib/nova/instances/e9c8bd36-8fc8-4412-a6ea-b5b3e5b79f01/disk.config#033[00m
Oct  9 09:56:46 compute-1 nova_compute[162974]: 2025-10-09 09:56:46.442 2 DEBUG oslo_concurrency.processutils [None req-2660e3a3-377b-48ae-9b5c-74b1b3359a71 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/e9c8bd36-8fc8-4412-a6ea-b5b3e5b79f01/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpo_bzdxk5 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  9 09:56:46 compute-1 nova_compute[162974]: 2025-10-09 09:56:46.562 2 DEBUG oslo_concurrency.processutils [None req-2660e3a3-377b-48ae-9b5c-74b1b3359a71 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/e9c8bd36-8fc8-4412-a6ea-b5b3e5b79f01/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpo_bzdxk5" returned: 0 in 0.120s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  9 09:56:46 compute-1 nova_compute[162974]: 2025-10-09 09:56:46.588 2 DEBUG nova.storage.rbd_utils [None req-2660e3a3-377b-48ae-9b5c-74b1b3359a71 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] rbd image e9c8bd36-8fc8-4412-a6ea-b5b3e5b79f01_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  9 09:56:46 compute-1 nova_compute[162974]: 2025-10-09 09:56:46.592 2 DEBUG oslo_concurrency.processutils [None req-2660e3a3-377b-48ae-9b5c-74b1b3359a71 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/e9c8bd36-8fc8-4412-a6ea-b5b3e5b79f01/disk.config e9c8bd36-8fc8-4412-a6ea-b5b3e5b79f01_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  9 09:56:46 compute-1 nova_compute[162974]: 2025-10-09 09:56:46.687 2 DEBUG oslo_concurrency.processutils [None req-2660e3a3-377b-48ae-9b5c-74b1b3359a71 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/e9c8bd36-8fc8-4412-a6ea-b5b3e5b79f01/disk.config e9c8bd36-8fc8-4412-a6ea-b5b3e5b79f01_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.095s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  9 09:56:46 compute-1 nova_compute[162974]: 2025-10-09 09:56:46.688 2 INFO nova.virt.libvirt.driver [None req-2660e3a3-377b-48ae-9b5c-74b1b3359a71 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: e9c8bd36-8fc8-4412-a6ea-b5b3e5b79f01] Deleting local config drive /var/lib/nova/instances/e9c8bd36-8fc8-4412-a6ea-b5b3e5b79f01/disk.config because it was imported into RBD.#033[00m
Oct  9 09:56:46 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:56:46 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:56:46 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:56:46.722 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:56:46 compute-1 NetworkManager[982]: <info>  [1760003806.7354] manager: (tap8d2d29b3-65): new Tun device (/org/freedesktop/NetworkManager/Devices/34)
Oct  9 09:56:46 compute-1 kernel: tap8d2d29b3-65: entered promiscuous mode
Oct  9 09:56:46 compute-1 nova_compute[162974]: 2025-10-09 09:56:46.740 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 09:56:46 compute-1 ovn_controller[62080]: 2025-10-09T09:56:46Z|00036|binding|INFO|Claiming lport 8d2d29b3-6509-4de5-a0a2-f7ef942f9a7f for this chassis.
Oct  9 09:56:46 compute-1 ovn_controller[62080]: 2025-10-09T09:56:46Z|00037|binding|INFO|8d2d29b3-6509-4de5-a0a2-f7ef942f9a7f: Claiming fa:16:3e:4d:30:c8 10.100.0.7
Oct  9 09:56:46 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:56:46.749 71059 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:4d:30:c8 10.100.0.7'], port_security=['fa:16:3e:4d:30:c8 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': 'e9c8bd36-8fc8-4412-a6ea-b5b3e5b79f01', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-26c660ed-37e9-4f44-b603-3901342edf9b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c69d102fb5504f48809f5fc47f1cb831', 'neutron:revision_number': '2', 'neutron:security_group_ids': '2fd66aef-c4b5-4f4c-ae18-6ccc210d224e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=4edfdfe9-a5ca-4224-9930-4324a48b984f, chassis=[<ovs.db.idl.Row object at 0x7fcc797b4850>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcc797b4850>], logical_port=8d2d29b3-6509-4de5-a0a2-f7ef942f9a7f) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  9 09:56:46 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:56:46.750 71059 INFO neutron.agent.ovn.metadata.agent [-] Port 8d2d29b3-6509-4de5-a0a2-f7ef942f9a7f in datapath 26c660ed-37e9-4f44-b603-3901342edf9b bound to our chassis#033[00m
Oct  9 09:56:46 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:56:46.751 71059 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 26c660ed-37e9-4f44-b603-3901342edf9b#033[00m
Oct  9 09:56:46 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:56:46.762 165637 DEBUG oslo.privsep.daemon [-] privsep: reply[9a597163-bdc2-47b9-84cc-ad34ebfff3a5]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  9 09:56:46 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:56:46.762 71059 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap26c660ed-31 in ovnmeta-26c660ed-37e9-4f44-b603-3901342edf9b namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct  9 09:56:46 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:56:46.765 165637 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap26c660ed-30 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct  9 09:56:46 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:56:46.765 165637 DEBUG oslo.privsep.daemon [-] privsep: reply[6e3a2903-87b2-4992-b43a-f211fc2e3b04]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  9 09:56:46 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:56:46.766 165637 DEBUG oslo.privsep.daemon [-] privsep: reply[621711f3-4b54-42ec-8f88-b8d4d2a30581]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  9 09:56:46 compute-1 systemd-machined[120683]: New machine qemu-2-instance-00000003.
Oct  9 09:56:46 compute-1 systemd-udevd[166582]: Network interface NamePolicy= disabled on kernel command line.
Oct  9 09:56:46 compute-1 systemd[1]: Started Virtual Machine qemu-2-instance-00000003.
Oct  9 09:56:46 compute-1 NetworkManager[982]: <info>  [1760003806.7863] device (tap8d2d29b3-65): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  9 09:56:46 compute-1 NetworkManager[982]: <info>  [1760003806.7868] device (tap8d2d29b3-65): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  9 09:56:46 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:56:46.783 71273 DEBUG oslo.privsep.daemon [-] privsep: reply[7602c751-40ff-4982-8623-d7954a7be632]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  9 09:56:46 compute-1 nova_compute[162974]: 2025-10-09 09:56:46.807 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 09:56:46 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:56:46.809 165637 DEBUG oslo.privsep.daemon [-] privsep: reply[d2c1d771-9c68-4e92-9de9-aa0ec93668d8]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  9 09:56:46 compute-1 ovn_controller[62080]: 2025-10-09T09:56:46Z|00038|binding|INFO|Setting lport 8d2d29b3-6509-4de5-a0a2-f7ef942f9a7f ovn-installed in OVS
Oct  9 09:56:46 compute-1 ovn_controller[62080]: 2025-10-09T09:56:46Z|00039|binding|INFO|Setting lport 8d2d29b3-6509-4de5-a0a2-f7ef942f9a7f up in Southbound
Oct  9 09:56:46 compute-1 nova_compute[162974]: 2025-10-09 09:56:46.816 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 09:56:46 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:56:46.849 165694 DEBUG oslo.privsep.daemon [-] privsep: reply[a112979e-7527-4f78-8d88-514bee31d9a4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  9 09:56:46 compute-1 NetworkManager[982]: <info>  [1760003806.8541] manager: (tap26c660ed-30): new Veth device (/org/freedesktop/NetworkManager/Devices/35)
Oct  9 09:56:46 compute-1 systemd-udevd[166585]: Network interface NamePolicy= disabled on kernel command line.
Oct  9 09:56:46 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:56:46.853 165637 DEBUG oslo.privsep.daemon [-] privsep: reply[0b627ffc-9055-4dd7-b212-a89a447ee896]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  9 09:56:46 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:56:46.886 165694 DEBUG oslo.privsep.daemon [-] privsep: reply[bbb7f80d-be8a-4ffe-828d-40ba9afd15b1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  9 09:56:46 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:56:46.888 165694 DEBUG oslo.privsep.daemon [-] privsep: reply[46cdf40f-5898-4915-ae4c-54a6cc5c4fde]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  9 09:56:46 compute-1 NetworkManager[982]: <info>  [1760003806.9110] device (tap26c660ed-30): carrier: link connected
Oct  9 09:56:46 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:56:46.915 165694 DEBUG oslo.privsep.daemon [-] privsep: reply[b7792502-ab2f-4430-bc96-040a99a0a0c4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  9 09:56:46 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:56:46.929 165637 DEBUG oslo.privsep.daemon [-] privsep: reply[21724580-5228-4ec9-a478-cd2b1ce5d993]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap26c660ed-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 4], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 4], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f8:1c:a4'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 23], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 149048, 'reachable_time': 32133, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 166606, 'error': None, 'target': 'ovnmeta-26c660ed-37e9-4f44-b603-3901342edf9b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  9 09:56:46 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:56:46.953 165637 DEBUG oslo.privsep.daemon [-] privsep: reply[aad70e92-bc5d-434a-a6c8-38d5910e2b86]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fef8:1ca4'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 149048, 'tstamp': 149048}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 166607, 'error': None, 'target': 'ovnmeta-26c660ed-37e9-4f44-b603-3901342edf9b', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  9 09:56:46 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:56:46.977 165637 DEBUG oslo.privsep.daemon [-] privsep: reply[62a532a4-811d-452e-aa95-7e20eee588ff]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap26c660ed-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 4], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 4], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f8:1c:a4'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 23], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 149048, 'reachable_time': 32133, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 166608, 'error': None, 'target': 'ovnmeta-26c660ed-37e9-4f44-b603-3901342edf9b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  9 09:56:47 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:56:47.012 165637 DEBUG oslo.privsep.daemon [-] privsep: reply[df144f35-cebd-4459-8e12-f975393754ef]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  9 09:56:47 compute-1 nova_compute[162974]: 2025-10-09 09:56:47.035 2 DEBUG nova.compute.manager [req-6761047f-1ad9-4d58-b15e-2aa235ed9a3d req-267c63be-1415-417c-a90f-60734ef80d5a b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: e9c8bd36-8fc8-4412-a6ea-b5b3e5b79f01] Received event network-vif-plugged-8d2d29b3-6509-4de5-a0a2-f7ef942f9a7f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  9 09:56:47 compute-1 nova_compute[162974]: 2025-10-09 09:56:47.035 2 DEBUG oslo_concurrency.lockutils [req-6761047f-1ad9-4d58-b15e-2aa235ed9a3d req-267c63be-1415-417c-a90f-60734ef80d5a b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] Acquiring lock "e9c8bd36-8fc8-4412-a6ea-b5b3e5b79f01-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  9 09:56:47 compute-1 nova_compute[162974]: 2025-10-09 09:56:47.035 2 DEBUG oslo_concurrency.lockutils [req-6761047f-1ad9-4d58-b15e-2aa235ed9a3d req-267c63be-1415-417c-a90f-60734ef80d5a b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] Lock "e9c8bd36-8fc8-4412-a6ea-b5b3e5b79f01-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  9 09:56:47 compute-1 nova_compute[162974]: 2025-10-09 09:56:47.035 2 DEBUG oslo_concurrency.lockutils [req-6761047f-1ad9-4d58-b15e-2aa235ed9a3d req-267c63be-1415-417c-a90f-60734ef80d5a b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] Lock "e9c8bd36-8fc8-4412-a6ea-b5b3e5b79f01-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  9 09:56:47 compute-1 nova_compute[162974]: 2025-10-09 09:56:47.036 2 DEBUG nova.compute.manager [req-6761047f-1ad9-4d58-b15e-2aa235ed9a3d req-267c63be-1415-417c-a90f-60734ef80d5a b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: e9c8bd36-8fc8-4412-a6ea-b5b3e5b79f01] Processing event network-vif-plugged-8d2d29b3-6509-4de5-a0a2-f7ef942f9a7f _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct  9 09:56:47 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:56:47.055 165637 DEBUG oslo.privsep.daemon [-] privsep: reply[f2ef2391-7b9a-4a9a-a3a8-19a9df0ebb58]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  9 09:56:47 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:56:47.056 71059 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap26c660ed-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  9 09:56:47 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:56:47.056 71059 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  9 09:56:47 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:56:47.057 71059 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap26c660ed-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  9 09:56:47 compute-1 kernel: tap26c660ed-30: entered promiscuous mode
Oct  9 09:56:47 compute-1 nova_compute[162974]: 2025-10-09 09:56:47.058 2 DEBUG nova.network.neutron [req-f91278ac-f3cb-4c91-bfcc-3e08b837cbf9 req-e8437694-f9bd-44eb-b070-c83785ece4da b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: e9c8bd36-8fc8-4412-a6ea-b5b3e5b79f01] Updated VIF entry in instance network info cache for port 8d2d29b3-6509-4de5-a0a2-f7ef942f9a7f. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  9 09:56:47 compute-1 nova_compute[162974]: 2025-10-09 09:56:47.060 2 DEBUG nova.network.neutron [req-f91278ac-f3cb-4c91-bfcc-3e08b837cbf9 req-e8437694-f9bd-44eb-b070-c83785ece4da b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: e9c8bd36-8fc8-4412-a6ea-b5b3e5b79f01] Updating instance_info_cache with network_info: [{"id": "8d2d29b3-6509-4de5-a0a2-f7ef942f9a7f", "address": "fa:16:3e:4d:30:c8", "network": {"id": "26c660ed-37e9-4f44-b603-3901342edf9b", "bridge": "br-int", "label": "tempest-network-smoke--903239616", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c69d102fb5504f48809f5fc47f1cb831", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8d2d29b3-65", "ovs_interfaceid": "8d2d29b3-6509-4de5-a0a2-f7ef942f9a7f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  9 09:56:47 compute-1 NetworkManager[982]: <info>  [1760003807.0602] manager: (tap26c660ed-30): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/36)
Oct  9 09:56:47 compute-1 nova_compute[162974]: 2025-10-09 09:56:47.061 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 09:56:47 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:56:47.062 71059 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap26c660ed-30, col_values=(('external_ids', {'iface-id': '57354100-1abc-4399-a76b-c42eaec1ad73'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  9 09:56:47 compute-1 ovn_controller[62080]: 2025-10-09T09:56:47Z|00040|binding|INFO|Releasing lport 57354100-1abc-4399-a76b-c42eaec1ad73 from this chassis (sb_readonly=0)
Oct  9 09:56:47 compute-1 nova_compute[162974]: 2025-10-09 09:56:47.065 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 09:56:47 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:56:47.065 71059 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/26c660ed-37e9-4f44-b603-3901342edf9b.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/26c660ed-37e9-4f44-b603-3901342edf9b.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct  9 09:56:47 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:56:47.066 165637 DEBUG oslo.privsep.daemon [-] privsep: reply[6816ea77-7d60-425d-9cde-290f72da4e34]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  9 09:56:47 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:56:47.067 71059 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct  9 09:56:47 compute-1 ovn_metadata_agent[71054]: global
Oct  9 09:56:47 compute-1 ovn_metadata_agent[71054]:    log         /dev/log local0 debug
Oct  9 09:56:47 compute-1 ovn_metadata_agent[71054]:    log-tag     haproxy-metadata-proxy-26c660ed-37e9-4f44-b603-3901342edf9b
Oct  9 09:56:47 compute-1 ovn_metadata_agent[71054]:    user        root
Oct  9 09:56:47 compute-1 ovn_metadata_agent[71054]:    group       root
Oct  9 09:56:47 compute-1 ovn_metadata_agent[71054]:    maxconn     1024
Oct  9 09:56:47 compute-1 ovn_metadata_agent[71054]:    pidfile     /var/lib/neutron/external/pids/26c660ed-37e9-4f44-b603-3901342edf9b.pid.haproxy
Oct  9 09:56:47 compute-1 ovn_metadata_agent[71054]:    daemon
Oct  9 09:56:47 compute-1 ovn_metadata_agent[71054]: 
Oct  9 09:56:47 compute-1 ovn_metadata_agent[71054]: defaults
Oct  9 09:56:47 compute-1 ovn_metadata_agent[71054]:    log global
Oct  9 09:56:47 compute-1 ovn_metadata_agent[71054]:    mode http
Oct  9 09:56:47 compute-1 ovn_metadata_agent[71054]:    option httplog
Oct  9 09:56:47 compute-1 ovn_metadata_agent[71054]:    option dontlognull
Oct  9 09:56:47 compute-1 ovn_metadata_agent[71054]:    option http-server-close
Oct  9 09:56:47 compute-1 ovn_metadata_agent[71054]:    option forwardfor
Oct  9 09:56:47 compute-1 ovn_metadata_agent[71054]:    retries                 3
Oct  9 09:56:47 compute-1 ovn_metadata_agent[71054]:    timeout http-request    30s
Oct  9 09:56:47 compute-1 ovn_metadata_agent[71054]:    timeout connect         30s
Oct  9 09:56:47 compute-1 ovn_metadata_agent[71054]:    timeout client          32s
Oct  9 09:56:47 compute-1 ovn_metadata_agent[71054]:    timeout server          32s
Oct  9 09:56:47 compute-1 ovn_metadata_agent[71054]:    timeout http-keep-alive 30s
Oct  9 09:56:47 compute-1 ovn_metadata_agent[71054]: 
Oct  9 09:56:47 compute-1 ovn_metadata_agent[71054]: 
Oct  9 09:56:47 compute-1 ovn_metadata_agent[71054]: listen listener
Oct  9 09:56:47 compute-1 ovn_metadata_agent[71054]:    bind 169.254.169.254:80
Oct  9 09:56:47 compute-1 ovn_metadata_agent[71054]:    server metadata /var/lib/neutron/metadata_proxy
Oct  9 09:56:47 compute-1 ovn_metadata_agent[71054]:    http-request add-header X-OVN-Network-ID 26c660ed-37e9-4f44-b603-3901342edf9b
Oct  9 09:56:47 compute-1 ovn_metadata_agent[71054]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct  9 09:56:47 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:56:47.069 71059 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-26c660ed-37e9-4f44-b603-3901342edf9b', 'env', 'PROCESS_TAG=haproxy-26c660ed-37e9-4f44-b603-3901342edf9b', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/26c660ed-37e9-4f44-b603-3901342edf9b.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct  9 09:56:47 compute-1 nova_compute[162974]: 2025-10-09 09:56:47.072 2 DEBUG oslo_concurrency.lockutils [req-f91278ac-f3cb-4c91-bfcc-3e08b837cbf9 req-e8437694-f9bd-44eb-b070-c83785ece4da b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] Releasing lock "refresh_cache-e9c8bd36-8fc8-4412-a6ea-b5b3e5b79f01" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  9 09:56:47 compute-1 nova_compute[162974]: 2025-10-09 09:56:47.078 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 09:56:47 compute-1 nova_compute[162974]: 2025-10-09 09:56:47.110 2 DEBUG oslo_service.periodic_task [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  9 09:56:47 compute-1 podman[166679]: 2025-10-09 09:56:47.449828363 +0000 UTC m=+0.036846626 container create e8b9e9b6e4763889dd739382d351703a3bed85c9dfd5a52549047a322729930c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-26c660ed-37e9-4f44-b603-3901342edf9b, org.label-schema.build-date=20251001, tcib_managed=true, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.license=GPLv2)
Oct  9 09:56:47 compute-1 systemd[1]: Started libpod-conmon-e8b9e9b6e4763889dd739382d351703a3bed85c9dfd5a52549047a322729930c.scope.
Oct  9 09:56:47 compute-1 systemd[1]: Started libcrun container.
Oct  9 09:56:47 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/08a45d62be4bd9ce9df4641fb90075d2091d46c2e93ad8f4010bfee1112d2e50/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct  9 09:56:47 compute-1 nova_compute[162974]: 2025-10-09 09:56:47.518 2 DEBUG nova.compute.manager [None req-2660e3a3-377b-48ae-9b5c-74b1b3359a71 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: e9c8bd36-8fc8-4412-a6ea-b5b3e5b79f01] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct  9 09:56:47 compute-1 nova_compute[162974]: 2025-10-09 09:56:47.520 2 DEBUG nova.virt.driver [None req-433ad811-a058-4508-a429-c5f40f506b5b - - - - - -] Emitting event <LifecycleEvent: 1760003807.517912, e9c8bd36-8fc8-4412-a6ea-b5b3e5b79f01 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  9 09:56:47 compute-1 nova_compute[162974]: 2025-10-09 09:56:47.521 2 INFO nova.compute.manager [None req-433ad811-a058-4508-a429-c5f40f506b5b - - - - - -] [instance: e9c8bd36-8fc8-4412-a6ea-b5b3e5b79f01] VM Started (Lifecycle Event)#033[00m
Oct  9 09:56:47 compute-1 nova_compute[162974]: 2025-10-09 09:56:47.524 2 DEBUG nova.virt.libvirt.driver [None req-2660e3a3-377b-48ae-9b5c-74b1b3359a71 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: e9c8bd36-8fc8-4412-a6ea-b5b3e5b79f01] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct  9 09:56:47 compute-1 podman[166679]: 2025-10-09 09:56:47.526267311 +0000 UTC m=+0.113285584 container init e8b9e9b6e4763889dd739382d351703a3bed85c9dfd5a52549047a322729930c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-26c660ed-37e9-4f44-b603-3901342edf9b, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct  9 09:56:47 compute-1 nova_compute[162974]: 2025-10-09 09:56:47.530 2 INFO nova.virt.libvirt.driver [-] [instance: e9c8bd36-8fc8-4412-a6ea-b5b3e5b79f01] Instance spawned successfully.#033[00m
Oct  9 09:56:47 compute-1 podman[166679]: 2025-10-09 09:56:47.434183329 +0000 UTC m=+0.021201602 image pull 26280da617d52ac64ac1fa9a18a315d65ac237c1373028f8064008a821dbfd8d quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7
Oct  9 09:56:47 compute-1 nova_compute[162974]: 2025-10-09 09:56:47.531 2 DEBUG nova.virt.libvirt.driver [None req-2660e3a3-377b-48ae-9b5c-74b1b3359a71 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: e9c8bd36-8fc8-4412-a6ea-b5b3e5b79f01] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct  9 09:56:47 compute-1 podman[166679]: 2025-10-09 09:56:47.533790568 +0000 UTC m=+0.120808821 container start e8b9e9b6e4763889dd739382d351703a3bed85c9dfd5a52549047a322729930c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-26c660ed-37e9-4f44-b603-3901342edf9b, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, tcib_managed=true)
Oct  9 09:56:47 compute-1 nova_compute[162974]: 2025-10-09 09:56:47.536 2 DEBUG nova.compute.manager [None req-433ad811-a058-4508-a429-c5f40f506b5b - - - - - -] [instance: e9c8bd36-8fc8-4412-a6ea-b5b3e5b79f01] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  9 09:56:47 compute-1 nova_compute[162974]: 2025-10-09 09:56:47.543 2 DEBUG nova.compute.manager [None req-433ad811-a058-4508-a429-c5f40f506b5b - - - - - -] [instance: e9c8bd36-8fc8-4412-a6ea-b5b3e5b79f01] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  9 09:56:47 compute-1 nova_compute[162974]: 2025-10-09 09:56:47.547 2 DEBUG nova.virt.libvirt.driver [None req-2660e3a3-377b-48ae-9b5c-74b1b3359a71 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: e9c8bd36-8fc8-4412-a6ea-b5b3e5b79f01] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  9 09:56:47 compute-1 nova_compute[162974]: 2025-10-09 09:56:47.548 2 DEBUG nova.virt.libvirt.driver [None req-2660e3a3-377b-48ae-9b5c-74b1b3359a71 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: e9c8bd36-8fc8-4412-a6ea-b5b3e5b79f01] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  9 09:56:47 compute-1 nova_compute[162974]: 2025-10-09 09:56:47.548 2 DEBUG nova.virt.libvirt.driver [None req-2660e3a3-377b-48ae-9b5c-74b1b3359a71 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: e9c8bd36-8fc8-4412-a6ea-b5b3e5b79f01] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  9 09:56:47 compute-1 nova_compute[162974]: 2025-10-09 09:56:47.549 2 DEBUG nova.virt.libvirt.driver [None req-2660e3a3-377b-48ae-9b5c-74b1b3359a71 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: e9c8bd36-8fc8-4412-a6ea-b5b3e5b79f01] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  9 09:56:47 compute-1 nova_compute[162974]: 2025-10-09 09:56:47.549 2 DEBUG nova.virt.libvirt.driver [None req-2660e3a3-377b-48ae-9b5c-74b1b3359a71 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: e9c8bd36-8fc8-4412-a6ea-b5b3e5b79f01] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  9 09:56:47 compute-1 nova_compute[162974]: 2025-10-09 09:56:47.549 2 DEBUG nova.virt.libvirt.driver [None req-2660e3a3-377b-48ae-9b5c-74b1b3359a71 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: e9c8bd36-8fc8-4412-a6ea-b5b3e5b79f01] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  9 09:56:47 compute-1 neutron-haproxy-ovnmeta-26c660ed-37e9-4f44-b603-3901342edf9b[166691]: [NOTICE]   (166695) : New worker (166697) forked
Oct  9 09:56:47 compute-1 neutron-haproxy-ovnmeta-26c660ed-37e9-4f44-b603-3901342edf9b[166691]: [NOTICE]   (166695) : Loading success.
Oct  9 09:56:47 compute-1 nova_compute[162974]: 2025-10-09 09:56:47.556 2 INFO nova.compute.manager [None req-433ad811-a058-4508-a429-c5f40f506b5b - - - - - -] [instance: e9c8bd36-8fc8-4412-a6ea-b5b3e5b79f01] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  9 09:56:47 compute-1 nova_compute[162974]: 2025-10-09 09:56:47.556 2 DEBUG nova.virt.driver [None req-433ad811-a058-4508-a429-c5f40f506b5b - - - - - -] Emitting event <LifecycleEvent: 1760003807.5201333, e9c8bd36-8fc8-4412-a6ea-b5b3e5b79f01 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  9 09:56:47 compute-1 nova_compute[162974]: 2025-10-09 09:56:47.557 2 INFO nova.compute.manager [None req-433ad811-a058-4508-a429-c5f40f506b5b - - - - - -] [instance: e9c8bd36-8fc8-4412-a6ea-b5b3e5b79f01] VM Paused (Lifecycle Event)#033[00m
Oct  9 09:56:47 compute-1 nova_compute[162974]: 2025-10-09 09:56:47.568 2 DEBUG nova.compute.manager [None req-433ad811-a058-4508-a429-c5f40f506b5b - - - - - -] [instance: e9c8bd36-8fc8-4412-a6ea-b5b3e5b79f01] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  9 09:56:47 compute-1 nova_compute[162974]: 2025-10-09 09:56:47.570 2 DEBUG nova.virt.driver [None req-433ad811-a058-4508-a429-c5f40f506b5b - - - - - -] Emitting event <LifecycleEvent: 1760003807.523251, e9c8bd36-8fc8-4412-a6ea-b5b3e5b79f01 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  9 09:56:47 compute-1 nova_compute[162974]: 2025-10-09 09:56:47.570 2 INFO nova.compute.manager [None req-433ad811-a058-4508-a429-c5f40f506b5b - - - - - -] [instance: e9c8bd36-8fc8-4412-a6ea-b5b3e5b79f01] VM Resumed (Lifecycle Event)#033[00m
Oct  9 09:56:47 compute-1 nova_compute[162974]: 2025-10-09 09:56:47.592 2 DEBUG nova.compute.manager [None req-433ad811-a058-4508-a429-c5f40f506b5b - - - - - -] [instance: e9c8bd36-8fc8-4412-a6ea-b5b3e5b79f01] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  9 09:56:47 compute-1 nova_compute[162974]: 2025-10-09 09:56:47.594 2 DEBUG nova.compute.manager [None req-433ad811-a058-4508-a429-c5f40f506b5b - - - - - -] [instance: e9c8bd36-8fc8-4412-a6ea-b5b3e5b79f01] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  9 09:56:47 compute-1 nova_compute[162974]: 2025-10-09 09:56:47.611 2 INFO nova.compute.manager [None req-433ad811-a058-4508-a429-c5f40f506b5b - - - - - -] [instance: e9c8bd36-8fc8-4412-a6ea-b5b3e5b79f01] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  9 09:56:47 compute-1 nova_compute[162974]: 2025-10-09 09:56:47.618 2 INFO nova.compute.manager [None req-2660e3a3-377b-48ae-9b5c-74b1b3359a71 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: e9c8bd36-8fc8-4412-a6ea-b5b3e5b79f01] Took 4.87 seconds to spawn the instance on the hypervisor.#033[00m
Oct  9 09:56:47 compute-1 nova_compute[162974]: 2025-10-09 09:56:47.619 2 DEBUG nova.compute.manager [None req-2660e3a3-377b-48ae-9b5c-74b1b3359a71 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: e9c8bd36-8fc8-4412-a6ea-b5b3e5b79f01] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  9 09:56:47 compute-1 nova_compute[162974]: 2025-10-09 09:56:47.657 2 INFO nova.compute.manager [None req-2660e3a3-377b-48ae-9b5c-74b1b3359a71 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: e9c8bd36-8fc8-4412-a6ea-b5b3e5b79f01] Took 5.48 seconds to build instance.#033[00m
Oct  9 09:56:47 compute-1 nova_compute[162974]: 2025-10-09 09:56:47.669 2 DEBUG oslo_concurrency.lockutils [None req-2660e3a3-377b-48ae-9b5c-74b1b3359a71 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Lock "e9c8bd36-8fc8-4412-a6ea-b5b3e5b79f01" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 5.532s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  9 09:56:47 compute-1 nova_compute[162974]: 2025-10-09 09:56:47.674 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 09:56:47 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:56:47 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:56:47 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:56:47.877 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:56:48 compute-1 nova_compute[162974]: 2025-10-09 09:56:48.123 2 DEBUG oslo_service.periodic_task [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  9 09:56:48 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:56:48 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:56:48 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:56:48.725 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:56:49 compute-1 nova_compute[162974]: 2025-10-09 09:56:49.091 2 DEBUG nova.compute.manager [req-d52c374a-27ef-4fe4-8d23-d187e80e0aff req-40bf7869-3ebb-4f25-a81e-62f38ae0fc8c b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: e9c8bd36-8fc8-4412-a6ea-b5b3e5b79f01] Received event network-vif-plugged-8d2d29b3-6509-4de5-a0a2-f7ef942f9a7f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  9 09:56:49 compute-1 nova_compute[162974]: 2025-10-09 09:56:49.092 2 DEBUG oslo_concurrency.lockutils [req-d52c374a-27ef-4fe4-8d23-d187e80e0aff req-40bf7869-3ebb-4f25-a81e-62f38ae0fc8c b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] Acquiring lock "e9c8bd36-8fc8-4412-a6ea-b5b3e5b79f01-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  9 09:56:49 compute-1 nova_compute[162974]: 2025-10-09 09:56:49.092 2 DEBUG oslo_concurrency.lockutils [req-d52c374a-27ef-4fe4-8d23-d187e80e0aff req-40bf7869-3ebb-4f25-a81e-62f38ae0fc8c b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] Lock "e9c8bd36-8fc8-4412-a6ea-b5b3e5b79f01-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  9 09:56:49 compute-1 nova_compute[162974]: 2025-10-09 09:56:49.092 2 DEBUG oslo_concurrency.lockutils [req-d52c374a-27ef-4fe4-8d23-d187e80e0aff req-40bf7869-3ebb-4f25-a81e-62f38ae0fc8c b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] Lock "e9c8bd36-8fc8-4412-a6ea-b5b3e5b79f01-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  9 09:56:49 compute-1 nova_compute[162974]: 2025-10-09 09:56:49.092 2 DEBUG nova.compute.manager [req-d52c374a-27ef-4fe4-8d23-d187e80e0aff req-40bf7869-3ebb-4f25-a81e-62f38ae0fc8c b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: e9c8bd36-8fc8-4412-a6ea-b5b3e5b79f01] No waiting events found dispatching network-vif-plugged-8d2d29b3-6509-4de5-a0a2-f7ef942f9a7f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  9 09:56:49 compute-1 nova_compute[162974]: 2025-10-09 09:56:49.093 2 WARNING nova.compute.manager [req-d52c374a-27ef-4fe4-8d23-d187e80e0aff req-40bf7869-3ebb-4f25-a81e-62f38ae0fc8c b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: e9c8bd36-8fc8-4412-a6ea-b5b3e5b79f01] Received unexpected event network-vif-plugged-8d2d29b3-6509-4de5-a0a2-f7ef942f9a7f for instance with vm_state active and task_state None.#033[00m
Oct  9 09:56:49 compute-1 nova_compute[162974]: 2025-10-09 09:56:49.113 2 DEBUG oslo_service.periodic_task [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  9 09:56:49 compute-1 nova_compute[162974]: 2025-10-09 09:56:49.130 2 DEBUG oslo_concurrency.lockutils [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  9 09:56:49 compute-1 nova_compute[162974]: 2025-10-09 09:56:49.130 2 DEBUG oslo_concurrency.lockutils [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  9 09:56:49 compute-1 nova_compute[162974]: 2025-10-09 09:56:49.131 2 DEBUG oslo_concurrency.lockutils [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  9 09:56:49 compute-1 nova_compute[162974]: 2025-10-09 09:56:49.131 2 DEBUG nova.compute.resource_tracker [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  9 09:56:49 compute-1 nova_compute[162974]: 2025-10-09 09:56:49.131 2 DEBUG oslo_concurrency.processutils [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  9 09:56:49 compute-1 nova_compute[162974]: 2025-10-09 09:56:49.479 2 DEBUG oslo_concurrency.processutils [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.347s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  9 09:56:49 compute-1 nova_compute[162974]: 2025-10-09 09:56:49.527 2 DEBUG nova.virt.libvirt.driver [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] skipping disk for instance-00000003 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  9 09:56:49 compute-1 nova_compute[162974]: 2025-10-09 09:56:49.527 2 DEBUG nova.virt.libvirt.driver [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] skipping disk for instance-00000003 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  9 09:56:49 compute-1 nova_compute[162974]: 2025-10-09 09:56:49.776 2 WARNING nova.virt.libvirt.driver [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  9 09:56:49 compute-1 nova_compute[162974]: 2025-10-09 09:56:49.777 2 DEBUG nova.compute.resource_tracker [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4955MB free_disk=59.967525482177734GB free_vcpus=3 pci_devices=[{"dev_id": "pci_0000_00_03_3", "address": "0000:00:03.3", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_0", "address": "0000:00:1f.0", "product_id": "2918", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2918", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_4", "address": "0000:00:03.4", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_7", "address": "0000:00:03.7", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_3", "address": "0000:00:02.3", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_7", "address": "0000:00:02.7", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_05_00_0", "address": "0000:05:00.0", "product_id": "1045", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1045", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_2", "address": "0000:00:03.2", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_6", "address": "0000:00:02.6", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_3", "address": "0000:00:1f.3", "product_id": "2930", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2930", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_2", "address": "0000:00:1f.2", "product_id": "2922", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2922", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_03_00_0", "address": "0000:03:00.0", "product_id": "1041", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1041", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_2", "address": "0000:00:02.2", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_5", "address": "0000:00:02.5", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_04_00_0", "address": "0000:04:00.0", "product_id": "1042", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1042", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_5", "address": "0000:00:03.5", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_07_00_0", "address": "0000:07:00.0", "product_id": "1041", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1041", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_1", "address": "0000:00:03.1", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "29c0", "vendor_id": "8086", "numa_node": null, "label": "label_8086_29c0", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_06_00_0", "address": "0000:06:00.0", "product_id": "1044", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1044", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_02_01_0", "address": "0000:02:01.0", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_6", "address": "0000:00:03.6", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_1", "address": "0000:00:02.1", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_4", "address": "0000:00:02.4", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_01_00_0", "address": "0000:01:00.0", "product_id": "000e", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000e", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  9 09:56:49 compute-1 nova_compute[162974]: 2025-10-09 09:56:49.778 2 DEBUG oslo_concurrency.lockutils [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  9 09:56:49 compute-1 nova_compute[162974]: 2025-10-09 09:56:49.778 2 DEBUG oslo_concurrency.lockutils [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  9 09:56:49 compute-1 nova_compute[162974]: 2025-10-09 09:56:49.828 2 DEBUG nova.compute.resource_tracker [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Instance e9c8bd36-8fc8-4412-a6ea-b5b3e5b79f01 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct  9 09:56:49 compute-1 nova_compute[162974]: 2025-10-09 09:56:49.829 2 DEBUG nova.compute.resource_tracker [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Total usable vcpus: 4, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  9 09:56:49 compute-1 nova_compute[162974]: 2025-10-09 09:56:49.829 2 DEBUG nova.compute.resource_tracker [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7680MB used_ram=640MB phys_disk=59GB used_disk=1GB total_vcpus=4 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  9 09:56:49 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:56:49 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:56:49 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:56:49.880 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:56:49 compute-1 nova_compute[162974]: 2025-10-09 09:56:49.885 2 DEBUG oslo_concurrency.processutils [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  9 09:56:50 compute-1 ovn_controller[62080]: 2025-10-09T09:56:50Z|00041|binding|INFO|Releasing lport 57354100-1abc-4399-a76b-c42eaec1ad73 from this chassis (sb_readonly=0)
Oct  9 09:56:50 compute-1 NetworkManager[982]: <info>  [1760003810.2231] manager: (patch-provnet-ceb5df48-9471-46cc-b494-923d3260d7ae-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/37)
Oct  9 09:56:50 compute-1 NetworkManager[982]: <info>  [1760003810.2238] manager: (patch-br-int-to-provnet-ceb5df48-9471-46cc-b494-923d3260d7ae): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/38)
Oct  9 09:56:50 compute-1 nova_compute[162974]: 2025-10-09 09:56:50.223 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 09:56:50 compute-1 ovn_controller[62080]: 2025-10-09T09:56:50Z|00042|binding|INFO|Releasing lport 57354100-1abc-4399-a76b-c42eaec1ad73 from this chassis (sb_readonly=0)
Oct  9 09:56:50 compute-1 nova_compute[162974]: 2025-10-09 09:56:50.262 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 09:56:50 compute-1 nova_compute[162974]: 2025-10-09 09:56:50.265 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 09:56:50 compute-1 nova_compute[162974]: 2025-10-09 09:56:50.266 2 DEBUG oslo_concurrency.processutils [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.380s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  9 09:56:50 compute-1 nova_compute[162974]: 2025-10-09 09:56:50.270 2 DEBUG nova.compute.provider_tree [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Inventory has not changed in ProviderTree for provider: 79aa81b0-5a5d-4643-a355-ec5461cb321a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  9 09:56:50 compute-1 nova_compute[162974]: 2025-10-09 09:56:50.279 2 DEBUG nova.scheduler.client.report [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Inventory has not changed for provider 79aa81b0-5a5d-4643-a355-ec5461cb321a based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 4, 'reserved': 0, 'min_unit': 1, 'max_unit': 4, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  9 09:56:50 compute-1 nova_compute[162974]: 2025-10-09 09:56:50.292 2 DEBUG nova.compute.resource_tracker [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  9 09:56:50 compute-1 nova_compute[162974]: 2025-10-09 09:56:50.293 2 DEBUG oslo_concurrency.lockutils [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.514s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  9 09:56:50 compute-1 nova_compute[162974]: 2025-10-09 09:56:50.437 2 DEBUG nova.compute.manager [req-d9329951-e27a-4689-851d-964fa112aed6 req-b3648ea4-0ab6-4cdb-8dd4-5ec1879a0770 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: e9c8bd36-8fc8-4412-a6ea-b5b3e5b79f01] Received event network-changed-8d2d29b3-6509-4de5-a0a2-f7ef942f9a7f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  9 09:56:50 compute-1 nova_compute[162974]: 2025-10-09 09:56:50.437 2 DEBUG nova.compute.manager [req-d9329951-e27a-4689-851d-964fa112aed6 req-b3648ea4-0ab6-4cdb-8dd4-5ec1879a0770 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: e9c8bd36-8fc8-4412-a6ea-b5b3e5b79f01] Refreshing instance network info cache due to event network-changed-8d2d29b3-6509-4de5-a0a2-f7ef942f9a7f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  9 09:56:50 compute-1 nova_compute[162974]: 2025-10-09 09:56:50.438 2 DEBUG oslo_concurrency.lockutils [req-d9329951-e27a-4689-851d-964fa112aed6 req-b3648ea4-0ab6-4cdb-8dd4-5ec1879a0770 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] Acquiring lock "refresh_cache-e9c8bd36-8fc8-4412-a6ea-b5b3e5b79f01" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  9 09:56:50 compute-1 nova_compute[162974]: 2025-10-09 09:56:50.438 2 DEBUG oslo_concurrency.lockutils [req-d9329951-e27a-4689-851d-964fa112aed6 req-b3648ea4-0ab6-4cdb-8dd4-5ec1879a0770 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] Acquired lock "refresh_cache-e9c8bd36-8fc8-4412-a6ea-b5b3e5b79f01" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  9 09:56:50 compute-1 nova_compute[162974]: 2025-10-09 09:56:50.438 2 DEBUG nova.network.neutron [req-d9329951-e27a-4689-851d-964fa112aed6 req-b3648ea4-0ab6-4cdb-8dd4-5ec1879a0770 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: e9c8bd36-8fc8-4412-a6ea-b5b3e5b79f01] Refreshing network info cache for port 8d2d29b3-6509-4de5-a0a2-f7ef942f9a7f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  9 09:56:50 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:56:50 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:56:50 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:56:50 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:56:50.727 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:56:51 compute-1 nova_compute[162974]: 2025-10-09 09:56:51.115 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 09:56:51 compute-1 nova_compute[162974]: 2025-10-09 09:56:51.293 2 DEBUG oslo_service.periodic_task [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  9 09:56:51 compute-1 nova_compute[162974]: 2025-10-09 09:56:51.294 2 DEBUG oslo_service.periodic_task [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  9 09:56:51 compute-1 nova_compute[162974]: 2025-10-09 09:56:51.294 2 DEBUG oslo_service.periodic_task [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  9 09:56:51 compute-1 nova_compute[162974]: 2025-10-09 09:56:51.294 2 DEBUG oslo_service.periodic_task [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  9 09:56:51 compute-1 nova_compute[162974]: 2025-10-09 09:56:51.294 2 DEBUG oslo_service.periodic_task [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  9 09:56:51 compute-1 nova_compute[162974]: 2025-10-09 09:56:51.295 2 DEBUG nova.compute.manager [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  9 09:56:51 compute-1 nova_compute[162974]: 2025-10-09 09:56:51.431 2 DEBUG nova.network.neutron [req-d9329951-e27a-4689-851d-964fa112aed6 req-b3648ea4-0ab6-4cdb-8dd4-5ec1879a0770 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: e9c8bd36-8fc8-4412-a6ea-b5b3e5b79f01] Updated VIF entry in instance network info cache for port 8d2d29b3-6509-4de5-a0a2-f7ef942f9a7f. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  9 09:56:51 compute-1 nova_compute[162974]: 2025-10-09 09:56:51.432 2 DEBUG nova.network.neutron [req-d9329951-e27a-4689-851d-964fa112aed6 req-b3648ea4-0ab6-4cdb-8dd4-5ec1879a0770 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: e9c8bd36-8fc8-4412-a6ea-b5b3e5b79f01] Updating instance_info_cache with network_info: [{"id": "8d2d29b3-6509-4de5-a0a2-f7ef942f9a7f", "address": "fa:16:3e:4d:30:c8", "network": {"id": "26c660ed-37e9-4f44-b603-3901342edf9b", "bridge": "br-int", "label": "tempest-network-smoke--903239616", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.219", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c69d102fb5504f48809f5fc47f1cb831", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8d2d29b3-65", "ovs_interfaceid": "8d2d29b3-6509-4de5-a0a2-f7ef942f9a7f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  9 09:56:51 compute-1 nova_compute[162974]: 2025-10-09 09:56:51.445 2 DEBUG oslo_concurrency.lockutils [req-d9329951-e27a-4689-851d-964fa112aed6 req-b3648ea4-0ab6-4cdb-8dd4-5ec1879a0770 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] Releasing lock "refresh_cache-e9c8bd36-8fc8-4412-a6ea-b5b3e5b79f01" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  9 09:56:51 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:56:51 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:56:51 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:56:51.883 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:56:52 compute-1 nova_compute[162974]: 2025-10-09 09:56:52.114 2 DEBUG oslo_service.periodic_task [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  9 09:56:52 compute-1 nova_compute[162974]: 2025-10-09 09:56:52.114 2 DEBUG nova.compute.manager [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  9 09:56:52 compute-1 nova_compute[162974]: 2025-10-09 09:56:52.115 2 DEBUG nova.compute.manager [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct  9 09:56:52 compute-1 nova_compute[162974]: 2025-10-09 09:56:52.674 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 09:56:52 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:56:52 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000011s ======
Oct  9 09:56:52 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:56:52.730 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct  9 09:56:52 compute-1 nova_compute[162974]: 2025-10-09 09:56:52.922 2 DEBUG oslo_concurrency.lockutils [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Acquiring lock "refresh_cache-e9c8bd36-8fc8-4412-a6ea-b5b3e5b79f01" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  9 09:56:52 compute-1 nova_compute[162974]: 2025-10-09 09:56:52.923 2 DEBUG oslo_concurrency.lockutils [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Acquired lock "refresh_cache-e9c8bd36-8fc8-4412-a6ea-b5b3e5b79f01" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  9 09:56:52 compute-1 nova_compute[162974]: 2025-10-09 09:56:52.923 2 DEBUG nova.network.neutron [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] [instance: e9c8bd36-8fc8-4412-a6ea-b5b3e5b79f01] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Oct  9 09:56:52 compute-1 nova_compute[162974]: 2025-10-09 09:56:52.923 2 DEBUG nova.objects.instance [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Lazy-loading 'info_cache' on Instance uuid e9c8bd36-8fc8-4412-a6ea-b5b3e5b79f01 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  9 09:56:53 compute-1 podman[166776]: 2025-10-09 09:56:53.531597239 +0000 UTC m=+0.042017267 container health_status 5e1150c93314d5b38815fc8130e03b03cc91771cd442ce724d63cd33a7373f75 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  9 09:56:53 compute-1 podman[166777]: 2025-10-09 09:56:53.539242327 +0000 UTC m=+0.049662343 container health_status a9976f6a2312783f72012e1ec9b07f14297aa62297711f47c0d257996be3427a (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251001, tcib_managed=true, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct  9 09:56:53 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:56:53 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:56:53 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:56:53.886 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:56:54 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:56:54 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:56:54 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:56:54.733 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:56:54 compute-1 nova_compute[162974]: 2025-10-09 09:56:54.835 2 DEBUG nova.network.neutron [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] [instance: e9c8bd36-8fc8-4412-a6ea-b5b3e5b79f01] Updating instance_info_cache with network_info: [{"id": "8d2d29b3-6509-4de5-a0a2-f7ef942f9a7f", "address": "fa:16:3e:4d:30:c8", "network": {"id": "26c660ed-37e9-4f44-b603-3901342edf9b", "bridge": "br-int", "label": "tempest-network-smoke--903239616", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.219", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c69d102fb5504f48809f5fc47f1cb831", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8d2d29b3-65", "ovs_interfaceid": "8d2d29b3-6509-4de5-a0a2-f7ef942f9a7f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  9 09:56:54 compute-1 nova_compute[162974]: 2025-10-09 09:56:54.846 2 DEBUG oslo_concurrency.lockutils [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Releasing lock "refresh_cache-e9c8bd36-8fc8-4412-a6ea-b5b3e5b79f01" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  9 09:56:54 compute-1 nova_compute[162974]: 2025-10-09 09:56:54.847 2 DEBUG nova.compute.manager [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] [instance: e9c8bd36-8fc8-4412-a6ea-b5b3e5b79f01] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Oct  9 09:56:54 compute-1 nova_compute[162974]: 2025-10-09 09:56:54.847 2 DEBUG oslo_service.periodic_task [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  9 09:56:55 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:56:55 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:56:55 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:56:55 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:56:55.888 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:56:56 compute-1 nova_compute[162974]: 2025-10-09 09:56:56.117 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 09:56:56 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:56:56 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:56:56 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:56:56.736 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:56:57 compute-1 nova_compute[162974]: 2025-10-09 09:56:57.676 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 09:56:57 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:56:57 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:56:57 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:56:57.890 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:56:58 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:56:58 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:56:58 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:56:58.739 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:56:58 compute-1 ovn_controller[62080]: 2025-10-09T09:56:58Z|00006|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:4d:30:c8 10.100.0.7
Oct  9 09:56:58 compute-1 ovn_controller[62080]: 2025-10-09T09:56:58Z|00007|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:4d:30:c8 10.100.0.7
Oct  9 09:56:59 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:56:59 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:56:59 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:56:59.893 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:57:00 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:57:00 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:57:00 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:57:00 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:57:00.742 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:57:01 compute-1 nova_compute[162974]: 2025-10-09 09:57:01.119 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 09:57:01 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:57:01 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:57:01 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:57:01.896 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:57:02 compute-1 nova_compute[162974]: 2025-10-09 09:57:02.678 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 09:57:02 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:57:02 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000011s ======
Oct  9 09:57:02 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:57:02.744 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct  9 09:57:03 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:57:03 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:57:03 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:57:03.899 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:57:04 compute-1 nova_compute[162974]: 2025-10-09 09:57:04.738 2 INFO nova.compute.manager [None req-1a299b68-b3a0-42cf-8d1a-4e75df6c02d4 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: e9c8bd36-8fc8-4412-a6ea-b5b3e5b79f01] Get console output#033[00m
Oct  9 09:57:04 compute-1 nova_compute[162974]: 2025-10-09 09:57:04.741 2 INFO oslo.privsep.daemon [None req-1a299b68-b3a0-42cf-8d1a-4e75df6c02d4 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Running privsep helper: ['sudo', 'nova-rootwrap', '/etc/nova/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/nova/nova.conf', '--config-file', '/etc/nova/nova-compute.conf', '--config-dir', '/etc/nova/nova.conf.d', '--privsep_context', 'nova.privsep.sys_admin_pctxt', '--privsep_sock_path', '/tmp/tmpypcx92vx/privsep.sock']#033[00m
Oct  9 09:57:04 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:57:04 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:57:04 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:57:04.747 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:57:05 compute-1 nova_compute[162974]: 2025-10-09 09:57:05.278 2 INFO oslo.privsep.daemon [None req-1a299b68-b3a0-42cf-8d1a-4e75df6c02d4 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Spawned new privsep daemon via rootwrap#033[00m
Oct  9 09:57:05 compute-1 nova_compute[162974]: 2025-10-09 09:57:05.194 1023 INFO oslo.privsep.daemon [-] privsep daemon starting#033[00m
Oct  9 09:57:05 compute-1 nova_compute[162974]: 2025-10-09 09:57:05.198 1023 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0#033[00m
Oct  9 09:57:05 compute-1 nova_compute[162974]: 2025-10-09 09:57:05.199 1023 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_CHOWN|CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_FOWNER|CAP_NET_ADMIN|CAP_SYS_ADMIN/CAP_CHOWN|CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_FOWNER|CAP_NET_ADMIN|CAP_SYS_ADMIN/none#033[00m
Oct  9 09:57:05 compute-1 nova_compute[162974]: 2025-10-09 09:57:05.199 1023 INFO oslo.privsep.daemon [-] privsep daemon running as pid 1023#033[00m
Oct  9 09:57:05 compute-1 nova_compute[162974]: 2025-10-09 09:57:05.353 1023 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes#033[00m
Oct  9 09:57:05 compute-1 podman[166823]: 2025-10-09 09:57:05.54736587 +0000 UTC m=+0.058534425 container health_status 36bf385830b85e085dc3ff9729118ef141f0f1a0fdc2513f7947ab6ab795421e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller)
Oct  9 09:57:05 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:57:05 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:57:05 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:57:05 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:57:05.902 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:57:06 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:57:06.085 71059 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=5, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '86:53:6e', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '26:2f:47:35:f4:09'}, ipsec=False) old=SB_Global(nb_cfg=4) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  9 09:57:06 compute-1 nova_compute[162974]: 2025-10-09 09:57:06.086 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 09:57:06 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:57:06.088 71059 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 7 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct  9 09:57:06 compute-1 nova_compute[162974]: 2025-10-09 09:57:06.120 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 09:57:06 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:57:06 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:57:06 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:57:06.750 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:57:06 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  9 09:57:06 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:57:06 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:57:06 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  9 09:57:07 compute-1 nova_compute[162974]: 2025-10-09 09:57:07.681 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 09:57:07 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:57:07 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:57:07 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:57:07.905 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:57:08 compute-1 nova_compute[162974]: 2025-10-09 09:57:08.041 2 DEBUG oslo_concurrency.lockutils [None req-0935b534-d34f-44c7-b655-2aa58815c9b0 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Acquiring lock "interface-e9c8bd36-8fc8-4412-a6ea-b5b3e5b79f01-None" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  9 09:57:08 compute-1 nova_compute[162974]: 2025-10-09 09:57:08.041 2 DEBUG oslo_concurrency.lockutils [None req-0935b534-d34f-44c7-b655-2aa58815c9b0 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Lock "interface-e9c8bd36-8fc8-4412-a6ea-b5b3e5b79f01-None" acquired by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  9 09:57:08 compute-1 nova_compute[162974]: 2025-10-09 09:57:08.042 2 DEBUG nova.objects.instance [None req-0935b534-d34f-44c7-b655-2aa58815c9b0 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Lazy-loading 'flavor' on Instance uuid e9c8bd36-8fc8-4412-a6ea-b5b3e5b79f01 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  9 09:57:08 compute-1 nova_compute[162974]: 2025-10-09 09:57:08.300 2 DEBUG nova.objects.instance [None req-0935b534-d34f-44c7-b655-2aa58815c9b0 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Lazy-loading 'pci_requests' on Instance uuid e9c8bd36-8fc8-4412-a6ea-b5b3e5b79f01 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  9 09:57:08 compute-1 nova_compute[162974]: 2025-10-09 09:57:08.316 2 DEBUG nova.network.neutron [None req-0935b534-d34f-44c7-b655-2aa58815c9b0 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: e9c8bd36-8fc8-4412-a6ea-b5b3e5b79f01] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct  9 09:57:08 compute-1 nova_compute[162974]: 2025-10-09 09:57:08.441 2 DEBUG nova.policy [None req-0935b534-d34f-44c7-b655-2aa58815c9b0 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '2351e05157514d1995a1ea4151d12fee', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'c69d102fb5504f48809f5fc47f1cb831', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct  9 09:57:08 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:57:08 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:57:08 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:57:08.753 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:57:08 compute-1 nova_compute[162974]: 2025-10-09 09:57:08.776 2 DEBUG nova.network.neutron [None req-0935b534-d34f-44c7-b655-2aa58815c9b0 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: e9c8bd36-8fc8-4412-a6ea-b5b3e5b79f01] Successfully created port: 73007432-5bb0-435a-a871-05f59846a277 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct  9 09:57:09 compute-1 nova_compute[162974]: 2025-10-09 09:57:09.278 2 DEBUG nova.network.neutron [None req-0935b534-d34f-44c7-b655-2aa58815c9b0 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: e9c8bd36-8fc8-4412-a6ea-b5b3e5b79f01] Successfully updated port: 73007432-5bb0-435a-a871-05f59846a277 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct  9 09:57:09 compute-1 nova_compute[162974]: 2025-10-09 09:57:09.290 2 DEBUG oslo_concurrency.lockutils [None req-0935b534-d34f-44c7-b655-2aa58815c9b0 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Acquiring lock "refresh_cache-e9c8bd36-8fc8-4412-a6ea-b5b3e5b79f01" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  9 09:57:09 compute-1 nova_compute[162974]: 2025-10-09 09:57:09.290 2 DEBUG oslo_concurrency.lockutils [None req-0935b534-d34f-44c7-b655-2aa58815c9b0 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Acquired lock "refresh_cache-e9c8bd36-8fc8-4412-a6ea-b5b3e5b79f01" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  9 09:57:09 compute-1 nova_compute[162974]: 2025-10-09 09:57:09.290 2 DEBUG nova.network.neutron [None req-0935b534-d34f-44c7-b655-2aa58815c9b0 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: e9c8bd36-8fc8-4412-a6ea-b5b3e5b79f01] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  9 09:57:09 compute-1 nova_compute[162974]: 2025-10-09 09:57:09.341 2 DEBUG nova.compute.manager [req-d4d43015-2fc5-4010-8701-7990c3a6ff69 req-2cfb32a8-0ba0-48f3-bafa-f0cfbeb027b9 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: e9c8bd36-8fc8-4412-a6ea-b5b3e5b79f01] Received event network-changed-73007432-5bb0-435a-a871-05f59846a277 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  9 09:57:09 compute-1 nova_compute[162974]: 2025-10-09 09:57:09.341 2 DEBUG nova.compute.manager [req-d4d43015-2fc5-4010-8701-7990c3a6ff69 req-2cfb32a8-0ba0-48f3-bafa-f0cfbeb027b9 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: e9c8bd36-8fc8-4412-a6ea-b5b3e5b79f01] Refreshing instance network info cache due to event network-changed-73007432-5bb0-435a-a871-05f59846a277. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  9 09:57:09 compute-1 nova_compute[162974]: 2025-10-09 09:57:09.341 2 DEBUG oslo_concurrency.lockutils [req-d4d43015-2fc5-4010-8701-7990c3a6ff69 req-2cfb32a8-0ba0-48f3-bafa-f0cfbeb027b9 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] Acquiring lock "refresh_cache-e9c8bd36-8fc8-4412-a6ea-b5b3e5b79f01" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  9 09:57:09 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:57:09 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:57:09 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:57:09.907 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:57:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:57:10.035 71059 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  9 09:57:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:57:10.036 71059 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  9 09:57:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:57:10.036 71059 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  9 09:57:10 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:57:10 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:57:10 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:57:10 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:57:10.756 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:57:11 compute-1 nova_compute[162974]: 2025-10-09 09:57:11.122 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 09:57:11 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:57:11 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:57:11 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:57:11 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:57:11 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:57:11.909 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:57:12 compute-1 nova_compute[162974]: 2025-10-09 09:57:12.683 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 09:57:12 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:57:12 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 09:57:12 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:57:12.758 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 09:57:13 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:57:13.090 71059 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=1479fb1d-afaa-427a-bdce-40294d3573d2, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '5'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  9 09:57:13 compute-1 nova_compute[162974]: 2025-10-09 09:57:13.112 2 DEBUG nova.network.neutron [None req-0935b534-d34f-44c7-b655-2aa58815c9b0 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: e9c8bd36-8fc8-4412-a6ea-b5b3e5b79f01] Updating instance_info_cache with network_info: [{"id": "8d2d29b3-6509-4de5-a0a2-f7ef942f9a7f", "address": "fa:16:3e:4d:30:c8", "network": {"id": "26c660ed-37e9-4f44-b603-3901342edf9b", "bridge": "br-int", "label": "tempest-network-smoke--903239616", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.219", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c69d102fb5504f48809f5fc47f1cb831", "mtu": null, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8d2d29b3-65", "ovs_interfaceid": "8d2d29b3-6509-4de5-a0a2-f7ef942f9a7f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "73007432-5bb0-435a-a871-05f59846a277", "address": "fa:16:3e:78:96:4c", "network": {"id": "3c62a73d-d0d8-493b-b929-9ae564924767", "bridge": "br-int", "label": "tempest-network-smoke--1483077116", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.19", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": 
"c69d102fb5504f48809f5fc47f1cb831", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap73007432-5b", "ovs_interfaceid": "73007432-5bb0-435a-a871-05f59846a277", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  9 09:57:13 compute-1 nova_compute[162974]: 2025-10-09 09:57:13.125 2 DEBUG oslo_concurrency.lockutils [None req-0935b534-d34f-44c7-b655-2aa58815c9b0 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Releasing lock "refresh_cache-e9c8bd36-8fc8-4412-a6ea-b5b3e5b79f01" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  9 09:57:13 compute-1 nova_compute[162974]: 2025-10-09 09:57:13.125 2 DEBUG oslo_concurrency.lockutils [req-d4d43015-2fc5-4010-8701-7990c3a6ff69 req-2cfb32a8-0ba0-48f3-bafa-f0cfbeb027b9 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] Acquired lock "refresh_cache-e9c8bd36-8fc8-4412-a6ea-b5b3e5b79f01" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  9 09:57:13 compute-1 nova_compute[162974]: 2025-10-09 09:57:13.126 2 DEBUG nova.network.neutron [req-d4d43015-2fc5-4010-8701-7990c3a6ff69 req-2cfb32a8-0ba0-48f3-bafa-f0cfbeb027b9 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: e9c8bd36-8fc8-4412-a6ea-b5b3e5b79f01] Refreshing network info cache for port 73007432-5bb0-435a-a871-05f59846a277 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  9 09:57:13 compute-1 nova_compute[162974]: 2025-10-09 09:57:13.128 2 DEBUG nova.virt.libvirt.vif [None req-0935b534-d34f-44c7-b655-2aa58815c9b0 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-09T09:56:40Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-61543066',display_name='tempest-TestNetworkBasicOps-server-61543066',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-61543066',id=3,image_ref='9546778e-959c-466e-9bef-81ace5bd1cc5',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBOnE/dh71/I//FMlnppXDYKeeVJI2AqRfz3zTsFDUtMRPxSA9tfNCqu4Aqk04nGOjV/84C+cdkyXsPC0ZVfjXVfqYm026xBvCeeUUr4XUs/4snX/KNbtJXkvo3sUoZJ5aQ==',key_name='tempest-TestNetworkBasicOps-764715585',keypairs=<?>,launch_index=0,launched_at=2025-10-09T09:56:47Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='c69d102fb5504f48809f5fc47f1cb831',ramdisk_id='',reservation_id='r-r33pvc0t',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='9546778e-959c-466e-9bef-81ace5bd1cc5',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-74406332',owner_user_name='tempest-TestNetworkBasicOps-74406332-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-10-09T09:56:47Z,user_data=None,user_id='2351e05157514d1995a1ea4151d12fee',uuid=e9c8bd36-8fc8-4412-a6ea-b5b3e5b79f01,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "73007432-5bb0-435a-a871-05f59846a277", "address": "fa:16:3e:78:96:4c", "network": {"id": "3c62a73d-d0d8-493b-b929-9ae564924767", "bridge": "br-int", "label": "tempest-network-smoke--1483077116", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.19", "type": "fixed", 
"version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c69d102fb5504f48809f5fc47f1cb831", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap73007432-5b", "ovs_interfaceid": "73007432-5bb0-435a-a871-05f59846a277", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  9 09:57:13 compute-1 nova_compute[162974]: 2025-10-09 09:57:13.129 2 DEBUG nova.network.os_vif_util [None req-0935b534-d34f-44c7-b655-2aa58815c9b0 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Converting VIF {"id": "73007432-5bb0-435a-a871-05f59846a277", "address": "fa:16:3e:78:96:4c", "network": {"id": "3c62a73d-d0d8-493b-b929-9ae564924767", "bridge": "br-int", "label": "tempest-network-smoke--1483077116", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.19", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c69d102fb5504f48809f5fc47f1cb831", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap73007432-5b", "ovs_interfaceid": "73007432-5bb0-435a-a871-05f59846a277", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  9 09:57:13 compute-1 nova_compute[162974]: 2025-10-09 09:57:13.130 2 DEBUG nova.network.os_vif_util [None req-0935b534-d34f-44c7-b655-2aa58815c9b0 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:78:96:4c,bridge_name='br-int',has_traffic_filtering=True,id=73007432-5bb0-435a-a871-05f59846a277,network=Network(3c62a73d-d0d8-493b-b929-9ae564924767),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap73007432-5b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  9 09:57:13 compute-1 nova_compute[162974]: 2025-10-09 09:57:13.131 2 DEBUG os_vif [None req-0935b534-d34f-44c7-b655-2aa58815c9b0 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:78:96:4c,bridge_name='br-int',has_traffic_filtering=True,id=73007432-5bb0-435a-a871-05f59846a277,network=Network(3c62a73d-d0d8-493b-b929-9ae564924767),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap73007432-5b') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  9 09:57:13 compute-1 nova_compute[162974]: 2025-10-09 09:57:13.132 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 09:57:13 compute-1 nova_compute[162974]: 2025-10-09 09:57:13.133 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  9 09:57:13 compute-1 nova_compute[162974]: 2025-10-09 09:57:13.133 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  9 09:57:13 compute-1 nova_compute[162974]: 2025-10-09 09:57:13.139 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 09:57:13 compute-1 nova_compute[162974]: 2025-10-09 09:57:13.139 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap73007432-5b, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  9 09:57:13 compute-1 nova_compute[162974]: 2025-10-09 09:57:13.139 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap73007432-5b, col_values=(('external_ids', {'iface-id': '73007432-5bb0-435a-a871-05f59846a277', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:78:96:4c', 'vm-uuid': 'e9c8bd36-8fc8-4412-a6ea-b5b3e5b79f01'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  9 09:57:13 compute-1 nova_compute[162974]: 2025-10-09 09:57:13.141 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 09:57:13 compute-1 NetworkManager[982]: <info>  [1760003833.1419] manager: (tap73007432-5b): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/39)
Oct  9 09:57:13 compute-1 nova_compute[162974]: 2025-10-09 09:57:13.144 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  9 09:57:13 compute-1 nova_compute[162974]: 2025-10-09 09:57:13.147 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 09:57:13 compute-1 nova_compute[162974]: 2025-10-09 09:57:13.148 2 INFO os_vif [None req-0935b534-d34f-44c7-b655-2aa58815c9b0 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:78:96:4c,bridge_name='br-int',has_traffic_filtering=True,id=73007432-5bb0-435a-a871-05f59846a277,network=Network(3c62a73d-d0d8-493b-b929-9ae564924767),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap73007432-5b')#033[00m
Oct  9 09:57:13 compute-1 nova_compute[162974]: 2025-10-09 09:57:13.149 2 DEBUG nova.virt.libvirt.vif [None req-0935b534-d34f-44c7-b655-2aa58815c9b0 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-09T09:56:40Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-61543066',display_name='tempest-TestNetworkBasicOps-server-61543066',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-61543066',id=3,image_ref='9546778e-959c-466e-9bef-81ace5bd1cc5',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBOnE/dh71/I//FMlnppXDYKeeVJI2AqRfz3zTsFDUtMRPxSA9tfNCqu4Aqk04nGOjV/84C+cdkyXsPC0ZVfjXVfqYm026xBvCeeUUr4XUs/4snX/KNbtJXkvo3sUoZJ5aQ==',key_name='tempest-TestNetworkBasicOps-764715585',keypairs=<?>,launch_index=0,launched_at=2025-10-09T09:56:47Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='c69d102fb5504f48809f5fc47f1cb831',ramdisk_id='',reservation_id='r-r33pvc0t',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='9546778e-959c-466e-9bef-81ace5bd1cc5',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-74406332',owner_user_name='tempest-TestNetworkBasicOps-74406332-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-10-09T09:56:47Z,user_data=None,user_id='2351e05157514d1995a1ea4151d12fee',uuid=e9c8bd36-8fc8-4412-a6ea-b5b3e5b79f01,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "73007432-5bb0-435a-a871-05f59846a277", "address": "fa:16:3e:78:96:4c", "network": {"id": "3c62a73d-d0d8-493b-b929-9ae564924767", "bridge": "br-int", "label": "tempest-network-smoke--1483077116", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.19", "type": "fixed", 
"version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c69d102fb5504f48809f5fc47f1cb831", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap73007432-5b", "ovs_interfaceid": "73007432-5bb0-435a-a871-05f59846a277", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  9 09:57:13 compute-1 nova_compute[162974]: 2025-10-09 09:57:13.149 2 DEBUG nova.network.os_vif_util [None req-0935b534-d34f-44c7-b655-2aa58815c9b0 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Converting VIF {"id": "73007432-5bb0-435a-a871-05f59846a277", "address": "fa:16:3e:78:96:4c", "network": {"id": "3c62a73d-d0d8-493b-b929-9ae564924767", "bridge": "br-int", "label": "tempest-network-smoke--1483077116", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.19", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c69d102fb5504f48809f5fc47f1cb831", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap73007432-5b", "ovs_interfaceid": "73007432-5bb0-435a-a871-05f59846a277", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  9 09:57:13 compute-1 nova_compute[162974]: 2025-10-09 09:57:13.150 2 DEBUG nova.network.os_vif_util [None req-0935b534-d34f-44c7-b655-2aa58815c9b0 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:78:96:4c,bridge_name='br-int',has_traffic_filtering=True,id=73007432-5bb0-435a-a871-05f59846a277,network=Network(3c62a73d-d0d8-493b-b929-9ae564924767),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap73007432-5b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  9 09:57:13 compute-1 nova_compute[162974]: 2025-10-09 09:57:13.152 2 DEBUG nova.virt.libvirt.guest [None req-0935b534-d34f-44c7-b655-2aa58815c9b0 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] attach device xml: <interface type="ethernet">
Oct  9 09:57:13 compute-1 nova_compute[162974]:  <mac address="fa:16:3e:78:96:4c"/>
Oct  9 09:57:13 compute-1 nova_compute[162974]:  <model type="virtio"/>
Oct  9 09:57:13 compute-1 nova_compute[162974]:  <driver name="vhost" rx_queue_size="512"/>
Oct  9 09:57:13 compute-1 nova_compute[162974]:  <mtu size="1442"/>
Oct  9 09:57:13 compute-1 nova_compute[162974]:  <target dev="tap73007432-5b"/>
Oct  9 09:57:13 compute-1 nova_compute[162974]: </interface>
Oct  9 09:57:13 compute-1 nova_compute[162974]: attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339#033[00m
Oct  9 09:57:13 compute-1 kernel: tap73007432-5b: entered promiscuous mode
Oct  9 09:57:13 compute-1 NetworkManager[982]: <info>  [1760003833.1625] manager: (tap73007432-5b): new Tun device (/org/freedesktop/NetworkManager/Devices/40)
Oct  9 09:57:13 compute-1 ovn_controller[62080]: 2025-10-09T09:57:13Z|00043|binding|INFO|Claiming lport 73007432-5bb0-435a-a871-05f59846a277 for this chassis.
Oct  9 09:57:13 compute-1 ovn_controller[62080]: 2025-10-09T09:57:13Z|00044|binding|INFO|73007432-5bb0-435a-a871-05f59846a277: Claiming fa:16:3e:78:96:4c 10.100.0.19
Oct  9 09:57:13 compute-1 nova_compute[162974]: 2025-10-09 09:57:13.164 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 09:57:13 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:57:13.171 71059 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:78:96:4c 10.100.0.19'], port_security=['fa:16:3e:78:96:4c 10.100.0.19'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.19/28', 'neutron:device_id': 'e9c8bd36-8fc8-4412-a6ea-b5b3e5b79f01', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-3c62a73d-d0d8-493b-b929-9ae564924767', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c69d102fb5504f48809f5fc47f1cb831', 'neutron:revision_number': '2', 'neutron:security_group_ids': '938aac20-7e1a-43e3-b950-0829bdd160e1', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=1e91cefc-5914-40f8-95c0-e51a38aae1ba, chassis=[<ovs.db.idl.Row object at 0x7fcc797b4850>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcc797b4850>], logical_port=73007432-5bb0-435a-a871-05f59846a277) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  9 09:57:13 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:57:13.172 71059 INFO neutron.agent.ovn.metadata.agent [-] Port 73007432-5bb0-435a-a871-05f59846a277 in datapath 3c62a73d-d0d8-493b-b929-9ae564924767 bound to our chassis#033[00m
Oct  9 09:57:13 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:57:13.173 71059 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 3c62a73d-d0d8-493b-b929-9ae564924767#033[00m
Oct  9 09:57:13 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:57:13.182 165637 DEBUG oslo.privsep.daemon [-] privsep: reply[f8f6a882-5d33-4805-a625-55c59173d295]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  9 09:57:13 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:57:13.183 71059 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap3c62a73d-d1 in ovnmeta-3c62a73d-d0d8-493b-b929-9ae564924767 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct  9 09:57:13 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:57:13.184 165637 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap3c62a73d-d0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct  9 09:57:13 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:57:13.184 165637 DEBUG oslo.privsep.daemon [-] privsep: reply[11da73fc-63d4-4e80-8019-1399bfe11457]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  9 09:57:13 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:57:13.185 165637 DEBUG oslo.privsep.daemon [-] privsep: reply[d14dfd5d-3525-433e-bf42-c550c993410c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  9 09:57:13 compute-1 systemd-udevd[166987]: Network interface NamePolicy= disabled on kernel command line.
Oct  9 09:57:13 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:57:13.201 71273 DEBUG oslo.privsep.daemon [-] privsep: reply[c106a9f5-b205-4dd4-a279-cd08d5bbfbe4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  9 09:57:13 compute-1 nova_compute[162974]: 2025-10-09 09:57:13.211 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 09:57:13 compute-1 ovn_controller[62080]: 2025-10-09T09:57:13Z|00045|binding|INFO|Setting lport 73007432-5bb0-435a-a871-05f59846a277 ovn-installed in OVS
Oct  9 09:57:13 compute-1 ovn_controller[62080]: 2025-10-09T09:57:13Z|00046|binding|INFO|Setting lport 73007432-5bb0-435a-a871-05f59846a277 up in Southbound
Oct  9 09:57:13 compute-1 nova_compute[162974]: 2025-10-09 09:57:13.212 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 09:57:13 compute-1 NetworkManager[982]: <info>  [1760003833.2181] device (tap73007432-5b): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  9 09:57:13 compute-1 NetworkManager[982]: <info>  [1760003833.2190] device (tap73007432-5b): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  9 09:57:13 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:57:13.228 165637 DEBUG oslo.privsep.daemon [-] privsep: reply[4156f9f4-a9cb-4ca8-bebc-1e9b93c764af]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  9 09:57:13 compute-1 nova_compute[162974]: 2025-10-09 09:57:13.247 2 DEBUG nova.virt.libvirt.driver [None req-0935b534-d34f-44c7-b655-2aa58815c9b0 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  9 09:57:13 compute-1 nova_compute[162974]: 2025-10-09 09:57:13.247 2 DEBUG nova.virt.libvirt.driver [None req-0935b534-d34f-44c7-b655-2aa58815c9b0 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  9 09:57:13 compute-1 nova_compute[162974]: 2025-10-09 09:57:13.247 2 DEBUG nova.virt.libvirt.driver [None req-0935b534-d34f-44c7-b655-2aa58815c9b0 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] No VIF found with MAC fa:16:3e:4d:30:c8, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  9 09:57:13 compute-1 nova_compute[162974]: 2025-10-09 09:57:13.248 2 DEBUG nova.virt.libvirt.driver [None req-0935b534-d34f-44c7-b655-2aa58815c9b0 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] No VIF found with MAC fa:16:3e:78:96:4c, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  9 09:57:13 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:57:13.259 165694 DEBUG oslo.privsep.daemon [-] privsep: reply[74e8684c-2a0a-4b9d-9c24-5659db6b9f0a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  9 09:57:13 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:57:13.262 165637 DEBUG oslo.privsep.daemon [-] privsep: reply[c0305df5-ff9f-4365-9774-312b7e19f498]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  9 09:57:13 compute-1 NetworkManager[982]: <info>  [1760003833.2632] manager: (tap3c62a73d-d0): new Veth device (/org/freedesktop/NetworkManager/Devices/41)
Oct  9 09:57:13 compute-1 nova_compute[162974]: 2025-10-09 09:57:13.266 2 DEBUG nova.virt.libvirt.guest [None req-0935b534-d34f-44c7-b655-2aa58815c9b0 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  9 09:57:13 compute-1 nova_compute[162974]:  <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  9 09:57:13 compute-1 nova_compute[162974]:  <nova:name>tempest-TestNetworkBasicOps-server-61543066</nova:name>
Oct  9 09:57:13 compute-1 nova_compute[162974]:  <nova:creationTime>2025-10-09 09:57:13</nova:creationTime>
Oct  9 09:57:13 compute-1 nova_compute[162974]:  <nova:flavor name="m1.nano">
Oct  9 09:57:13 compute-1 nova_compute[162974]:    <nova:memory>128</nova:memory>
Oct  9 09:57:13 compute-1 nova_compute[162974]:    <nova:disk>1</nova:disk>
Oct  9 09:57:13 compute-1 nova_compute[162974]:    <nova:swap>0</nova:swap>
Oct  9 09:57:13 compute-1 nova_compute[162974]:    <nova:ephemeral>0</nova:ephemeral>
Oct  9 09:57:13 compute-1 nova_compute[162974]:    <nova:vcpus>1</nova:vcpus>
Oct  9 09:57:13 compute-1 nova_compute[162974]:  </nova:flavor>
Oct  9 09:57:13 compute-1 nova_compute[162974]:  <nova:owner>
Oct  9 09:57:13 compute-1 nova_compute[162974]:    <nova:user uuid="2351e05157514d1995a1ea4151d12fee">tempest-TestNetworkBasicOps-74406332-project-member</nova:user>
Oct  9 09:57:13 compute-1 nova_compute[162974]:    <nova:project uuid="c69d102fb5504f48809f5fc47f1cb831">tempest-TestNetworkBasicOps-74406332</nova:project>
Oct  9 09:57:13 compute-1 nova_compute[162974]:  </nova:owner>
Oct  9 09:57:13 compute-1 nova_compute[162974]:  <nova:root type="image" uuid="9546778e-959c-466e-9bef-81ace5bd1cc5"/>
Oct  9 09:57:13 compute-1 nova_compute[162974]:  <nova:ports>
Oct  9 09:57:13 compute-1 nova_compute[162974]:    <nova:port uuid="8d2d29b3-6509-4de5-a0a2-f7ef942f9a7f">
Oct  9 09:57:13 compute-1 nova_compute[162974]:      <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Oct  9 09:57:13 compute-1 nova_compute[162974]:    </nova:port>
Oct  9 09:57:13 compute-1 nova_compute[162974]:    <nova:port uuid="73007432-5bb0-435a-a871-05f59846a277">
Oct  9 09:57:13 compute-1 nova_compute[162974]:      <nova:ip type="fixed" address="10.100.0.19" ipVersion="4"/>
Oct  9 09:57:13 compute-1 nova_compute[162974]:    </nova:port>
Oct  9 09:57:13 compute-1 nova_compute[162974]:  </nova:ports>
Oct  9 09:57:13 compute-1 nova_compute[162974]: </nova:instance>
Oct  9 09:57:13 compute-1 nova_compute[162974]: set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359#033[00m
Oct  9 09:57:13 compute-1 nova_compute[162974]: 2025-10-09 09:57:13.283 2 DEBUG oslo_concurrency.lockutils [None req-0935b534-d34f-44c7-b655-2aa58815c9b0 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Lock "interface-e9c8bd36-8fc8-4412-a6ea-b5b3e5b79f01-None" "released" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: held 5.241s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  9 09:57:13 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:57:13.292 165694 DEBUG oslo.privsep.daemon [-] privsep: reply[4ed522c2-36a0-4360-a051-39f90f4a642b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  9 09:57:13 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:57:13.293 165694 DEBUG oslo.privsep.daemon [-] privsep: reply[12a18549-4203-4a01-acfa-8254ea70ed37]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  9 09:57:13 compute-1 NetworkManager[982]: <info>  [1760003833.3159] device (tap3c62a73d-d0): carrier: link connected
Oct  9 09:57:13 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:57:13.322 165694 DEBUG oslo.privsep.daemon [-] privsep: reply[aa1f87ce-f4f9-492a-bc6e-c5b2fb49a0bd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  9 09:57:13 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:57:13.336 165637 DEBUG oslo.privsep.daemon [-] privsep: reply[1aa4f86b-1ac6-4e66-a25c-1cb362f23b7c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap3c62a73d-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 4], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 4], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e1:39:75'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 25], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 151688, 'reachable_time': 36095, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 167004, 'error': None, 'target': 'ovnmeta-3c62a73d-d0d8-493b-b929-9ae564924767', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  9 09:57:13 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:57:13.351 165637 DEBUG oslo.privsep.daemon [-] privsep: reply[842d2586-fe4d-49ed-ab68-ef63010dafac]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fee1:3975'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 151688, 'tstamp': 151688}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 167005, 'error': None, 'target': 'ovnmeta-3c62a73d-d0d8-493b-b929-9ae564924767', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  9 09:57:13 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:57:13.365 165637 DEBUG oslo.privsep.daemon [-] privsep: reply[c5dbea62-1d0e-4ca1-9612-0905728299c8]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap3c62a73d-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 4], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 4], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e1:39:75'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 25], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 151688, 'reachable_time': 36095, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 167006, 'error': None, 'target': 'ovnmeta-3c62a73d-d0d8-493b-b929-9ae564924767', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  9 09:57:13 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:57:13.395 165637 DEBUG oslo.privsep.daemon [-] privsep: reply[e07d97df-e626-4580-811d-000fafa2ec04]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  9 09:57:13 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:57:13.453 165637 DEBUG oslo.privsep.daemon [-] privsep: reply[e385a408-28ca-45e8-a319-c705b4ec397e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  9 09:57:13 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:57:13.454 71059 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3c62a73d-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  9 09:57:13 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:57:13.454 71059 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  9 09:57:13 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:57:13.454 71059 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap3c62a73d-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  9 09:57:13 compute-1 nova_compute[162974]: 2025-10-09 09:57:13.456 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 09:57:13 compute-1 NetworkManager[982]: <info>  [1760003833.4568] manager: (tap3c62a73d-d0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/42)
Oct  9 09:57:13 compute-1 kernel: tap3c62a73d-d0: entered promiscuous mode
Oct  9 09:57:13 compute-1 nova_compute[162974]: 2025-10-09 09:57:13.459 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 09:57:13 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:57:13.460 71059 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap3c62a73d-d0, col_values=(('external_ids', {'iface-id': 'b0e7930e-d821-45eb-a309-d4e5e2c7e0f3'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  9 09:57:13 compute-1 nova_compute[162974]: 2025-10-09 09:57:13.460 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 09:57:13 compute-1 ovn_controller[62080]: 2025-10-09T09:57:13Z|00047|binding|INFO|Releasing lport b0e7930e-d821-45eb-a309-d4e5e2c7e0f3 from this chassis (sb_readonly=0)
Oct  9 09:57:13 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:57:13.475 71059 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/3c62a73d-d0d8-493b-b929-9ae564924767.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/3c62a73d-d0d8-493b-b929-9ae564924767.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct  9 09:57:13 compute-1 nova_compute[162974]: 2025-10-09 09:57:13.475 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 09:57:13 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:57:13.476 165637 DEBUG oslo.privsep.daemon [-] privsep: reply[2684d2fa-b981-44ba-ae61-3d77e570824a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  9 09:57:13 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:57:13.476 71059 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct  9 09:57:13 compute-1 ovn_metadata_agent[71054]: global
Oct  9 09:57:13 compute-1 ovn_metadata_agent[71054]:    log         /dev/log local0 debug
Oct  9 09:57:13 compute-1 ovn_metadata_agent[71054]:    log-tag     haproxy-metadata-proxy-3c62a73d-d0d8-493b-b929-9ae564924767
Oct  9 09:57:13 compute-1 ovn_metadata_agent[71054]:    user        root
Oct  9 09:57:13 compute-1 ovn_metadata_agent[71054]:    group       root
Oct  9 09:57:13 compute-1 ovn_metadata_agent[71054]:    maxconn     1024
Oct  9 09:57:13 compute-1 ovn_metadata_agent[71054]:    pidfile     /var/lib/neutron/external/pids/3c62a73d-d0d8-493b-b929-9ae564924767.pid.haproxy
Oct  9 09:57:13 compute-1 ovn_metadata_agent[71054]:    daemon
Oct  9 09:57:13 compute-1 ovn_metadata_agent[71054]: 
Oct  9 09:57:13 compute-1 ovn_metadata_agent[71054]: defaults
Oct  9 09:57:13 compute-1 ovn_metadata_agent[71054]:    log global
Oct  9 09:57:13 compute-1 ovn_metadata_agent[71054]:    mode http
Oct  9 09:57:13 compute-1 ovn_metadata_agent[71054]:    option httplog
Oct  9 09:57:13 compute-1 ovn_metadata_agent[71054]:    option dontlognull
Oct  9 09:57:13 compute-1 ovn_metadata_agent[71054]:    option http-server-close
Oct  9 09:57:13 compute-1 ovn_metadata_agent[71054]:    option forwardfor
Oct  9 09:57:13 compute-1 ovn_metadata_agent[71054]:    retries                 3
Oct  9 09:57:13 compute-1 ovn_metadata_agent[71054]:    timeout http-request    30s
Oct  9 09:57:13 compute-1 ovn_metadata_agent[71054]:    timeout connect         30s
Oct  9 09:57:13 compute-1 ovn_metadata_agent[71054]:    timeout client          32s
Oct  9 09:57:13 compute-1 ovn_metadata_agent[71054]:    timeout server          32s
Oct  9 09:57:13 compute-1 ovn_metadata_agent[71054]:    timeout http-keep-alive 30s
Oct  9 09:57:13 compute-1 ovn_metadata_agent[71054]: 
Oct  9 09:57:13 compute-1 ovn_metadata_agent[71054]: 
Oct  9 09:57:13 compute-1 ovn_metadata_agent[71054]: listen listener
Oct  9 09:57:13 compute-1 ovn_metadata_agent[71054]:    bind 169.254.169.254:80
Oct  9 09:57:13 compute-1 ovn_metadata_agent[71054]:    server metadata /var/lib/neutron/metadata_proxy
Oct  9 09:57:13 compute-1 ovn_metadata_agent[71054]:    http-request add-header X-OVN-Network-ID 3c62a73d-d0d8-493b-b929-9ae564924767
Oct  9 09:57:13 compute-1 ovn_metadata_agent[71054]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct  9 09:57:13 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:57:13.477 71059 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-3c62a73d-d0d8-493b-b929-9ae564924767', 'env', 'PROCESS_TAG=haproxy-3c62a73d-d0d8-493b-b929-9ae564924767', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/3c62a73d-d0d8-493b-b929-9ae564924767.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct  9 09:57:13 compute-1 podman[167036]: 2025-10-09 09:57:13.775274883 +0000 UTC m=+0.042232844 container create bd2151fd7758620b1adee733735ad9c14742b4f8fb2f9dcfbcd20c6617cab7f8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-3c62a73d-d0d8-493b-b929-9ae564924767, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, org.label-schema.license=GPLv2)
Oct  9 09:57:13 compute-1 systemd[1]: Started libpod-conmon-bd2151fd7758620b1adee733735ad9c14742b4f8fb2f9dcfbcd20c6617cab7f8.scope.
Oct  9 09:57:13 compute-1 systemd[1]: Started libcrun container.
Oct  9 09:57:13 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9d101d488314606cd0605ba833244519bc1b143a95501ac83d26953afcdb780b/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct  9 09:57:13 compute-1 podman[167036]: 2025-10-09 09:57:13.83640896 +0000 UTC m=+0.103366920 container init bd2151fd7758620b1adee733735ad9c14742b4f8fb2f9dcfbcd20c6617cab7f8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-3c62a73d-d0d8-493b-b929-9ae564924767, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  9 09:57:13 compute-1 podman[167036]: 2025-10-09 09:57:13.842270624 +0000 UTC m=+0.109228584 container start bd2151fd7758620b1adee733735ad9c14742b4f8fb2f9dcfbcd20c6617cab7f8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-3c62a73d-d0d8-493b-b929-9ae564924767, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Oct  9 09:57:13 compute-1 podman[167036]: 2025-10-09 09:57:13.760293041 +0000 UTC m=+0.027251000 image pull 26280da617d52ac64ac1fa9a18a315d65ac237c1373028f8064008a821dbfd8d quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7
Oct  9 09:57:13 compute-1 neutron-haproxy-ovnmeta-3c62a73d-d0d8-493b-b929-9ae564924767[167047]: [NOTICE]   (167052) : New worker (167054) forked
Oct  9 09:57:13 compute-1 neutron-haproxy-ovnmeta-3c62a73d-d0d8-493b-b929-9ae564924767[167047]: [NOTICE]   (167052) : Loading success.
Oct  9 09:57:13 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:57:13 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:57:13 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:57:13.913 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:57:14 compute-1 nova_compute[162974]: 2025-10-09 09:57:14.257 2 DEBUG nova.compute.manager [req-fd68a8dc-770d-4f15-8f75-c2b67ea505fc req-6777918a-22a6-4acf-ac58-ba07499b9d9d b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: e9c8bd36-8fc8-4412-a6ea-b5b3e5b79f01] Received event network-vif-plugged-73007432-5bb0-435a-a871-05f59846a277 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  9 09:57:14 compute-1 nova_compute[162974]: 2025-10-09 09:57:14.257 2 DEBUG oslo_concurrency.lockutils [req-fd68a8dc-770d-4f15-8f75-c2b67ea505fc req-6777918a-22a6-4acf-ac58-ba07499b9d9d b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] Acquiring lock "e9c8bd36-8fc8-4412-a6ea-b5b3e5b79f01-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  9 09:57:14 compute-1 nova_compute[162974]: 2025-10-09 09:57:14.258 2 DEBUG oslo_concurrency.lockutils [req-fd68a8dc-770d-4f15-8f75-c2b67ea505fc req-6777918a-22a6-4acf-ac58-ba07499b9d9d b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] Lock "e9c8bd36-8fc8-4412-a6ea-b5b3e5b79f01-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  9 09:57:14 compute-1 nova_compute[162974]: 2025-10-09 09:57:14.258 2 DEBUG oslo_concurrency.lockutils [req-fd68a8dc-770d-4f15-8f75-c2b67ea505fc req-6777918a-22a6-4acf-ac58-ba07499b9d9d b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] Lock "e9c8bd36-8fc8-4412-a6ea-b5b3e5b79f01-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  9 09:57:14 compute-1 nova_compute[162974]: 2025-10-09 09:57:14.258 2 DEBUG nova.compute.manager [req-fd68a8dc-770d-4f15-8f75-c2b67ea505fc req-6777918a-22a6-4acf-ac58-ba07499b9d9d b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: e9c8bd36-8fc8-4412-a6ea-b5b3e5b79f01] No waiting events found dispatching network-vif-plugged-73007432-5bb0-435a-a871-05f59846a277 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  9 09:57:14 compute-1 nova_compute[162974]: 2025-10-09 09:57:14.258 2 WARNING nova.compute.manager [req-fd68a8dc-770d-4f15-8f75-c2b67ea505fc req-6777918a-22a6-4acf-ac58-ba07499b9d9d b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: e9c8bd36-8fc8-4412-a6ea-b5b3e5b79f01] Received unexpected event network-vif-plugged-73007432-5bb0-435a-a871-05f59846a277 for instance with vm_state active and task_state None.#033[00m
Oct  9 09:57:14 compute-1 ovn_controller[62080]: 2025-10-09T09:57:14Z|00008|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:78:96:4c 10.100.0.19
Oct  9 09:57:14 compute-1 ovn_controller[62080]: 2025-10-09T09:57:14Z|00009|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:78:96:4c 10.100.0.19
Oct  9 09:57:14 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:57:14 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:57:14 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:57:14.762 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:57:15 compute-1 nova_compute[162974]: 2025-10-09 09:57:15.156 2 DEBUG oslo_concurrency.lockutils [None req-c01782dc-90ba-4236-8829-01c3872c87cd 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Acquiring lock "interface-e9c8bd36-8fc8-4412-a6ea-b5b3e5b79f01-73007432-5bb0-435a-a871-05f59846a277" by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  9 09:57:15 compute-1 nova_compute[162974]: 2025-10-09 09:57:15.156 2 DEBUG oslo_concurrency.lockutils [None req-c01782dc-90ba-4236-8829-01c3872c87cd 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Lock "interface-e9c8bd36-8fc8-4412-a6ea-b5b3e5b79f01-73007432-5bb0-435a-a871-05f59846a277" acquired by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  9 09:57:15 compute-1 nova_compute[162974]: 2025-10-09 09:57:15.171 2 DEBUG nova.objects.instance [None req-c01782dc-90ba-4236-8829-01c3872c87cd 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Lazy-loading 'flavor' on Instance uuid e9c8bd36-8fc8-4412-a6ea-b5b3e5b79f01 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  9 09:57:15 compute-1 nova_compute[162974]: 2025-10-09 09:57:15.184 2 DEBUG nova.virt.libvirt.vif [None req-c01782dc-90ba-4236-8829-01c3872c87cd 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-09T09:56:40Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-61543066',display_name='tempest-TestNetworkBasicOps-server-61543066',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-61543066',id=3,image_ref='9546778e-959c-466e-9bef-81ace5bd1cc5',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBOnE/dh71/I//FMlnppXDYKeeVJI2AqRfz3zTsFDUtMRPxSA9tfNCqu4Aqk04nGOjV/84C+cdkyXsPC0ZVfjXVfqYm026xBvCeeUUr4XUs/4snX/KNbtJXkvo3sUoZJ5aQ==',key_name='tempest-TestNetworkBasicOps-764715585',keypairs=<?>,launch_index=0,launched_at=2025-10-09T09:56:47Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='c69d102fb5504f48809f5fc47f1cb831',ramdisk_id='',reservation_id='r-r33pvc0t',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='9546778e-959c-466e-9bef-81ace5bd1cc5',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-74406332',owner_user_name='tempest-TestNetworkBasicOps-74406332-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-10-09T09:56:47Z,user_data=None,user_id='2351e05157514d1995a1ea4151d12fee',uuid=e9c8bd36-8fc8-4412-a6ea-b5b3e5b79f01,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "73007432-5bb0-435a-a871-05f59846a277", "address": "fa:16:3e:78:96:4c", "network": {"id": "3c62a73d-d0d8-493b-b929-9ae564924767", "bridge": "br-int", "label": "tempest-network-smoke--1483077116", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.19", "type": "fixed", "version": 4, "meta": 
{}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c69d102fb5504f48809f5fc47f1cb831", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap73007432-5b", "ovs_interfaceid": "73007432-5bb0-435a-a871-05f59846a277", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  9 09:57:15 compute-1 nova_compute[162974]: 2025-10-09 09:57:15.185 2 DEBUG nova.network.os_vif_util [None req-c01782dc-90ba-4236-8829-01c3872c87cd 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Converting VIF {"id": "73007432-5bb0-435a-a871-05f59846a277", "address": "fa:16:3e:78:96:4c", "network": {"id": "3c62a73d-d0d8-493b-b929-9ae564924767", "bridge": "br-int", "label": "tempest-network-smoke--1483077116", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.19", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c69d102fb5504f48809f5fc47f1cb831", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap73007432-5b", "ovs_interfaceid": "73007432-5bb0-435a-a871-05f59846a277", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  9 09:57:15 compute-1 nova_compute[162974]: 2025-10-09 09:57:15.185 2 DEBUG nova.network.os_vif_util [None req-c01782dc-90ba-4236-8829-01c3872c87cd 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:78:96:4c,bridge_name='br-int',has_traffic_filtering=True,id=73007432-5bb0-435a-a871-05f59846a277,network=Network(3c62a73d-d0d8-493b-b929-9ae564924767),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap73007432-5b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  9 09:57:15 compute-1 nova_compute[162974]: 2025-10-09 09:57:15.188 2 DEBUG nova.virt.libvirt.guest [None req-c01782dc-90ba-4236-8829-01c3872c87cd 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:78:96:4c"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap73007432-5b"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257#033[00m
Oct  9 09:57:15 compute-1 nova_compute[162974]: 2025-10-09 09:57:15.190 2 DEBUG nova.virt.libvirt.guest [None req-c01782dc-90ba-4236-8829-01c3872c87cd 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:78:96:4c"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap73007432-5b"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257#033[00m
Oct  9 09:57:15 compute-1 nova_compute[162974]: 2025-10-09 09:57:15.192 2 DEBUG nova.virt.libvirt.driver [None req-c01782dc-90ba-4236-8829-01c3872c87cd 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Attempting to detach device tap73007432-5b from instance e9c8bd36-8fc8-4412-a6ea-b5b3e5b79f01 from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487#033[00m
Oct  9 09:57:15 compute-1 nova_compute[162974]: 2025-10-09 09:57:15.192 2 DEBUG nova.virt.libvirt.guest [None req-c01782dc-90ba-4236-8829-01c3872c87cd 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] detach device xml: <interface type="ethernet">
Oct  9 09:57:15 compute-1 nova_compute[162974]:  <mac address="fa:16:3e:78:96:4c"/>
Oct  9 09:57:15 compute-1 nova_compute[162974]:  <model type="virtio"/>
Oct  9 09:57:15 compute-1 nova_compute[162974]:  <driver name="vhost" rx_queue_size="512"/>
Oct  9 09:57:15 compute-1 nova_compute[162974]:  <mtu size="1442"/>
Oct  9 09:57:15 compute-1 nova_compute[162974]:  <target dev="tap73007432-5b"/>
Oct  9 09:57:15 compute-1 nova_compute[162974]: </interface>
Oct  9 09:57:15 compute-1 nova_compute[162974]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465#033[00m
Oct  9 09:57:15 compute-1 nova_compute[162974]: 2025-10-09 09:57:15.196 2 DEBUG nova.virt.libvirt.guest [None req-c01782dc-90ba-4236-8829-01c3872c87cd 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:78:96:4c"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap73007432-5b"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257#033[00m
Oct  9 09:57:15 compute-1 nova_compute[162974]: 2025-10-09 09:57:15.199 2 DEBUG nova.virt.libvirt.guest [None req-c01782dc-90ba-4236-8829-01c3872c87cd 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:78:96:4c"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap73007432-5b"/></interface>not found in domain: <domain type='kvm' id='2'>
Oct  9 09:57:15 compute-1 nova_compute[162974]:  <name>instance-00000003</name>
Oct  9 09:57:15 compute-1 nova_compute[162974]:  <uuid>e9c8bd36-8fc8-4412-a6ea-b5b3e5b79f01</uuid>
Oct  9 09:57:15 compute-1 nova_compute[162974]:  <metadata>
Oct  9 09:57:15 compute-1 nova_compute[162974]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  9 09:57:15 compute-1 nova_compute[162974]:  <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  9 09:57:15 compute-1 nova_compute[162974]:  <nova:name>tempest-TestNetworkBasicOps-server-61543066</nova:name>
Oct  9 09:57:15 compute-1 nova_compute[162974]:  <nova:creationTime>2025-10-09 09:57:13</nova:creationTime>
Oct  9 09:57:15 compute-1 nova_compute[162974]:  <nova:flavor name="m1.nano">
Oct  9 09:57:15 compute-1 nova_compute[162974]:    <nova:memory>128</nova:memory>
Oct  9 09:57:15 compute-1 nova_compute[162974]:    <nova:disk>1</nova:disk>
Oct  9 09:57:15 compute-1 nova_compute[162974]:    <nova:swap>0</nova:swap>
Oct  9 09:57:15 compute-1 nova_compute[162974]:    <nova:ephemeral>0</nova:ephemeral>
Oct  9 09:57:15 compute-1 nova_compute[162974]:    <nova:vcpus>1</nova:vcpus>
Oct  9 09:57:15 compute-1 nova_compute[162974]:  </nova:flavor>
Oct  9 09:57:15 compute-1 nova_compute[162974]:  <nova:owner>
Oct  9 09:57:15 compute-1 nova_compute[162974]:    <nova:user uuid="2351e05157514d1995a1ea4151d12fee">tempest-TestNetworkBasicOps-74406332-project-member</nova:user>
Oct  9 09:57:15 compute-1 nova_compute[162974]:    <nova:project uuid="c69d102fb5504f48809f5fc47f1cb831">tempest-TestNetworkBasicOps-74406332</nova:project>
Oct  9 09:57:15 compute-1 nova_compute[162974]:  </nova:owner>
Oct  9 09:57:15 compute-1 nova_compute[162974]:  <nova:root type="image" uuid="9546778e-959c-466e-9bef-81ace5bd1cc5"/>
Oct  9 09:57:15 compute-1 nova_compute[162974]:  <nova:ports>
Oct  9 09:57:15 compute-1 nova_compute[162974]:    <nova:port uuid="8d2d29b3-6509-4de5-a0a2-f7ef942f9a7f">
Oct  9 09:57:15 compute-1 nova_compute[162974]:      <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Oct  9 09:57:15 compute-1 nova_compute[162974]:    </nova:port>
Oct  9 09:57:15 compute-1 nova_compute[162974]:    <nova:port uuid="73007432-5bb0-435a-a871-05f59846a277">
Oct  9 09:57:15 compute-1 nova_compute[162974]:      <nova:ip type="fixed" address="10.100.0.19" ipVersion="4"/>
Oct  9 09:57:15 compute-1 nova_compute[162974]:    </nova:port>
Oct  9 09:57:15 compute-1 nova_compute[162974]:  </nova:ports>
Oct  9 09:57:15 compute-1 nova_compute[162974]: </nova:instance>
Oct  9 09:57:15 compute-1 nova_compute[162974]:  </metadata>
Oct  9 09:57:15 compute-1 nova_compute[162974]:  <memory unit='KiB'>131072</memory>
Oct  9 09:57:15 compute-1 nova_compute[162974]:  <currentMemory unit='KiB'>131072</currentMemory>
Oct  9 09:57:15 compute-1 nova_compute[162974]:  <vcpu placement='static'>1</vcpu>
Oct  9 09:57:15 compute-1 nova_compute[162974]:  <resource>
Oct  9 09:57:15 compute-1 nova_compute[162974]:    <partition>/machine</partition>
Oct  9 09:57:15 compute-1 nova_compute[162974]:  </resource>
Oct  9 09:57:15 compute-1 nova_compute[162974]:  <sysinfo type='smbios'>
Oct  9 09:57:15 compute-1 nova_compute[162974]:    <system>
Oct  9 09:57:15 compute-1 nova_compute[162974]:      <entry name='manufacturer'>RDO</entry>
Oct  9 09:57:15 compute-1 nova_compute[162974]:      <entry name='product'>OpenStack Compute</entry>
Oct  9 09:57:15 compute-1 nova_compute[162974]:      <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  9 09:57:15 compute-1 nova_compute[162974]:      <entry name='serial'>e9c8bd36-8fc8-4412-a6ea-b5b3e5b79f01</entry>
Oct  9 09:57:15 compute-1 nova_compute[162974]:      <entry name='uuid'>e9c8bd36-8fc8-4412-a6ea-b5b3e5b79f01</entry>
Oct  9 09:57:15 compute-1 nova_compute[162974]:      <entry name='family'>Virtual Machine</entry>
Oct  9 09:57:15 compute-1 nova_compute[162974]:    </system>
Oct  9 09:57:15 compute-1 nova_compute[162974]:  </sysinfo>
Oct  9 09:57:15 compute-1 nova_compute[162974]:  <os>
Oct  9 09:57:15 compute-1 nova_compute[162974]:    <type arch='x86_64' machine='pc-q35-rhel9.6.0'>hvm</type>
Oct  9 09:57:15 compute-1 nova_compute[162974]:    <boot dev='hd'/>
Oct  9 09:57:15 compute-1 nova_compute[162974]:    <smbios mode='sysinfo'/>
Oct  9 09:57:15 compute-1 nova_compute[162974]:  </os>
Oct  9 09:57:15 compute-1 nova_compute[162974]:  <features>
Oct  9 09:57:15 compute-1 nova_compute[162974]:    <acpi/>
Oct  9 09:57:15 compute-1 nova_compute[162974]:    <apic/>
Oct  9 09:57:15 compute-1 nova_compute[162974]:    <vmcoreinfo state='on'/>
Oct  9 09:57:15 compute-1 nova_compute[162974]:  </features>
Oct  9 09:57:15 compute-1 nova_compute[162974]:  <cpu mode='custom' match='exact' check='full'>
Oct  9 09:57:15 compute-1 nova_compute[162974]:    <model fallback='forbid'>EPYC-Milan</model>
Oct  9 09:57:15 compute-1 nova_compute[162974]:    <vendor>AMD</vendor>
Oct  9 09:57:15 compute-1 nova_compute[162974]:    <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Oct  9 09:57:15 compute-1 nova_compute[162974]:    <feature policy='require' name='x2apic'/>
Oct  9 09:57:15 compute-1 nova_compute[162974]:    <feature policy='require' name='tsc-deadline'/>
Oct  9 09:57:15 compute-1 nova_compute[162974]:    <feature policy='require' name='hypervisor'/>
Oct  9 09:57:15 compute-1 nova_compute[162974]:    <feature policy='require' name='tsc_adjust'/>
Oct  9 09:57:15 compute-1 nova_compute[162974]:    <feature policy='require' name='vaes'/>
Oct  9 09:57:15 compute-1 nova_compute[162974]:    <feature policy='require' name='vpclmulqdq'/>
Oct  9 09:57:15 compute-1 nova_compute[162974]:    <feature policy='require' name='spec-ctrl'/>
Oct  9 09:57:15 compute-1 nova_compute[162974]:    <feature policy='require' name='stibp'/>
Oct  9 09:57:15 compute-1 nova_compute[162974]:    <feature policy='require' name='arch-capabilities'/>
Oct  9 09:57:15 compute-1 nova_compute[162974]:    <feature policy='require' name='ssbd'/>
Oct  9 09:57:15 compute-1 nova_compute[162974]:    <feature policy='require' name='cmp_legacy'/>
Oct  9 09:57:15 compute-1 nova_compute[162974]:    <feature policy='require' name='overflow-recov'/>
Oct  9 09:57:15 compute-1 nova_compute[162974]:    <feature policy='require' name='succor'/>
Oct  9 09:57:15 compute-1 nova_compute[162974]:    <feature policy='require' name='virt-ssbd'/>
Oct  9 09:57:15 compute-1 nova_compute[162974]:    <feature policy='disable' name='lbrv'/>
Oct  9 09:57:15 compute-1 nova_compute[162974]:    <feature policy='disable' name='tsc-scale'/>
Oct  9 09:57:15 compute-1 nova_compute[162974]:    <feature policy='disable' name='vmcb-clean'/>
Oct  9 09:57:15 compute-1 nova_compute[162974]:    <feature policy='disable' name='flushbyasid'/>
Oct  9 09:57:15 compute-1 nova_compute[162974]:    <feature policy='disable' name='pause-filter'/>
Oct  9 09:57:15 compute-1 nova_compute[162974]:    <feature policy='disable' name='pfthreshold'/>
Oct  9 09:57:15 compute-1 nova_compute[162974]:    <feature policy='disable' name='v-vmsave-vmload'/>
Oct  9 09:57:15 compute-1 nova_compute[162974]:    <feature policy='disable' name='vgif'/>
Oct  9 09:57:15 compute-1 nova_compute[162974]:    <feature policy='require' name='lfence-always-serializing'/>
Oct  9 09:57:15 compute-1 nova_compute[162974]:    <feature policy='require' name='rdctl-no'/>
Oct  9 09:57:15 compute-1 nova_compute[162974]:    <feature policy='require' name='skip-l1dfl-vmentry'/>
Oct  9 09:57:15 compute-1 nova_compute[162974]:    <feature policy='require' name='mds-no'/>
Oct  9 09:57:15 compute-1 nova_compute[162974]:    <feature policy='require' name='pschange-mc-no'/>
Oct  9 09:57:15 compute-1 nova_compute[162974]:    <feature policy='require' name='gds-no'/>
Oct  9 09:57:15 compute-1 nova_compute[162974]:    <feature policy='require' name='rfds-no'/>
Oct  9 09:57:15 compute-1 nova_compute[162974]:    <feature policy='disable' name='svm'/>
Oct  9 09:57:15 compute-1 nova_compute[162974]:    <feature policy='require' name='topoext'/>
Oct  9 09:57:15 compute-1 nova_compute[162974]:    <feature policy='disable' name='npt'/>
Oct  9 09:57:15 compute-1 nova_compute[162974]:    <feature policy='disable' name='nrip-save'/>
Oct  9 09:57:15 compute-1 nova_compute[162974]:    <feature policy='disable' name='svme-addr-chk'/>
Oct  9 09:57:15 compute-1 nova_compute[162974]:  </cpu>
Oct  9 09:57:15 compute-1 nova_compute[162974]:  <clock offset='utc'>
Oct  9 09:57:15 compute-1 nova_compute[162974]:    <timer name='pit' tickpolicy='delay'/>
Oct  9 09:57:15 compute-1 nova_compute[162974]:    <timer name='rtc' tickpolicy='catchup'/>
Oct  9 09:57:15 compute-1 nova_compute[162974]:    <timer name='hpet' present='no'/>
Oct  9 09:57:15 compute-1 nova_compute[162974]:  </clock>
Oct  9 09:57:15 compute-1 nova_compute[162974]:  <on_poweroff>destroy</on_poweroff>
Oct  9 09:57:15 compute-1 nova_compute[162974]:  <on_reboot>restart</on_reboot>
Oct  9 09:57:15 compute-1 nova_compute[162974]:  <on_crash>destroy</on_crash>
Oct  9 09:57:15 compute-1 nova_compute[162974]:  <devices>
Oct  9 09:57:15 compute-1 nova_compute[162974]:    <emulator>/usr/libexec/qemu-kvm</emulator>
Oct  9 09:57:15 compute-1 nova_compute[162974]:    <disk type='network' device='disk'>
Oct  9 09:57:15 compute-1 nova_compute[162974]:      <driver name='qemu' type='raw' cache='none'/>
Oct  9 09:57:15 compute-1 nova_compute[162974]:      <auth username='openstack'>
Oct  9 09:57:15 compute-1 nova_compute[162974]:        <secret type='ceph' uuid='286f8bf0-da72-5823-9a4e-ac4457d9e609'/>
Oct  9 09:57:15 compute-1 nova_compute[162974]:      </auth>
Oct  9 09:57:15 compute-1 nova_compute[162974]:      <source protocol='rbd' name='vms/e9c8bd36-8fc8-4412-a6ea-b5b3e5b79f01_disk' index='2'>
Oct  9 09:57:15 compute-1 nova_compute[162974]:        <host name='192.168.122.100' port='6789'/>
Oct  9 09:57:15 compute-1 nova_compute[162974]:        <host name='192.168.122.102' port='6789'/>
Oct  9 09:57:15 compute-1 nova_compute[162974]:        <host name='192.168.122.101' port='6789'/>
Oct  9 09:57:15 compute-1 nova_compute[162974]:      </source>
Oct  9 09:57:15 compute-1 nova_compute[162974]:      <target dev='vda' bus='virtio'/>
Oct  9 09:57:15 compute-1 nova_compute[162974]:      <alias name='virtio-disk0'/>
Oct  9 09:57:15 compute-1 nova_compute[162974]:      <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Oct  9 09:57:15 compute-1 nova_compute[162974]:    </disk>
Oct  9 09:57:15 compute-1 nova_compute[162974]:    <disk type='network' device='cdrom'>
Oct  9 09:57:15 compute-1 nova_compute[162974]:      <driver name='qemu' type='raw' cache='none'/>
Oct  9 09:57:15 compute-1 nova_compute[162974]:      <auth username='openstack'>
Oct  9 09:57:15 compute-1 nova_compute[162974]:        <secret type='ceph' uuid='286f8bf0-da72-5823-9a4e-ac4457d9e609'/>
Oct  9 09:57:15 compute-1 nova_compute[162974]:      </auth>
Oct  9 09:57:15 compute-1 nova_compute[162974]:      <source protocol='rbd' name='vms/e9c8bd36-8fc8-4412-a6ea-b5b3e5b79f01_disk.config' index='1'>
Oct  9 09:57:15 compute-1 nova_compute[162974]:        <host name='192.168.122.100' port='6789'/>
Oct  9 09:57:15 compute-1 nova_compute[162974]:        <host name='192.168.122.102' port='6789'/>
Oct  9 09:57:15 compute-1 nova_compute[162974]:        <host name='192.168.122.101' port='6789'/>
Oct  9 09:57:15 compute-1 nova_compute[162974]:      </source>
Oct  9 09:57:15 compute-1 nova_compute[162974]:      <target dev='sda' bus='sata'/>
Oct  9 09:57:15 compute-1 nova_compute[162974]:      <readonly/>
Oct  9 09:57:15 compute-1 nova_compute[162974]:      <alias name='sata0-0-0'/>
Oct  9 09:57:15 compute-1 nova_compute[162974]:      <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Oct  9 09:57:15 compute-1 nova_compute[162974]:    </disk>
Oct  9 09:57:15 compute-1 nova_compute[162974]:    <controller type='pci' index='0' model='pcie-root'>
Oct  9 09:57:15 compute-1 nova_compute[162974]:      <alias name='pcie.0'/>
Oct  9 09:57:15 compute-1 nova_compute[162974]:    </controller>
Oct  9 09:57:15 compute-1 nova_compute[162974]:    <controller type='pci' index='1' model='pcie-root-port'>
Oct  9 09:57:15 compute-1 nova_compute[162974]:      <model name='pcie-root-port'/>
Oct  9 09:57:15 compute-1 nova_compute[162974]:      <target chassis='1' port='0x10'/>
Oct  9 09:57:15 compute-1 nova_compute[162974]:      <alias name='pci.1'/>
Oct  9 09:57:15 compute-1 nova_compute[162974]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Oct  9 09:57:15 compute-1 nova_compute[162974]:    </controller>
Oct  9 09:57:15 compute-1 nova_compute[162974]:    <controller type='pci' index='2' model='pcie-root-port'>
Oct  9 09:57:15 compute-1 nova_compute[162974]:      <model name='pcie-root-port'/>
Oct  9 09:57:15 compute-1 nova_compute[162974]:      <target chassis='2' port='0x11'/>
Oct  9 09:57:15 compute-1 nova_compute[162974]:      <alias name='pci.2'/>
Oct  9 09:57:15 compute-1 nova_compute[162974]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Oct  9 09:57:15 compute-1 nova_compute[162974]:    </controller>
Oct  9 09:57:15 compute-1 nova_compute[162974]:    <controller type='pci' index='3' model='pcie-root-port'>
Oct  9 09:57:15 compute-1 nova_compute[162974]:      <model name='pcie-root-port'/>
Oct  9 09:57:15 compute-1 nova_compute[162974]:      <target chassis='3' port='0x12'/>
Oct  9 09:57:15 compute-1 nova_compute[162974]:      <alias name='pci.3'/>
Oct  9 09:57:15 compute-1 nova_compute[162974]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Oct  9 09:57:15 compute-1 nova_compute[162974]:    </controller>
Oct  9 09:57:15 compute-1 nova_compute[162974]:    <controller type='pci' index='4' model='pcie-root-port'>
Oct  9 09:57:15 compute-1 nova_compute[162974]:      <model name='pcie-root-port'/>
Oct  9 09:57:15 compute-1 nova_compute[162974]:      <target chassis='4' port='0x13'/>
Oct  9 09:57:15 compute-1 nova_compute[162974]:      <alias name='pci.4'/>
Oct  9 09:57:15 compute-1 nova_compute[162974]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Oct  9 09:57:15 compute-1 nova_compute[162974]:    </controller>
Oct  9 09:57:15 compute-1 nova_compute[162974]:    <controller type='pci' index='5' model='pcie-root-port'>
Oct  9 09:57:15 compute-1 nova_compute[162974]:      <model name='pcie-root-port'/>
Oct  9 09:57:15 compute-1 nova_compute[162974]:      <target chassis='5' port='0x14'/>
Oct  9 09:57:15 compute-1 nova_compute[162974]:      <alias name='pci.5'/>
Oct  9 09:57:15 compute-1 nova_compute[162974]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Oct  9 09:57:15 compute-1 nova_compute[162974]:    </controller>
Oct  9 09:57:15 compute-1 nova_compute[162974]:    <controller type='pci' index='6' model='pcie-root-port'>
Oct  9 09:57:15 compute-1 nova_compute[162974]:      <model name='pcie-root-port'/>
Oct  9 09:57:15 compute-1 nova_compute[162974]:      <target chassis='6' port='0x15'/>
Oct  9 09:57:15 compute-1 nova_compute[162974]:      <alias name='pci.6'/>
Oct  9 09:57:15 compute-1 nova_compute[162974]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Oct  9 09:57:15 compute-1 nova_compute[162974]:    </controller>
Oct  9 09:57:15 compute-1 nova_compute[162974]:    <controller type='pci' index='7' model='pcie-root-port'>
Oct  9 09:57:15 compute-1 nova_compute[162974]:      <model name='pcie-root-port'/>
Oct  9 09:57:15 compute-1 nova_compute[162974]:      <target chassis='7' port='0x16'/>
Oct  9 09:57:15 compute-1 nova_compute[162974]:      <alias name='pci.7'/>
Oct  9 09:57:15 compute-1 nova_compute[162974]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Oct  9 09:57:15 compute-1 nova_compute[162974]:    </controller>
Oct  9 09:57:15 compute-1 nova_compute[162974]:    <controller type='pci' index='8' model='pcie-root-port'>
Oct  9 09:57:15 compute-1 nova_compute[162974]:      <model name='pcie-root-port'/>
Oct  9 09:57:15 compute-1 nova_compute[162974]:      <target chassis='8' port='0x17'/>
Oct  9 09:57:15 compute-1 nova_compute[162974]:      <alias name='pci.8'/>
Oct  9 09:57:15 compute-1 nova_compute[162974]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Oct  9 09:57:15 compute-1 nova_compute[162974]:    </controller>
Oct  9 09:57:15 compute-1 nova_compute[162974]:    <controller type='pci' index='9' model='pcie-root-port'>
Oct  9 09:57:15 compute-1 nova_compute[162974]:      <model name='pcie-root-port'/>
Oct  9 09:57:15 compute-1 nova_compute[162974]:      <target chassis='9' port='0x18'/>
Oct  9 09:57:15 compute-1 nova_compute[162974]:      <alias name='pci.9'/>
Oct  9 09:57:15 compute-1 nova_compute[162974]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Oct  9 09:57:15 compute-1 nova_compute[162974]:    </controller>
Oct  9 09:57:15 compute-1 nova_compute[162974]:    <controller type='pci' index='10' model='pcie-root-port'>
Oct  9 09:57:15 compute-1 nova_compute[162974]:      <model name='pcie-root-port'/>
Oct  9 09:57:15 compute-1 nova_compute[162974]:      <target chassis='10' port='0x19'/>
Oct  9 09:57:15 compute-1 nova_compute[162974]:      <alias name='pci.10'/>
Oct  9 09:57:15 compute-1 nova_compute[162974]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Oct  9 09:57:15 compute-1 nova_compute[162974]:    </controller>
Oct  9 09:57:15 compute-1 nova_compute[162974]:    <controller type='pci' index='11' model='pcie-root-port'>
Oct  9 09:57:15 compute-1 nova_compute[162974]:      <model name='pcie-root-port'/>
Oct  9 09:57:15 compute-1 nova_compute[162974]:      <target chassis='11' port='0x1a'/>
Oct  9 09:57:15 compute-1 nova_compute[162974]:      <alias name='pci.11'/>
Oct  9 09:57:15 compute-1 nova_compute[162974]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Oct  9 09:57:15 compute-1 nova_compute[162974]:    </controller>
Oct  9 09:57:15 compute-1 nova_compute[162974]:    <controller type='pci' index='12' model='pcie-root-port'>
Oct  9 09:57:15 compute-1 nova_compute[162974]:      <model name='pcie-root-port'/>
Oct  9 09:57:15 compute-1 nova_compute[162974]:      <target chassis='12' port='0x1b'/>
Oct  9 09:57:15 compute-1 nova_compute[162974]:      <alias name='pci.12'/>
Oct  9 09:57:15 compute-1 nova_compute[162974]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Oct  9 09:57:15 compute-1 nova_compute[162974]:    </controller>
Oct  9 09:57:15 compute-1 nova_compute[162974]:    <controller type='pci' index='13' model='pcie-root-port'>
Oct  9 09:57:15 compute-1 nova_compute[162974]:      <model name='pcie-root-port'/>
Oct  9 09:57:15 compute-1 nova_compute[162974]:      <target chassis='13' port='0x1c'/>
Oct  9 09:57:15 compute-1 nova_compute[162974]:      <alias name='pci.13'/>
Oct  9 09:57:15 compute-1 nova_compute[162974]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Oct  9 09:57:15 compute-1 nova_compute[162974]:    </controller>
Oct  9 09:57:15 compute-1 nova_compute[162974]:    <controller type='pci' index='14' model='pcie-root-port'>
Oct  9 09:57:15 compute-1 nova_compute[162974]:      <model name='pcie-root-port'/>
Oct  9 09:57:15 compute-1 nova_compute[162974]:      <target chassis='14' port='0x1d'/>
Oct  9 09:57:15 compute-1 nova_compute[162974]:      <alias name='pci.14'/>
Oct  9 09:57:15 compute-1 nova_compute[162974]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Oct  9 09:57:15 compute-1 nova_compute[162974]:    </controller>
Oct  9 09:57:15 compute-1 nova_compute[162974]:    <controller type='pci' index='15' model='pcie-root-port'>
Oct  9 09:57:15 compute-1 nova_compute[162974]:      <model name='pcie-root-port'/>
Oct  9 09:57:15 compute-1 nova_compute[162974]:      <target chassis='15' port='0x1e'/>
Oct  9 09:57:15 compute-1 nova_compute[162974]:      <alias name='pci.15'/>
Oct  9 09:57:15 compute-1 nova_compute[162974]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Oct  9 09:57:15 compute-1 nova_compute[162974]:    </controller>
Oct  9 09:57:15 compute-1 nova_compute[162974]:    <controller type='pci' index='16' model='pcie-root-port'>
Oct  9 09:57:15 compute-1 nova_compute[162974]:      <model name='pcie-root-port'/>
Oct  9 09:57:15 compute-1 nova_compute[162974]:      <target chassis='16' port='0x1f'/>
Oct  9 09:57:15 compute-1 nova_compute[162974]:      <alias name='pci.16'/>
Oct  9 09:57:15 compute-1 nova_compute[162974]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Oct  9 09:57:15 compute-1 nova_compute[162974]:    </controller>
Oct  9 09:57:15 compute-1 nova_compute[162974]:    <controller type='pci' index='17' model='pcie-root-port'>
Oct  9 09:57:15 compute-1 nova_compute[162974]:      <model name='pcie-root-port'/>
Oct  9 09:57:15 compute-1 nova_compute[162974]:      <target chassis='17' port='0x20'/>
Oct  9 09:57:15 compute-1 nova_compute[162974]:      <alias name='pci.17'/>
Oct  9 09:57:15 compute-1 nova_compute[162974]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Oct  9 09:57:15 compute-1 nova_compute[162974]:    </controller>
Oct  9 09:57:15 compute-1 nova_compute[162974]:    <controller type='pci' index='18' model='pcie-root-port'>
Oct  9 09:57:15 compute-1 nova_compute[162974]:      <model name='pcie-root-port'/>
Oct  9 09:57:15 compute-1 nova_compute[162974]:      <target chassis='18' port='0x21'/>
Oct  9 09:57:15 compute-1 nova_compute[162974]:      <alias name='pci.18'/>
Oct  9 09:57:15 compute-1 nova_compute[162974]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Oct  9 09:57:15 compute-1 nova_compute[162974]:    </controller>
Oct  9 09:57:15 compute-1 nova_compute[162974]:    <controller type='pci' index='19' model='pcie-root-port'>
Oct  9 09:57:15 compute-1 nova_compute[162974]:      <model name='pcie-root-port'/>
Oct  9 09:57:15 compute-1 nova_compute[162974]:      <target chassis='19' port='0x22'/>
Oct  9 09:57:15 compute-1 nova_compute[162974]:      <alias name='pci.19'/>
Oct  9 09:57:15 compute-1 nova_compute[162974]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Oct  9 09:57:15 compute-1 nova_compute[162974]:    </controller>
Oct  9 09:57:15 compute-1 nova_compute[162974]:    <controller type='pci' index='20' model='pcie-root-port'>
Oct  9 09:57:15 compute-1 nova_compute[162974]:      <model name='pcie-root-port'/>
Oct  9 09:57:15 compute-1 nova_compute[162974]:      <target chassis='20' port='0x23'/>
Oct  9 09:57:15 compute-1 nova_compute[162974]:      <alias name='pci.20'/>
Oct  9 09:57:15 compute-1 nova_compute[162974]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Oct  9 09:57:15 compute-1 nova_compute[162974]:    </controller>
Oct  9 09:57:15 compute-1 nova_compute[162974]:    <controller type='pci' index='21' model='pcie-root-port'>
Oct  9 09:57:15 compute-1 nova_compute[162974]:      <model name='pcie-root-port'/>
Oct  9 09:57:15 compute-1 nova_compute[162974]:      <target chassis='21' port='0x24'/>
Oct  9 09:57:15 compute-1 nova_compute[162974]:      <alias name='pci.21'/>
Oct  9 09:57:15 compute-1 nova_compute[162974]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Oct  9 09:57:15 compute-1 nova_compute[162974]:    </controller>
Oct  9 09:57:15 compute-1 nova_compute[162974]:    <controller type='pci' index='22' model='pcie-root-port'>
Oct  9 09:57:15 compute-1 nova_compute[162974]:      <model name='pcie-root-port'/>
Oct  9 09:57:15 compute-1 nova_compute[162974]:      <target chassis='22' port='0x25'/>
Oct  9 09:57:15 compute-1 nova_compute[162974]:      <alias name='pci.22'/>
Oct  9 09:57:15 compute-1 nova_compute[162974]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Oct  9 09:57:15 compute-1 nova_compute[162974]:    </controller>
Oct  9 09:57:15 compute-1 nova_compute[162974]:    <controller type='pci' index='23' model='pcie-root-port'>
Oct  9 09:57:15 compute-1 nova_compute[162974]:      <model name='pcie-root-port'/>
Oct  9 09:57:15 compute-1 nova_compute[162974]:      <target chassis='23' port='0x26'/>
Oct  9 09:57:15 compute-1 nova_compute[162974]:      <alias name='pci.23'/>
Oct  9 09:57:15 compute-1 nova_compute[162974]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Oct  9 09:57:15 compute-1 nova_compute[162974]:    </controller>
Oct  9 09:57:15 compute-1 nova_compute[162974]:    <controller type='pci' index='24' model='pcie-root-port'>
Oct  9 09:57:15 compute-1 nova_compute[162974]:      <model name='pcie-root-port'/>
Oct  9 09:57:15 compute-1 nova_compute[162974]:      <target chassis='24' port='0x27'/>
Oct  9 09:57:15 compute-1 nova_compute[162974]:      <alias name='pci.24'/>
Oct  9 09:57:15 compute-1 nova_compute[162974]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Oct  9 09:57:15 compute-1 nova_compute[162974]:    </controller>
Oct  9 09:57:15 compute-1 nova_compute[162974]:    <controller type='pci' index='25' model='pcie-root-port'>
Oct  9 09:57:15 compute-1 nova_compute[162974]:      <model name='pcie-root-port'/>
Oct  9 09:57:15 compute-1 nova_compute[162974]:      <target chassis='25' port='0x28'/>
Oct  9 09:57:15 compute-1 nova_compute[162974]:      <alias name='pci.25'/>
Oct  9 09:57:15 compute-1 nova_compute[162974]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Oct  9 09:57:15 compute-1 nova_compute[162974]:    </controller>
Oct  9 09:57:15 compute-1 nova_compute[162974]:    <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Oct  9 09:57:15 compute-1 nova_compute[162974]:      <model name='pcie-pci-bridge'/>
Oct  9 09:57:15 compute-1 nova_compute[162974]:      <alias name='pci.26'/>
Oct  9 09:57:15 compute-1 nova_compute[162974]:      <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Oct  9 09:57:15 compute-1 nova_compute[162974]:    </controller>
Oct  9 09:57:15 compute-1 nova_compute[162974]:    <controller type='usb' index='0' model='piix3-uhci'>
Oct  9 09:57:15 compute-1 nova_compute[162974]:      <alias name='usb'/>
Oct  9 09:57:15 compute-1 nova_compute[162974]:      <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Oct  9 09:57:15 compute-1 nova_compute[162974]:    </controller>
Oct  9 09:57:15 compute-1 nova_compute[162974]:    <controller type='sata' index='0'>
Oct  9 09:57:15 compute-1 nova_compute[162974]:      <alias name='ide'/>
Oct  9 09:57:15 compute-1 nova_compute[162974]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Oct  9 09:57:15 compute-1 nova_compute[162974]:    </controller>
Oct  9 09:57:15 compute-1 nova_compute[162974]:    <interface type='ethernet'>
Oct  9 09:57:15 compute-1 nova_compute[162974]:      <mac address='fa:16:3e:4d:30:c8'/>
Oct  9 09:57:15 compute-1 nova_compute[162974]:      <target dev='tap8d2d29b3-65'/>
Oct  9 09:57:15 compute-1 nova_compute[162974]:      <model type='virtio'/>
Oct  9 09:57:15 compute-1 nova_compute[162974]:      <driver name='vhost' rx_queue_size='512'/>
Oct  9 09:57:15 compute-1 nova_compute[162974]:      <mtu size='1442'/>
Oct  9 09:57:15 compute-1 nova_compute[162974]:      <alias name='net0'/>
Oct  9 09:57:15 compute-1 nova_compute[162974]:      <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Oct  9 09:57:15 compute-1 nova_compute[162974]:    </interface>
Oct  9 09:57:15 compute-1 nova_compute[162974]:    <interface type='ethernet'>
Oct  9 09:57:15 compute-1 nova_compute[162974]:      <mac address='fa:16:3e:78:96:4c'/>
Oct  9 09:57:15 compute-1 nova_compute[162974]:      <target dev='tap73007432-5b'/>
Oct  9 09:57:15 compute-1 nova_compute[162974]:      <model type='virtio'/>
Oct  9 09:57:15 compute-1 nova_compute[162974]:      <driver name='vhost' rx_queue_size='512'/>
Oct  9 09:57:15 compute-1 nova_compute[162974]:      <mtu size='1442'/>
Oct  9 09:57:15 compute-1 nova_compute[162974]:      <alias name='net1'/>
Oct  9 09:57:15 compute-1 nova_compute[162974]:      <address type='pci' domain='0x0000' bus='0x06' slot='0x00' function='0x0'/>
Oct  9 09:57:15 compute-1 nova_compute[162974]:    </interface>
Oct  9 09:57:15 compute-1 nova_compute[162974]:    <serial type='pty'>
Oct  9 09:57:15 compute-1 nova_compute[162974]:      <source path='/dev/pts/0'/>
Oct  9 09:57:15 compute-1 nova_compute[162974]:      <log file='/var/lib/nova/instances/e9c8bd36-8fc8-4412-a6ea-b5b3e5b79f01/console.log' append='off'/>
Oct  9 09:57:15 compute-1 nova_compute[162974]:      <target type='isa-serial' port='0'>
Oct  9 09:57:15 compute-1 nova_compute[162974]:        <model name='isa-serial'/>
Oct  9 09:57:15 compute-1 nova_compute[162974]:      </target>
Oct  9 09:57:15 compute-1 nova_compute[162974]:      <alias name='serial0'/>
Oct  9 09:57:15 compute-1 nova_compute[162974]:    </serial>
Oct  9 09:57:15 compute-1 nova_compute[162974]:    <console type='pty' tty='/dev/pts/0'>
Oct  9 09:57:15 compute-1 nova_compute[162974]:      <source path='/dev/pts/0'/>
Oct  9 09:57:15 compute-1 nova_compute[162974]:      <log file='/var/lib/nova/instances/e9c8bd36-8fc8-4412-a6ea-b5b3e5b79f01/console.log' append='off'/>
Oct  9 09:57:15 compute-1 nova_compute[162974]:      <target type='serial' port='0'/>
Oct  9 09:57:15 compute-1 nova_compute[162974]:      <alias name='serial0'/>
Oct  9 09:57:15 compute-1 nova_compute[162974]:    </console>
Oct  9 09:57:15 compute-1 nova_compute[162974]:    <input type='tablet' bus='usb'>
Oct  9 09:57:15 compute-1 nova_compute[162974]:      <alias name='input0'/>
Oct  9 09:57:15 compute-1 nova_compute[162974]:      <address type='usb' bus='0' port='1'/>
Oct  9 09:57:15 compute-1 nova_compute[162974]:    </input>
Oct  9 09:57:15 compute-1 nova_compute[162974]:    <input type='mouse' bus='ps2'>
Oct  9 09:57:15 compute-1 nova_compute[162974]:      <alias name='input1'/>
Oct  9 09:57:15 compute-1 nova_compute[162974]:    </input>
Oct  9 09:57:15 compute-1 nova_compute[162974]:    <input type='keyboard' bus='ps2'>
Oct  9 09:57:15 compute-1 nova_compute[162974]:      <alias name='input2'/>
Oct  9 09:57:15 compute-1 nova_compute[162974]:    </input>
Oct  9 09:57:15 compute-1 nova_compute[162974]:    <graphics type='vnc' port='5900' autoport='yes' listen='::0'>
Oct  9 09:57:15 compute-1 nova_compute[162974]:      <listen type='address' address='::0'/>
Oct  9 09:57:15 compute-1 nova_compute[162974]:    </graphics>
Oct  9 09:57:15 compute-1 nova_compute[162974]:    <audio id='1' type='none'/>
Oct  9 09:57:15 compute-1 nova_compute[162974]:    <video>
Oct  9 09:57:15 compute-1 nova_compute[162974]:      <model type='virtio' heads='1' primary='yes'/>
Oct  9 09:57:15 compute-1 nova_compute[162974]:      <alias name='video0'/>
Oct  9 09:57:15 compute-1 nova_compute[162974]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Oct  9 09:57:15 compute-1 nova_compute[162974]:    </video>
Oct  9 09:57:15 compute-1 nova_compute[162974]:    <watchdog model='itco' action='reset'>
Oct  9 09:57:15 compute-1 nova_compute[162974]:      <alias name='watchdog0'/>
Oct  9 09:57:15 compute-1 nova_compute[162974]:    </watchdog>
Oct  9 09:57:15 compute-1 nova_compute[162974]:    <memballoon model='virtio'>
Oct  9 09:57:15 compute-1 nova_compute[162974]:      <stats period='10'/>
Oct  9 09:57:15 compute-1 nova_compute[162974]:      <alias name='balloon0'/>
Oct  9 09:57:15 compute-1 nova_compute[162974]:      <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Oct  9 09:57:15 compute-1 nova_compute[162974]:    </memballoon>
Oct  9 09:57:15 compute-1 nova_compute[162974]:    <rng model='virtio'>
Oct  9 09:57:15 compute-1 nova_compute[162974]:      <backend model='random'>/dev/urandom</backend>
Oct  9 09:57:15 compute-1 nova_compute[162974]:      <alias name='rng0'/>
Oct  9 09:57:15 compute-1 nova_compute[162974]:      <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Oct  9 09:57:15 compute-1 nova_compute[162974]:    </rng>
Oct  9 09:57:15 compute-1 nova_compute[162974]:  </devices>
Oct  9 09:57:15 compute-1 nova_compute[162974]:  <seclabel type='dynamic' model='selinux' relabel='yes'>
Oct  9 09:57:15 compute-1 nova_compute[162974]:    <label>system_u:system_r:svirt_t:s0:c214,c252</label>
Oct  9 09:57:15 compute-1 nova_compute[162974]:    <imagelabel>system_u:object_r:svirt_image_t:s0:c214,c252</imagelabel>
Oct  9 09:57:15 compute-1 nova_compute[162974]:  </seclabel>
Oct  9 09:57:15 compute-1 nova_compute[162974]:  <seclabel type='dynamic' model='dac' relabel='yes'>
Oct  9 09:57:15 compute-1 nova_compute[162974]:    <label>+107:+107</label>
Oct  9 09:57:15 compute-1 nova_compute[162974]:    <imagelabel>+107:+107</imagelabel>
Oct  9 09:57:15 compute-1 nova_compute[162974]:  </seclabel>
Oct  9 09:57:15 compute-1 nova_compute[162974]: </domain>
Oct  9 09:57:15 compute-1 nova_compute[162974]: get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282#033[00m
Oct  9 09:57:15 compute-1 nova_compute[162974]: 2025-10-09 09:57:15.199 2 INFO nova.virt.libvirt.driver [None req-c01782dc-90ba-4236-8829-01c3872c87cd 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Successfully detached device tap73007432-5b from instance e9c8bd36-8fc8-4412-a6ea-b5b3e5b79f01 from the persistent domain config.#033[00m
Oct  9 09:57:15 compute-1 nova_compute[162974]: 2025-10-09 09:57:15.199 2 DEBUG nova.virt.libvirt.driver [None req-c01782dc-90ba-4236-8829-01c3872c87cd 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] (1/8): Attempting to detach device tap73007432-5b with device alias net1 from instance e9c8bd36-8fc8-4412-a6ea-b5b3e5b79f01 from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523#033[00m
Oct  9 09:57:15 compute-1 nova_compute[162974]: 2025-10-09 09:57:15.199 2 DEBUG nova.virt.libvirt.guest [None req-c01782dc-90ba-4236-8829-01c3872c87cd 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] detach device xml: <interface type="ethernet">
Oct  9 09:57:15 compute-1 nova_compute[162974]:  <mac address="fa:16:3e:78:96:4c"/>
Oct  9 09:57:15 compute-1 nova_compute[162974]:  <model type="virtio"/>
Oct  9 09:57:15 compute-1 nova_compute[162974]:  <driver name="vhost" rx_queue_size="512"/>
Oct  9 09:57:15 compute-1 nova_compute[162974]:  <mtu size="1442"/>
Oct  9 09:57:15 compute-1 nova_compute[162974]:  <target dev="tap73007432-5b"/>
Oct  9 09:57:15 compute-1 nova_compute[162974]: </interface>
Oct  9 09:57:15 compute-1 nova_compute[162974]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465#033[00m
Oct  9 09:57:15 compute-1 nova_compute[162974]: 2025-10-09 09:57:15.219 2 DEBUG nova.network.neutron [req-d4d43015-2fc5-4010-8701-7990c3a6ff69 req-2cfb32a8-0ba0-48f3-bafa-f0cfbeb027b9 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: e9c8bd36-8fc8-4412-a6ea-b5b3e5b79f01] Updated VIF entry in instance network info cache for port 73007432-5bb0-435a-a871-05f59846a277. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  9 09:57:15 compute-1 nova_compute[162974]: 2025-10-09 09:57:15.220 2 DEBUG nova.network.neutron [req-d4d43015-2fc5-4010-8701-7990c3a6ff69 req-2cfb32a8-0ba0-48f3-bafa-f0cfbeb027b9 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: e9c8bd36-8fc8-4412-a6ea-b5b3e5b79f01] Updating instance_info_cache with network_info: [{"id": "8d2d29b3-6509-4de5-a0a2-f7ef942f9a7f", "address": "fa:16:3e:4d:30:c8", "network": {"id": "26c660ed-37e9-4f44-b603-3901342edf9b", "bridge": "br-int", "label": "tempest-network-smoke--903239616", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.219", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c69d102fb5504f48809f5fc47f1cb831", "mtu": null, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8d2d29b3-65", "ovs_interfaceid": "8d2d29b3-6509-4de5-a0a2-f7ef942f9a7f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "73007432-5bb0-435a-a871-05f59846a277", "address": "fa:16:3e:78:96:4c", "network": {"id": "3c62a73d-d0d8-493b-b929-9ae564924767", "bridge": "br-int", "label": "tempest-network-smoke--1483077116", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.19", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, 
"tenant_id": "c69d102fb5504f48809f5fc47f1cb831", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap73007432-5b", "ovs_interfaceid": "73007432-5bb0-435a-a871-05f59846a277", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  9 09:57:15 compute-1 nova_compute[162974]: 2025-10-09 09:57:15.229 2 DEBUG oslo_concurrency.lockutils [req-d4d43015-2fc5-4010-8701-7990c3a6ff69 req-2cfb32a8-0ba0-48f3-bafa-f0cfbeb027b9 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] Releasing lock "refresh_cache-e9c8bd36-8fc8-4412-a6ea-b5b3e5b79f01" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  9 09:57:15 compute-1 kernel: tap73007432-5b (unregistering): left promiscuous mode
Oct  9 09:57:15 compute-1 NetworkManager[982]: <info>  [1760003835.2932] device (tap73007432-5b): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  9 09:57:15 compute-1 ovn_controller[62080]: 2025-10-09T09:57:15Z|00048|binding|INFO|Releasing lport 73007432-5bb0-435a-a871-05f59846a277 from this chassis (sb_readonly=0)
Oct  9 09:57:15 compute-1 ovn_controller[62080]: 2025-10-09T09:57:15Z|00049|binding|INFO|Setting lport 73007432-5bb0-435a-a871-05f59846a277 down in Southbound
Oct  9 09:57:15 compute-1 ovn_controller[62080]: 2025-10-09T09:57:15Z|00050|binding|INFO|Removing iface tap73007432-5b ovn-installed in OVS
Oct  9 09:57:15 compute-1 nova_compute[162974]: 2025-10-09 09:57:15.302 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 09:57:15 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:57:15.306 71059 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:78:96:4c 10.100.0.19'], port_security=['fa:16:3e:78:96:4c 10.100.0.19'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.19/28', 'neutron:device_id': 'e9c8bd36-8fc8-4412-a6ea-b5b3e5b79f01', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-3c62a73d-d0d8-493b-b929-9ae564924767', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c69d102fb5504f48809f5fc47f1cb831', 'neutron:revision_number': '4', 'neutron:security_group_ids': '938aac20-7e1a-43e3-b950-0829bdd160e1', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=1e91cefc-5914-40f8-95c0-e51a38aae1ba, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcc797b4850>], logical_port=73007432-5bb0-435a-a871-05f59846a277) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fcc797b4850>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  9 09:57:15 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:57:15.307 71059 INFO neutron.agent.ovn.metadata.agent [-] Port 73007432-5bb0-435a-a871-05f59846a277 in datapath 3c62a73d-d0d8-493b-b929-9ae564924767 unbound from our chassis#033[00m
Oct  9 09:57:15 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:57:15.308 71059 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 3c62a73d-d0d8-493b-b929-9ae564924767, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct  9 09:57:15 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:57:15.309 165637 DEBUG oslo.privsep.daemon [-] privsep: reply[db7d1b18-c53e-4e32-bcf4-b32c0b74c8e0]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  9 09:57:15 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:57:15.309 71059 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-3c62a73d-d0d8-493b-b929-9ae564924767 namespace which is not needed anymore#033[00m
Oct  9 09:57:15 compute-1 nova_compute[162974]: 2025-10-09 09:57:15.309 2 DEBUG nova.virt.libvirt.driver [None req-433ad811-a058-4508-a429-c5f40f506b5b - - - - - -] Received event <DeviceRemovedEvent: 1760003835.3093555, e9c8bd36-8fc8-4412-a6ea-b5b3e5b79f01 => net1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370#033[00m
Oct  9 09:57:15 compute-1 nova_compute[162974]: 2025-10-09 09:57:15.312 2 DEBUG nova.virt.libvirt.driver [None req-c01782dc-90ba-4236-8829-01c3872c87cd 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Start waiting for the detach event from libvirt for device tap73007432-5b with device alias net1 for instance e9c8bd36-8fc8-4412-a6ea-b5b3e5b79f01 _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599#033[00m
Oct  9 09:57:15 compute-1 nova_compute[162974]: 2025-10-09 09:57:15.312 2 DEBUG nova.virt.libvirt.guest [None req-c01782dc-90ba-4236-8829-01c3872c87cd 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:78:96:4c"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap73007432-5b"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257#033[00m
Oct  9 09:57:15 compute-1 nova_compute[162974]: 2025-10-09 09:57:15.315 2 DEBUG nova.virt.libvirt.guest [None req-c01782dc-90ba-4236-8829-01c3872c87cd 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:78:96:4c"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap73007432-5b"/></interface>not found in domain: <domain type='kvm' id='2'>
Oct  9 09:57:15 compute-1 nova_compute[162974]:  <name>instance-00000003</name>
Oct  9 09:57:15 compute-1 nova_compute[162974]:  <uuid>e9c8bd36-8fc8-4412-a6ea-b5b3e5b79f01</uuid>
Oct  9 09:57:15 compute-1 nova_compute[162974]:  <metadata>
Oct  9 09:57:15 compute-1 nova_compute[162974]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  9 09:57:15 compute-1 nova_compute[162974]:  <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  9 09:57:15 compute-1 nova_compute[162974]:  <nova:name>tempest-TestNetworkBasicOps-server-61543066</nova:name>
Oct  9 09:57:15 compute-1 nova_compute[162974]:  <nova:creationTime>2025-10-09 09:57:13</nova:creationTime>
Oct  9 09:57:15 compute-1 nova_compute[162974]:  <nova:flavor name="m1.nano">
Oct  9 09:57:15 compute-1 nova_compute[162974]:    <nova:memory>128</nova:memory>
Oct  9 09:57:15 compute-1 nova_compute[162974]:    <nova:disk>1</nova:disk>
Oct  9 09:57:15 compute-1 nova_compute[162974]:    <nova:swap>0</nova:swap>
Oct  9 09:57:15 compute-1 nova_compute[162974]:    <nova:ephemeral>0</nova:ephemeral>
Oct  9 09:57:15 compute-1 nova_compute[162974]:    <nova:vcpus>1</nova:vcpus>
Oct  9 09:57:15 compute-1 nova_compute[162974]:  </nova:flavor>
Oct  9 09:57:15 compute-1 nova_compute[162974]:  <nova:owner>
Oct  9 09:57:15 compute-1 nova_compute[162974]:    <nova:user uuid="2351e05157514d1995a1ea4151d12fee">tempest-TestNetworkBasicOps-74406332-project-member</nova:user>
Oct  9 09:57:15 compute-1 nova_compute[162974]:    <nova:project uuid="c69d102fb5504f48809f5fc47f1cb831">tempest-TestNetworkBasicOps-74406332</nova:project>
Oct  9 09:57:15 compute-1 nova_compute[162974]:  </nova:owner>
Oct  9 09:57:15 compute-1 nova_compute[162974]:  <nova:root type="image" uuid="9546778e-959c-466e-9bef-81ace5bd1cc5"/>
Oct  9 09:57:15 compute-1 nova_compute[162974]:  <nova:ports>
Oct  9 09:57:15 compute-1 nova_compute[162974]:    <nova:port uuid="8d2d29b3-6509-4de5-a0a2-f7ef942f9a7f">
Oct  9 09:57:15 compute-1 nova_compute[162974]:      <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Oct  9 09:57:15 compute-1 nova_compute[162974]:    </nova:port>
Oct  9 09:57:15 compute-1 nova_compute[162974]:    <nova:port uuid="73007432-5bb0-435a-a871-05f59846a277">
Oct  9 09:57:15 compute-1 nova_compute[162974]:      <nova:ip type="fixed" address="10.100.0.19" ipVersion="4"/>
Oct  9 09:57:15 compute-1 nova_compute[162974]:    </nova:port>
Oct  9 09:57:15 compute-1 nova_compute[162974]:  </nova:ports>
Oct  9 09:57:15 compute-1 nova_compute[162974]: </nova:instance>
Oct  9 09:57:15 compute-1 nova_compute[162974]:  </metadata>
Oct  9 09:57:15 compute-1 nova_compute[162974]:  <memory unit='KiB'>131072</memory>
Oct  9 09:57:15 compute-1 nova_compute[162974]:  <currentMemory unit='KiB'>131072</currentMemory>
Oct  9 09:57:15 compute-1 nova_compute[162974]:  <vcpu placement='static'>1</vcpu>
Oct  9 09:57:15 compute-1 nova_compute[162974]:  <resource>
Oct  9 09:57:15 compute-1 nova_compute[162974]:    <partition>/machine</partition>
Oct  9 09:57:15 compute-1 nova_compute[162974]:  </resource>
Oct  9 09:57:15 compute-1 nova_compute[162974]:  <sysinfo type='smbios'>
Oct  9 09:57:15 compute-1 nova_compute[162974]:    <system>
Oct  9 09:57:15 compute-1 nova_compute[162974]:      <entry name='manufacturer'>RDO</entry>
Oct  9 09:57:15 compute-1 nova_compute[162974]:      <entry name='product'>OpenStack Compute</entry>
Oct  9 09:57:15 compute-1 nova_compute[162974]:      <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  9 09:57:15 compute-1 nova_compute[162974]:      <entry name='serial'>e9c8bd36-8fc8-4412-a6ea-b5b3e5b79f01</entry>
Oct  9 09:57:15 compute-1 nova_compute[162974]:      <entry name='uuid'>e9c8bd36-8fc8-4412-a6ea-b5b3e5b79f01</entry>
Oct  9 09:57:15 compute-1 nova_compute[162974]:      <entry name='family'>Virtual Machine</entry>
Oct  9 09:57:15 compute-1 nova_compute[162974]:    </system>
Oct  9 09:57:15 compute-1 nova_compute[162974]:  </sysinfo>
Oct  9 09:57:15 compute-1 nova_compute[162974]:  <os>
Oct  9 09:57:15 compute-1 nova_compute[162974]:    <type arch='x86_64' machine='pc-q35-rhel9.6.0'>hvm</type>
Oct  9 09:57:15 compute-1 nova_compute[162974]:    <boot dev='hd'/>
Oct  9 09:57:15 compute-1 nova_compute[162974]:    <smbios mode='sysinfo'/>
Oct  9 09:57:15 compute-1 nova_compute[162974]:  </os>
Oct  9 09:57:15 compute-1 nova_compute[162974]:  <features>
Oct  9 09:57:15 compute-1 nova_compute[162974]:    <acpi/>
Oct  9 09:57:15 compute-1 nova_compute[162974]:    <apic/>
Oct  9 09:57:15 compute-1 nova_compute[162974]:    <vmcoreinfo state='on'/>
Oct  9 09:57:15 compute-1 nova_compute[162974]:  </features>
Oct  9 09:57:15 compute-1 nova_compute[162974]:  <cpu mode='custom' match='exact' check='full'>
Oct  9 09:57:15 compute-1 nova_compute[162974]:    <model fallback='forbid'>EPYC-Milan</model>
Oct  9 09:57:15 compute-1 nova_compute[162974]:    <vendor>AMD</vendor>
Oct  9 09:57:15 compute-1 nova_compute[162974]:    <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Oct  9 09:57:15 compute-1 nova_compute[162974]:    <feature policy='require' name='x2apic'/>
Oct  9 09:57:15 compute-1 nova_compute[162974]:    <feature policy='require' name='tsc-deadline'/>
Oct  9 09:57:15 compute-1 nova_compute[162974]:    <feature policy='require' name='hypervisor'/>
Oct  9 09:57:15 compute-1 nova_compute[162974]:    <feature policy='require' name='tsc_adjust'/>
Oct  9 09:57:15 compute-1 nova_compute[162974]:    <feature policy='require' name='vaes'/>
Oct  9 09:57:15 compute-1 nova_compute[162974]:    <feature policy='require' name='vpclmulqdq'/>
Oct  9 09:57:15 compute-1 nova_compute[162974]:    <feature policy='require' name='spec-ctrl'/>
Oct  9 09:57:15 compute-1 nova_compute[162974]:    <feature policy='require' name='stibp'/>
Oct  9 09:57:15 compute-1 nova_compute[162974]:    <feature policy='require' name='arch-capabilities'/>
Oct  9 09:57:15 compute-1 nova_compute[162974]:    <feature policy='require' name='ssbd'/>
Oct  9 09:57:15 compute-1 nova_compute[162974]:    <feature policy='require' name='cmp_legacy'/>
Oct  9 09:57:15 compute-1 nova_compute[162974]:    <feature policy='require' name='overflow-recov'/>
Oct  9 09:57:15 compute-1 nova_compute[162974]:    <feature policy='require' name='succor'/>
Oct  9 09:57:15 compute-1 nova_compute[162974]:    <feature policy='require' name='virt-ssbd'/>
Oct  9 09:57:15 compute-1 nova_compute[162974]:    <feature policy='disable' name='lbrv'/>
Oct  9 09:57:15 compute-1 nova_compute[162974]:    <feature policy='disable' name='tsc-scale'/>
Oct  9 09:57:15 compute-1 nova_compute[162974]:    <feature policy='disable' name='vmcb-clean'/>
Oct  9 09:57:15 compute-1 nova_compute[162974]:    <feature policy='disable' name='flushbyasid'/>
Oct  9 09:57:15 compute-1 nova_compute[162974]:    <feature policy='disable' name='pause-filter'/>
Oct  9 09:57:15 compute-1 nova_compute[162974]:    <feature policy='disable' name='pfthreshold'/>
Oct  9 09:57:15 compute-1 nova_compute[162974]:    <feature policy='disable' name='v-vmsave-vmload'/>
Oct  9 09:57:15 compute-1 nova_compute[162974]:    <feature policy='disable' name='vgif'/>
Oct  9 09:57:15 compute-1 nova_compute[162974]:    <feature policy='require' name='lfence-always-serializing'/>
Oct  9 09:57:15 compute-1 nova_compute[162974]:    <feature policy='require' name='rdctl-no'/>
Oct  9 09:57:15 compute-1 nova_compute[162974]:    <feature policy='require' name='skip-l1dfl-vmentry'/>
Oct  9 09:57:15 compute-1 nova_compute[162974]:    <feature policy='require' name='mds-no'/>
Oct  9 09:57:15 compute-1 nova_compute[162974]:    <feature policy='require' name='pschange-mc-no'/>
Oct  9 09:57:15 compute-1 nova_compute[162974]:    <feature policy='require' name='gds-no'/>
Oct  9 09:57:15 compute-1 nova_compute[162974]:    <feature policy='require' name='rfds-no'/>
Oct  9 09:57:15 compute-1 nova_compute[162974]:    <feature policy='disable' name='svm'/>
Oct  9 09:57:15 compute-1 nova_compute[162974]:    <feature policy='require' name='topoext'/>
Oct  9 09:57:15 compute-1 nova_compute[162974]:    <feature policy='disable' name='npt'/>
Oct  9 09:57:15 compute-1 nova_compute[162974]:    <feature policy='disable' name='nrip-save'/>
Oct  9 09:57:15 compute-1 nova_compute[162974]:    <feature policy='disable' name='svme-addr-chk'/>
Oct  9 09:57:15 compute-1 nova_compute[162974]:  </cpu>
Oct  9 09:57:15 compute-1 nova_compute[162974]:  <clock offset='utc'>
Oct  9 09:57:15 compute-1 nova_compute[162974]:    <timer name='pit' tickpolicy='delay'/>
Oct  9 09:57:15 compute-1 nova_compute[162974]:    <timer name='rtc' tickpolicy='catchup'/>
Oct  9 09:57:15 compute-1 nova_compute[162974]:    <timer name='hpet' present='no'/>
Oct  9 09:57:15 compute-1 nova_compute[162974]:  </clock>
Oct  9 09:57:15 compute-1 nova_compute[162974]:  <on_poweroff>destroy</on_poweroff>
Oct  9 09:57:15 compute-1 nova_compute[162974]:  <on_reboot>restart</on_reboot>
Oct  9 09:57:15 compute-1 nova_compute[162974]:  <on_crash>destroy</on_crash>
Oct  9 09:57:15 compute-1 nova_compute[162974]:  <devices>
Oct  9 09:57:15 compute-1 nova_compute[162974]:    <emulator>/usr/libexec/qemu-kvm</emulator>
Oct  9 09:57:15 compute-1 nova_compute[162974]:    <disk type='network' device='disk'>
Oct  9 09:57:15 compute-1 nova_compute[162974]:      <driver name='qemu' type='raw' cache='none'/>
Oct  9 09:57:15 compute-1 nova_compute[162974]:      <auth username='openstack'>
Oct  9 09:57:15 compute-1 nova_compute[162974]:        <secret type='ceph' uuid='286f8bf0-da72-5823-9a4e-ac4457d9e609'/>
Oct  9 09:57:15 compute-1 nova_compute[162974]:      </auth>
Oct  9 09:57:15 compute-1 nova_compute[162974]:      <source protocol='rbd' name='vms/e9c8bd36-8fc8-4412-a6ea-b5b3e5b79f01_disk' index='2'>
Oct  9 09:57:15 compute-1 nova_compute[162974]:        <host name='192.168.122.100' port='6789'/>
Oct  9 09:57:15 compute-1 nova_compute[162974]:        <host name='192.168.122.102' port='6789'/>
Oct  9 09:57:15 compute-1 nova_compute[162974]:        <host name='192.168.122.101' port='6789'/>
Oct  9 09:57:15 compute-1 nova_compute[162974]:      </source>
Oct  9 09:57:15 compute-1 nova_compute[162974]:      <target dev='vda' bus='virtio'/>
Oct  9 09:57:15 compute-1 nova_compute[162974]:      <alias name='virtio-disk0'/>
Oct  9 09:57:15 compute-1 nova_compute[162974]:      <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Oct  9 09:57:15 compute-1 nova_compute[162974]:    </disk>
Oct  9 09:57:15 compute-1 nova_compute[162974]:    <disk type='network' device='cdrom'>
Oct  9 09:57:15 compute-1 nova_compute[162974]:      <driver name='qemu' type='raw' cache='none'/>
Oct  9 09:57:15 compute-1 nova_compute[162974]:      <auth username='openstack'>
Oct  9 09:57:15 compute-1 nova_compute[162974]:        <secret type='ceph' uuid='286f8bf0-da72-5823-9a4e-ac4457d9e609'/>
Oct  9 09:57:15 compute-1 nova_compute[162974]:      </auth>
Oct  9 09:57:15 compute-1 nova_compute[162974]:      <source protocol='rbd' name='vms/e9c8bd36-8fc8-4412-a6ea-b5b3e5b79f01_disk.config' index='1'>
Oct  9 09:57:15 compute-1 nova_compute[162974]:        <host name='192.168.122.100' port='6789'/>
Oct  9 09:57:15 compute-1 nova_compute[162974]:        <host name='192.168.122.102' port='6789'/>
Oct  9 09:57:15 compute-1 nova_compute[162974]:        <host name='192.168.122.101' port='6789'/>
Oct  9 09:57:15 compute-1 nova_compute[162974]:      </source>
Oct  9 09:57:15 compute-1 nova_compute[162974]:      <target dev='sda' bus='sata'/>
Oct  9 09:57:15 compute-1 nova_compute[162974]:      <readonly/>
Oct  9 09:57:15 compute-1 nova_compute[162974]:      <alias name='sata0-0-0'/>
Oct  9 09:57:15 compute-1 nova_compute[162974]:      <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Oct  9 09:57:15 compute-1 nova_compute[162974]:    </disk>
Oct  9 09:57:15 compute-1 nova_compute[162974]:    <controller type='pci' index='0' model='pcie-root'>
Oct  9 09:57:15 compute-1 nova_compute[162974]:      <alias name='pcie.0'/>
Oct  9 09:57:15 compute-1 nova_compute[162974]:    </controller>
Oct  9 09:57:15 compute-1 nova_compute[162974]:    <controller type='pci' index='1' model='pcie-root-port'>
Oct  9 09:57:15 compute-1 nova_compute[162974]:      <model name='pcie-root-port'/>
Oct  9 09:57:15 compute-1 nova_compute[162974]:      <target chassis='1' port='0x10'/>
Oct  9 09:57:15 compute-1 nova_compute[162974]:      <alias name='pci.1'/>
Oct  9 09:57:15 compute-1 nova_compute[162974]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Oct  9 09:57:15 compute-1 nova_compute[162974]:    </controller>
Oct  9 09:57:15 compute-1 nova_compute[162974]:    <controller type='pci' index='2' model='pcie-root-port'>
Oct  9 09:57:15 compute-1 nova_compute[162974]:      <model name='pcie-root-port'/>
Oct  9 09:57:15 compute-1 nova_compute[162974]:      <target chassis='2' port='0x11'/>
Oct  9 09:57:15 compute-1 nova_compute[162974]:      <alias name='pci.2'/>
Oct  9 09:57:15 compute-1 nova_compute[162974]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Oct  9 09:57:15 compute-1 nova_compute[162974]:    </controller>
Oct  9 09:57:15 compute-1 nova_compute[162974]:    <controller type='pci' index='3' model='pcie-root-port'>
Oct  9 09:57:15 compute-1 nova_compute[162974]:      <model name='pcie-root-port'/>
Oct  9 09:57:15 compute-1 nova_compute[162974]:      <target chassis='3' port='0x12'/>
Oct  9 09:57:15 compute-1 nova_compute[162974]:      <alias name='pci.3'/>
Oct  9 09:57:15 compute-1 nova_compute[162974]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Oct  9 09:57:15 compute-1 nova_compute[162974]:    </controller>
Oct  9 09:57:15 compute-1 nova_compute[162974]:    <controller type='pci' index='4' model='pcie-root-port'>
Oct  9 09:57:15 compute-1 nova_compute[162974]:      <model name='pcie-root-port'/>
Oct  9 09:57:15 compute-1 nova_compute[162974]:      <target chassis='4' port='0x13'/>
Oct  9 09:57:15 compute-1 nova_compute[162974]:      <alias name='pci.4'/>
Oct  9 09:57:15 compute-1 nova_compute[162974]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Oct  9 09:57:15 compute-1 nova_compute[162974]:    </controller>
Oct  9 09:57:15 compute-1 nova_compute[162974]:    <controller type='pci' index='5' model='pcie-root-port'>
Oct  9 09:57:15 compute-1 nova_compute[162974]:      <model name='pcie-root-port'/>
Oct  9 09:57:15 compute-1 nova_compute[162974]:      <target chassis='5' port='0x14'/>
Oct  9 09:57:15 compute-1 nova_compute[162974]:      <alias name='pci.5'/>
Oct  9 09:57:15 compute-1 nova_compute[162974]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Oct  9 09:57:15 compute-1 nova_compute[162974]:    </controller>
Oct  9 09:57:15 compute-1 nova_compute[162974]:    <controller type='pci' index='6' model='pcie-root-port'>
Oct  9 09:57:15 compute-1 nova_compute[162974]:      <model name='pcie-root-port'/>
Oct  9 09:57:15 compute-1 nova_compute[162974]:      <target chassis='6' port='0x15'/>
Oct  9 09:57:15 compute-1 nova_compute[162974]:      <alias name='pci.6'/>
Oct  9 09:57:15 compute-1 nova_compute[162974]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Oct  9 09:57:15 compute-1 nova_compute[162974]:    </controller>
Oct  9 09:57:15 compute-1 nova_compute[162974]:    <controller type='pci' index='7' model='pcie-root-port'>
Oct  9 09:57:15 compute-1 nova_compute[162974]:      <model name='pcie-root-port'/>
Oct  9 09:57:15 compute-1 nova_compute[162974]:      <target chassis='7' port='0x16'/>
Oct  9 09:57:15 compute-1 nova_compute[162974]:      <alias name='pci.7'/>
Oct  9 09:57:15 compute-1 nova_compute[162974]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Oct  9 09:57:15 compute-1 nova_compute[162974]:    </controller>
Oct  9 09:57:15 compute-1 nova_compute[162974]:    <controller type='pci' index='8' model='pcie-root-port'>
Oct  9 09:57:15 compute-1 nova_compute[162974]:      <model name='pcie-root-port'/>
Oct  9 09:57:15 compute-1 nova_compute[162974]:      <target chassis='8' port='0x17'/>
Oct  9 09:57:15 compute-1 nova_compute[162974]:      <alias name='pci.8'/>
Oct  9 09:57:15 compute-1 nova_compute[162974]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Oct  9 09:57:15 compute-1 nova_compute[162974]:    </controller>
Oct  9 09:57:15 compute-1 nova_compute[162974]:    <controller type='pci' index='9' model='pcie-root-port'>
Oct  9 09:57:15 compute-1 nova_compute[162974]:      <model name='pcie-root-port'/>
Oct  9 09:57:15 compute-1 nova_compute[162974]:      <target chassis='9' port='0x18'/>
Oct  9 09:57:15 compute-1 nova_compute[162974]:      <alias name='pci.9'/>
Oct  9 09:57:15 compute-1 nova_compute[162974]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Oct  9 09:57:15 compute-1 nova_compute[162974]:    </controller>
Oct  9 09:57:15 compute-1 nova_compute[162974]:    <controller type='pci' index='10' model='pcie-root-port'>
Oct  9 09:57:15 compute-1 nova_compute[162974]:      <model name='pcie-root-port'/>
Oct  9 09:57:15 compute-1 nova_compute[162974]:      <target chassis='10' port='0x19'/>
Oct  9 09:57:15 compute-1 nova_compute[162974]:      <alias name='pci.10'/>
Oct  9 09:57:15 compute-1 nova_compute[162974]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Oct  9 09:57:15 compute-1 nova_compute[162974]:    </controller>
Oct  9 09:57:15 compute-1 nova_compute[162974]:    <controller type='pci' index='11' model='pcie-root-port'>
Oct  9 09:57:15 compute-1 nova_compute[162974]:      <model name='pcie-root-port'/>
Oct  9 09:57:15 compute-1 nova_compute[162974]:      <target chassis='11' port='0x1a'/>
Oct  9 09:57:15 compute-1 nova_compute[162974]:      <alias name='pci.11'/>
Oct  9 09:57:15 compute-1 nova_compute[162974]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Oct  9 09:57:15 compute-1 nova_compute[162974]:    </controller>
Oct  9 09:57:15 compute-1 nova_compute[162974]:    <controller type='pci' index='12' model='pcie-root-port'>
Oct  9 09:57:15 compute-1 nova_compute[162974]:      <model name='pcie-root-port'/>
Oct  9 09:57:15 compute-1 nova_compute[162974]:      <target chassis='12' port='0x1b'/>
Oct  9 09:57:15 compute-1 nova_compute[162974]:      <alias name='pci.12'/>
Oct  9 09:57:15 compute-1 nova_compute[162974]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Oct  9 09:57:15 compute-1 nova_compute[162974]:    </controller>
Oct  9 09:57:15 compute-1 nova_compute[162974]:    <controller type='pci' index='13' model='pcie-root-port'>
Oct  9 09:57:15 compute-1 nova_compute[162974]:      <model name='pcie-root-port'/>
Oct  9 09:57:15 compute-1 nova_compute[162974]:      <target chassis='13' port='0x1c'/>
Oct  9 09:57:15 compute-1 nova_compute[162974]:      <alias name='pci.13'/>
Oct  9 09:57:15 compute-1 nova_compute[162974]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Oct  9 09:57:15 compute-1 nova_compute[162974]:    </controller>
Oct  9 09:57:15 compute-1 nova_compute[162974]:    <controller type='pci' index='14' model='pcie-root-port'>
Oct  9 09:57:15 compute-1 nova_compute[162974]:      <model name='pcie-root-port'/>
Oct  9 09:57:15 compute-1 nova_compute[162974]:      <target chassis='14' port='0x1d'/>
Oct  9 09:57:15 compute-1 nova_compute[162974]:      <alias name='pci.14'/>
Oct  9 09:57:15 compute-1 nova_compute[162974]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Oct  9 09:57:15 compute-1 nova_compute[162974]:    </controller>
Oct  9 09:57:15 compute-1 nova_compute[162974]:    <controller type='pci' index='15' model='pcie-root-port'>
Oct  9 09:57:15 compute-1 nova_compute[162974]:      <model name='pcie-root-port'/>
Oct  9 09:57:15 compute-1 nova_compute[162974]:      <target chassis='15' port='0x1e'/>
Oct  9 09:57:15 compute-1 nova_compute[162974]:      <alias name='pci.15'/>
Oct  9 09:57:15 compute-1 nova_compute[162974]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Oct  9 09:57:15 compute-1 nova_compute[162974]:    </controller>
Oct  9 09:57:15 compute-1 nova_compute[162974]:    <controller type='pci' index='16' model='pcie-root-port'>
Oct  9 09:57:15 compute-1 nova_compute[162974]:      <model name='pcie-root-port'/>
Oct  9 09:57:15 compute-1 nova_compute[162974]:      <target chassis='16' port='0x1f'/>
Oct  9 09:57:15 compute-1 nova_compute[162974]:      <alias name='pci.16'/>
Oct  9 09:57:15 compute-1 nova_compute[162974]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Oct  9 09:57:15 compute-1 nova_compute[162974]:    </controller>
Oct  9 09:57:15 compute-1 nova_compute[162974]:    <controller type='pci' index='17' model='pcie-root-port'>
Oct  9 09:57:15 compute-1 nova_compute[162974]:      <model name='pcie-root-port'/>
Oct  9 09:57:15 compute-1 nova_compute[162974]:      <target chassis='17' port='0x20'/>
Oct  9 09:57:15 compute-1 nova_compute[162974]:      <alias name='pci.17'/>
Oct  9 09:57:15 compute-1 nova_compute[162974]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Oct  9 09:57:15 compute-1 nova_compute[162974]:    </controller>
Oct  9 09:57:15 compute-1 nova_compute[162974]:    <controller type='pci' index='18' model='pcie-root-port'>
Oct  9 09:57:15 compute-1 nova_compute[162974]:      <model name='pcie-root-port'/>
Oct  9 09:57:15 compute-1 nova_compute[162974]:      <target chassis='18' port='0x21'/>
Oct  9 09:57:15 compute-1 nova_compute[162974]:      <alias name='pci.18'/>
Oct  9 09:57:15 compute-1 nova_compute[162974]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Oct  9 09:57:15 compute-1 nova_compute[162974]:    </controller>
Oct  9 09:57:15 compute-1 nova_compute[162974]:    <controller type='pci' index='19' model='pcie-root-port'>
Oct  9 09:57:15 compute-1 nova_compute[162974]:      <model name='pcie-root-port'/>
Oct  9 09:57:15 compute-1 nova_compute[162974]:      <target chassis='19' port='0x22'/>
Oct  9 09:57:15 compute-1 nova_compute[162974]:      <alias name='pci.19'/>
Oct  9 09:57:15 compute-1 nova_compute[162974]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Oct  9 09:57:15 compute-1 nova_compute[162974]:    </controller>
Oct  9 09:57:15 compute-1 nova_compute[162974]:    <controller type='pci' index='20' model='pcie-root-port'>
Oct  9 09:57:15 compute-1 nova_compute[162974]:      <model name='pcie-root-port'/>
Oct  9 09:57:15 compute-1 nova_compute[162974]:      <target chassis='20' port='0x23'/>
Oct  9 09:57:15 compute-1 nova_compute[162974]:      <alias name='pci.20'/>
Oct  9 09:57:15 compute-1 nova_compute[162974]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Oct  9 09:57:15 compute-1 nova_compute[162974]:    </controller>
Oct  9 09:57:15 compute-1 nova_compute[162974]:    <controller type='pci' index='21' model='pcie-root-port'>
Oct  9 09:57:15 compute-1 nova_compute[162974]:      <model name='pcie-root-port'/>
Oct  9 09:57:15 compute-1 nova_compute[162974]:      <target chassis='21' port='0x24'/>
Oct  9 09:57:15 compute-1 nova_compute[162974]:      <alias name='pci.21'/>
Oct  9 09:57:15 compute-1 nova_compute[162974]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Oct  9 09:57:15 compute-1 nova_compute[162974]:    </controller>
Oct  9 09:57:15 compute-1 nova_compute[162974]:    <controller type='pci' index='22' model='pcie-root-port'>
Oct  9 09:57:15 compute-1 nova_compute[162974]:      <model name='pcie-root-port'/>
Oct  9 09:57:15 compute-1 nova_compute[162974]:      <target chassis='22' port='0x25'/>
Oct  9 09:57:15 compute-1 nova_compute[162974]:      <alias name='pci.22'/>
Oct  9 09:57:15 compute-1 nova_compute[162974]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Oct  9 09:57:15 compute-1 nova_compute[162974]:    </controller>
Oct  9 09:57:15 compute-1 nova_compute[162974]:    <controller type='pci' index='23' model='pcie-root-port'>
Oct  9 09:57:15 compute-1 nova_compute[162974]:      <model name='pcie-root-port'/>
Oct  9 09:57:15 compute-1 nova_compute[162974]:      <target chassis='23' port='0x26'/>
Oct  9 09:57:15 compute-1 nova_compute[162974]:      <alias name='pci.23'/>
Oct  9 09:57:15 compute-1 nova_compute[162974]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Oct  9 09:57:15 compute-1 nova_compute[162974]:    </controller>
Oct  9 09:57:15 compute-1 nova_compute[162974]:    <controller type='pci' index='24' model='pcie-root-port'>
Oct  9 09:57:15 compute-1 nova_compute[162974]:      <model name='pcie-root-port'/>
Oct  9 09:57:15 compute-1 nova_compute[162974]:      <target chassis='24' port='0x27'/>
Oct  9 09:57:15 compute-1 nova_compute[162974]:      <alias name='pci.24'/>
Oct  9 09:57:15 compute-1 nova_compute[162974]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Oct  9 09:57:15 compute-1 nova_compute[162974]:    </controller>
Oct  9 09:57:15 compute-1 nova_compute[162974]:    <controller type='pci' index='25' model='pcie-root-port'>
Oct  9 09:57:15 compute-1 nova_compute[162974]:      <model name='pcie-root-port'/>
Oct  9 09:57:15 compute-1 nova_compute[162974]:      <target chassis='25' port='0x28'/>
Oct  9 09:57:15 compute-1 nova_compute[162974]:      <alias name='pci.25'/>
Oct  9 09:57:15 compute-1 nova_compute[162974]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Oct  9 09:57:15 compute-1 nova_compute[162974]:    </controller>
Oct  9 09:57:15 compute-1 nova_compute[162974]:    <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Oct  9 09:57:15 compute-1 nova_compute[162974]:      <model name='pcie-pci-bridge'/>
Oct  9 09:57:15 compute-1 nova_compute[162974]:      <alias name='pci.26'/>
Oct  9 09:57:15 compute-1 nova_compute[162974]:      <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Oct  9 09:57:15 compute-1 nova_compute[162974]:    </controller>
Oct  9 09:57:15 compute-1 nova_compute[162974]:    <controller type='usb' index='0' model='piix3-uhci'>
Oct  9 09:57:15 compute-1 nova_compute[162974]:      <alias name='usb'/>
Oct  9 09:57:15 compute-1 nova_compute[162974]:      <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Oct  9 09:57:15 compute-1 nova_compute[162974]:    </controller>
Oct  9 09:57:15 compute-1 nova_compute[162974]:    <controller type='sata' index='0'>
Oct  9 09:57:15 compute-1 nova_compute[162974]:      <alias name='ide'/>
Oct  9 09:57:15 compute-1 nova_compute[162974]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Oct  9 09:57:15 compute-1 nova_compute[162974]:    </controller>
Oct  9 09:57:15 compute-1 nova_compute[162974]:    <interface type='ethernet'>
Oct  9 09:57:15 compute-1 nova_compute[162974]:      <mac address='fa:16:3e:4d:30:c8'/>
Oct  9 09:57:15 compute-1 nova_compute[162974]:      <target dev='tap8d2d29b3-65'/>
Oct  9 09:57:15 compute-1 nova_compute[162974]:      <model type='virtio'/>
Oct  9 09:57:15 compute-1 nova_compute[162974]:      <driver name='vhost' rx_queue_size='512'/>
Oct  9 09:57:15 compute-1 nova_compute[162974]:      <mtu size='1442'/>
Oct  9 09:57:15 compute-1 nova_compute[162974]:      <alias name='net0'/>
Oct  9 09:57:15 compute-1 nova_compute[162974]:      <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Oct  9 09:57:15 compute-1 nova_compute[162974]:    </interface>
Oct  9 09:57:15 compute-1 nova_compute[162974]:    <serial type='pty'>
Oct  9 09:57:15 compute-1 nova_compute[162974]:      <source path='/dev/pts/0'/>
Oct  9 09:57:15 compute-1 nova_compute[162974]:      <log file='/var/lib/nova/instances/e9c8bd36-8fc8-4412-a6ea-b5b3e5b79f01/console.log' append='off'/>
Oct  9 09:57:15 compute-1 nova_compute[162974]:      <target type='isa-serial' port='0'>
Oct  9 09:57:15 compute-1 nova_compute[162974]:        <model name='isa-serial'/>
Oct  9 09:57:15 compute-1 nova_compute[162974]:      </target>
Oct  9 09:57:15 compute-1 nova_compute[162974]:      <alias name='serial0'/>
Oct  9 09:57:15 compute-1 nova_compute[162974]:    </serial>
Oct  9 09:57:15 compute-1 nova_compute[162974]:    <console type='pty' tty='/dev/pts/0'>
Oct  9 09:57:15 compute-1 nova_compute[162974]:      <source path='/dev/pts/0'/>
Oct  9 09:57:15 compute-1 nova_compute[162974]:      <log file='/var/lib/nova/instances/e9c8bd36-8fc8-4412-a6ea-b5b3e5b79f01/console.log' append='off'/>
Oct  9 09:57:15 compute-1 nova_compute[162974]:      <target type='serial' port='0'/>
Oct  9 09:57:15 compute-1 nova_compute[162974]:      <alias name='serial0'/>
Oct  9 09:57:15 compute-1 nova_compute[162974]:    </console>
Oct  9 09:57:15 compute-1 nova_compute[162974]:    <input type='tablet' bus='usb'>
Oct  9 09:57:15 compute-1 nova_compute[162974]:      <alias name='input0'/>
Oct  9 09:57:15 compute-1 nova_compute[162974]:      <address type='usb' bus='0' port='1'/>
Oct  9 09:57:15 compute-1 nova_compute[162974]:    </input>
Oct  9 09:57:15 compute-1 nova_compute[162974]:    <input type='mouse' bus='ps2'>
Oct  9 09:57:15 compute-1 nova_compute[162974]:      <alias name='input1'/>
Oct  9 09:57:15 compute-1 nova_compute[162974]:    </input>
Oct  9 09:57:15 compute-1 nova_compute[162974]:    <input type='keyboard' bus='ps2'>
Oct  9 09:57:15 compute-1 nova_compute[162974]:      <alias name='input2'/>
Oct  9 09:57:15 compute-1 nova_compute[162974]:    </input>
Oct  9 09:57:15 compute-1 nova_compute[162974]:    <graphics type='vnc' port='5900' autoport='yes' listen='::0'>
Oct  9 09:57:15 compute-1 nova_compute[162974]:      <listen type='address' address='::0'/>
Oct  9 09:57:15 compute-1 nova_compute[162974]:    </graphics>
Oct  9 09:57:15 compute-1 nova_compute[162974]:    <audio id='1' type='none'/>
Oct  9 09:57:15 compute-1 nova_compute[162974]:    <video>
Oct  9 09:57:15 compute-1 nova_compute[162974]:      <model type='virtio' heads='1' primary='yes'/>
Oct  9 09:57:15 compute-1 nova_compute[162974]:      <alias name='video0'/>
Oct  9 09:57:15 compute-1 nova_compute[162974]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Oct  9 09:57:15 compute-1 nova_compute[162974]:    </video>
Oct  9 09:57:15 compute-1 nova_compute[162974]:    <watchdog model='itco' action='reset'>
Oct  9 09:57:15 compute-1 nova_compute[162974]:      <alias name='watchdog0'/>
Oct  9 09:57:15 compute-1 nova_compute[162974]:    </watchdog>
Oct  9 09:57:15 compute-1 nova_compute[162974]:    <memballoon model='virtio'>
Oct  9 09:57:15 compute-1 nova_compute[162974]:      <stats period='10'/>
Oct  9 09:57:15 compute-1 nova_compute[162974]:      <alias name='balloon0'/>
Oct  9 09:57:15 compute-1 nova_compute[162974]:      <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Oct  9 09:57:15 compute-1 nova_compute[162974]:    </memballoon>
Oct  9 09:57:15 compute-1 nova_compute[162974]:    <rng model='virtio'>
Oct  9 09:57:15 compute-1 nova_compute[162974]:      <backend model='random'>/dev/urandom</backend>
Oct  9 09:57:15 compute-1 nova_compute[162974]:      <alias name='rng0'/>
Oct  9 09:57:15 compute-1 nova_compute[162974]:      <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Oct  9 09:57:15 compute-1 nova_compute[162974]:    </rng>
Oct  9 09:57:15 compute-1 nova_compute[162974]:  </devices>
Oct  9 09:57:15 compute-1 nova_compute[162974]:  <seclabel type='dynamic' model='selinux' relabel='yes'>
Oct  9 09:57:15 compute-1 nova_compute[162974]:    <label>system_u:system_r:svirt_t:s0:c214,c252</label>
Oct  9 09:57:15 compute-1 nova_compute[162974]:    <imagelabel>system_u:object_r:svirt_image_t:s0:c214,c252</imagelabel>
Oct  9 09:57:15 compute-1 nova_compute[162974]:  </seclabel>
Oct  9 09:57:15 compute-1 nova_compute[162974]:  <seclabel type='dynamic' model='dac' relabel='yes'>
Oct  9 09:57:15 compute-1 nova_compute[162974]:    <label>+107:+107</label>
Oct  9 09:57:15 compute-1 nova_compute[162974]:    <imagelabel>+107:+107</imagelabel>
Oct  9 09:57:15 compute-1 nova_compute[162974]:  </seclabel>
Oct  9 09:57:15 compute-1 nova_compute[162974]: </domain>
Oct  9 09:57:15 compute-1 nova_compute[162974]: get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282
Oct  9 09:57:15 compute-1 nova_compute[162974]: 2025-10-09 09:57:15.317 2 INFO nova.virt.libvirt.driver [None req-c01782dc-90ba-4236-8829-01c3872c87cd 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Successfully detached device tap73007432-5b from instance e9c8bd36-8fc8-4412-a6ea-b5b3e5b79f01 from the live domain config.
Oct  9 09:57:15 compute-1 nova_compute[162974]: 2025-10-09 09:57:15.317 2 DEBUG nova.virt.libvirt.vif [None req-c01782dc-90ba-4236-8829-01c3872c87cd 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-09T09:56:40Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-61543066',display_name='tempest-TestNetworkBasicOps-server-61543066',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-61543066',id=3,image_ref='9546778e-959c-466e-9bef-81ace5bd1cc5',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBOnE/dh71/I//FMlnppXDYKeeVJI2AqRfz3zTsFDUtMRPxSA9tfNCqu4Aqk04nGOjV/84C+cdkyXsPC0ZVfjXVfqYm026xBvCeeUUr4XUs/4snX/KNbtJXkvo3sUoZJ5aQ==',key_name='tempest-TestNetworkBasicOps-764715585',keypairs=<?>,launch_index=0,launched_at=2025-10-09T09:56:47Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='c69d102fb5504f48809f5fc47f1cb831',ramdisk_id='',reservation_id='r-r33pvc0t',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='9546778e-959c-466e-9bef-81ace5bd1cc5',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-74406332',owner_user_name='tempest-TestNetworkBasicOps-74406332-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-10-09T09:56:47Z,user_data=None,user_id='2351e05157514d1995a1ea4151d12fee',uuid=e9c8bd36-8fc8-4412-a6ea-b5b3e5b79f01,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "73007432-5bb0-435a-a871-05f59846a277", "address": "fa:16:3e:78:96:4c", "network": {"id": "3c62a73d-d0d8-493b-b929-9ae564924767", "bridge": "br-int", "label": "tempest-network-smoke--1483077116", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.19", "type": "fixed", "version": 4, "meta": 
{}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c69d102fb5504f48809f5fc47f1cb831", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap73007432-5b", "ovs_interfaceid": "73007432-5bb0-435a-a871-05f59846a277", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  9 09:57:15 compute-1 nova_compute[162974]: 2025-10-09 09:57:15.318 2 DEBUG nova.network.os_vif_util [None req-c01782dc-90ba-4236-8829-01c3872c87cd 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Converting VIF {"id": "73007432-5bb0-435a-a871-05f59846a277", "address": "fa:16:3e:78:96:4c", "network": {"id": "3c62a73d-d0d8-493b-b929-9ae564924767", "bridge": "br-int", "label": "tempest-network-smoke--1483077116", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.19", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c69d102fb5504f48809f5fc47f1cb831", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap73007432-5b", "ovs_interfaceid": "73007432-5bb0-435a-a871-05f59846a277", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  9 09:57:15 compute-1 nova_compute[162974]: 2025-10-09 09:57:15.318 2 DEBUG nova.network.os_vif_util [None req-c01782dc-90ba-4236-8829-01c3872c87cd 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:78:96:4c,bridge_name='br-int',has_traffic_filtering=True,id=73007432-5bb0-435a-a871-05f59846a277,network=Network(3c62a73d-d0d8-493b-b929-9ae564924767),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap73007432-5b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  9 09:57:15 compute-1 nova_compute[162974]: 2025-10-09 09:57:15.319 2 DEBUG os_vif [None req-c01782dc-90ba-4236-8829-01c3872c87cd 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:78:96:4c,bridge_name='br-int',has_traffic_filtering=True,id=73007432-5bb0-435a-a871-05f59846a277,network=Network(3c62a73d-d0d8-493b-b929-9ae564924767),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap73007432-5b') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  9 09:57:15 compute-1 nova_compute[162974]: 2025-10-09 09:57:15.320 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 09:57:15 compute-1 nova_compute[162974]: 2025-10-09 09:57:15.320 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap73007432-5b, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  9 09:57:15 compute-1 nova_compute[162974]: 2025-10-09 09:57:15.321 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 09:57:15 compute-1 nova_compute[162974]: 2025-10-09 09:57:15.322 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 09:57:15 compute-1 nova_compute[162974]: 2025-10-09 09:57:15.324 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  9 09:57:15 compute-1 nova_compute[162974]: 2025-10-09 09:57:15.328 2 INFO os_vif [None req-c01782dc-90ba-4236-8829-01c3872c87cd 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:78:96:4c,bridge_name='br-int',has_traffic_filtering=True,id=73007432-5bb0-435a-a871-05f59846a277,network=Network(3c62a73d-d0d8-493b-b929-9ae564924767),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap73007432-5b')#033[00m
Oct  9 09:57:15 compute-1 nova_compute[162974]: 2025-10-09 09:57:15.328 2 DEBUG nova.virt.libvirt.guest [None req-c01782dc-90ba-4236-8829-01c3872c87cd 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  9 09:57:15 compute-1 nova_compute[162974]:  <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  9 09:57:15 compute-1 nova_compute[162974]:  <nova:name>tempest-TestNetworkBasicOps-server-61543066</nova:name>
Oct  9 09:57:15 compute-1 nova_compute[162974]:  <nova:creationTime>2025-10-09 09:57:15</nova:creationTime>
Oct  9 09:57:15 compute-1 nova_compute[162974]:  <nova:flavor name="m1.nano">
Oct  9 09:57:15 compute-1 nova_compute[162974]:    <nova:memory>128</nova:memory>
Oct  9 09:57:15 compute-1 nova_compute[162974]:    <nova:disk>1</nova:disk>
Oct  9 09:57:15 compute-1 nova_compute[162974]:    <nova:swap>0</nova:swap>
Oct  9 09:57:15 compute-1 nova_compute[162974]:    <nova:ephemeral>0</nova:ephemeral>
Oct  9 09:57:15 compute-1 nova_compute[162974]:    <nova:vcpus>1</nova:vcpus>
Oct  9 09:57:15 compute-1 nova_compute[162974]:  </nova:flavor>
Oct  9 09:57:15 compute-1 nova_compute[162974]:  <nova:owner>
Oct  9 09:57:15 compute-1 nova_compute[162974]:    <nova:user uuid="2351e05157514d1995a1ea4151d12fee">tempest-TestNetworkBasicOps-74406332-project-member</nova:user>
Oct  9 09:57:15 compute-1 nova_compute[162974]:    <nova:project uuid="c69d102fb5504f48809f5fc47f1cb831">tempest-TestNetworkBasicOps-74406332</nova:project>
Oct  9 09:57:15 compute-1 nova_compute[162974]:  </nova:owner>
Oct  9 09:57:15 compute-1 nova_compute[162974]:  <nova:root type="image" uuid="9546778e-959c-466e-9bef-81ace5bd1cc5"/>
Oct  9 09:57:15 compute-1 nova_compute[162974]:  <nova:ports>
Oct  9 09:57:15 compute-1 nova_compute[162974]:    <nova:port uuid="8d2d29b3-6509-4de5-a0a2-f7ef942f9a7f">
Oct  9 09:57:15 compute-1 nova_compute[162974]:      <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Oct  9 09:57:15 compute-1 nova_compute[162974]:    </nova:port>
Oct  9 09:57:15 compute-1 nova_compute[162974]:  </nova:ports>
Oct  9 09:57:15 compute-1 nova_compute[162974]: </nova:instance>
Oct  9 09:57:15 compute-1 nova_compute[162974]: set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359#033[00m
Oct  9 09:57:15 compute-1 neutron-haproxy-ovnmeta-3c62a73d-d0d8-493b-b929-9ae564924767[167047]: [NOTICE]   (167052) : haproxy version is 2.8.14-c23fe91
Oct  9 09:57:15 compute-1 neutron-haproxy-ovnmeta-3c62a73d-d0d8-493b-b929-9ae564924767[167047]: [NOTICE]   (167052) : path to executable is /usr/sbin/haproxy
Oct  9 09:57:15 compute-1 neutron-haproxy-ovnmeta-3c62a73d-d0d8-493b-b929-9ae564924767[167047]: [WARNING]  (167052) : Exiting Master process...
Oct  9 09:57:15 compute-1 neutron-haproxy-ovnmeta-3c62a73d-d0d8-493b-b929-9ae564924767[167047]: [ALERT]    (167052) : Current worker (167054) exited with code 143 (Terminated)
Oct  9 09:57:15 compute-1 neutron-haproxy-ovnmeta-3c62a73d-d0d8-493b-b929-9ae564924767[167047]: [WARNING]  (167052) : All workers exited. Exiting... (0)
Oct  9 09:57:15 compute-1 systemd[1]: libpod-bd2151fd7758620b1adee733735ad9c14742b4f8fb2f9dcfbcd20c6617cab7f8.scope: Deactivated successfully.
Oct  9 09:57:15 compute-1 podman[167078]: 2025-10-09 09:57:15.428548202 +0000 UTC m=+0.035297926 container stop bd2151fd7758620b1adee733735ad9c14742b4f8fb2f9dcfbcd20c6617cab7f8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-3c62a73d-d0d8-493b-b929-9ae564924767, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team)
Oct  9 09:57:15 compute-1 conmon[167047]: conmon bd2151fd7758620b1ade <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-bd2151fd7758620b1adee733735ad9c14742b4f8fb2f9dcfbcd20c6617cab7f8.scope/container/memory.events
Oct  9 09:57:15 compute-1 podman[167078]: 2025-10-09 09:57:15.43405669 +0000 UTC m=+0.040806453 container died bd2151fd7758620b1adee733735ad9c14742b4f8fb2f9dcfbcd20c6617cab7f8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-3c62a73d-d0d8-493b-b929-9ae564924767, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct  9 09:57:15 compute-1 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-bd2151fd7758620b1adee733735ad9c14742b4f8fb2f9dcfbcd20c6617cab7f8-userdata-shm.mount: Deactivated successfully.
Oct  9 09:57:15 compute-1 systemd[1]: var-lib-containers-storage-overlay-9d101d488314606cd0605ba833244519bc1b143a95501ac83d26953afcdb780b-merged.mount: Deactivated successfully.
Oct  9 09:57:15 compute-1 podman[167078]: 2025-10-09 09:57:15.462833797 +0000 UTC m=+0.069583520 container cleanup bd2151fd7758620b1adee733735ad9c14742b4f8fb2f9dcfbcd20c6617cab7f8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-3c62a73d-d0d8-493b-b929-9ae564924767, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Oct  9 09:57:15 compute-1 systemd[1]: libpod-conmon-bd2151fd7758620b1adee733735ad9c14742b4f8fb2f9dcfbcd20c6617cab7f8.scope: Deactivated successfully.
Oct  9 09:57:15 compute-1 podman[167103]: 2025-10-09 09:57:15.526136644 +0000 UTC m=+0.036002676 container remove bd2151fd7758620b1adee733735ad9c14742b4f8fb2f9dcfbcd20c6617cab7f8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-3c62a73d-d0d8-493b-b929-9ae564924767, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Oct  9 09:57:15 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:57:15.532 165637 DEBUG oslo.privsep.daemon [-] privsep: reply[714b36ea-eeda-402e-b9a9-db88f36d0ebb]: (4, ('Thu Oct  9 09:57:15 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-3c62a73d-d0d8-493b-b929-9ae564924767 (bd2151fd7758620b1adee733735ad9c14742b4f8fb2f9dcfbcd20c6617cab7f8)\nbd2151fd7758620b1adee733735ad9c14742b4f8fb2f9dcfbcd20c6617cab7f8\nThu Oct  9 09:57:15 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-3c62a73d-d0d8-493b-b929-9ae564924767 (bd2151fd7758620b1adee733735ad9c14742b4f8fb2f9dcfbcd20c6617cab7f8)\nbd2151fd7758620b1adee733735ad9c14742b4f8fb2f9dcfbcd20c6617cab7f8\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  9 09:57:15 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:57:15.534 165637 DEBUG oslo.privsep.daemon [-] privsep: reply[40f5f5e0-9a57-456c-a40e-dac1c9f9eecb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  9 09:57:15 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:57:15.535 71059 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3c62a73d-d0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  9 09:57:15 compute-1 kernel: tap3c62a73d-d0: left promiscuous mode
Oct  9 09:57:15 compute-1 nova_compute[162974]: 2025-10-09 09:57:15.537 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 09:57:15 compute-1 nova_compute[162974]: 2025-10-09 09:57:15.551 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 09:57:15 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:57:15.554 165637 DEBUG oslo.privsep.daemon [-] privsep: reply[6faeddc0-a52e-4d24-bd5b-a8cdae1e285a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  9 09:57:15 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:57:15.575 165637 DEBUG oslo.privsep.daemon [-] privsep: reply[c0755656-f3eb-4e78-a3e4-753c780a0e7a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  9 09:57:15 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:57:15.576 165637 DEBUG oslo.privsep.daemon [-] privsep: reply[c4445cb1-c35f-4076-9342-6b47bcdf7402]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  9 09:57:15 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:57:15.591 165637 DEBUG oslo.privsep.daemon [-] privsep: reply[db67513d-e449-4826-b8c8-3a38629f9d17]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 151682, 'reachable_time': 42324, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 167121, 'error': None, 'target': 'ovnmeta-3c62a73d-d0d8-493b-b929-9ae564924767', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  9 09:57:15 compute-1 systemd[1]: run-netns-ovnmeta\x2d3c62a73d\x2dd0d8\x2d493b\x2db929\x2d9ae564924767.mount: Deactivated successfully.
Oct  9 09:57:15 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:57:15.595 71273 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-3c62a73d-d0d8-493b-b929-9ae564924767 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct  9 09:57:15 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:57:15.595 71273 DEBUG oslo.privsep.daemon [-] privsep: reply[b2822fa7-2ee2-479c-8405-97dbc16b28c5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  9 09:57:15 compute-1 podman[167112]: 2025-10-09 09:57:15.665502009 +0000 UTC m=+0.088024717 container health_status dd7a5da700b8ba72ceba21d69aecf00384a339e317089fce3f8212a1183599aa (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=iscsid, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251001)
Oct  9 09:57:15 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:57:15 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:57:15 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:57:15 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:57:15.916 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:57:16 compute-1 nova_compute[162974]: 2025-10-09 09:57:16.311 2 DEBUG oslo_concurrency.lockutils [None req-c01782dc-90ba-4236-8829-01c3872c87cd 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Acquiring lock "refresh_cache-e9c8bd36-8fc8-4412-a6ea-b5b3e5b79f01" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  9 09:57:16 compute-1 nova_compute[162974]: 2025-10-09 09:57:16.311 2 DEBUG oslo_concurrency.lockutils [None req-c01782dc-90ba-4236-8829-01c3872c87cd 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Acquired lock "refresh_cache-e9c8bd36-8fc8-4412-a6ea-b5b3e5b79f01" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  9 09:57:16 compute-1 nova_compute[162974]: 2025-10-09 09:57:16.312 2 DEBUG nova.network.neutron [None req-c01782dc-90ba-4236-8829-01c3872c87cd 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: e9c8bd36-8fc8-4412-a6ea-b5b3e5b79f01] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  9 09:57:16 compute-1 nova_compute[162974]: 2025-10-09 09:57:16.330 2 DEBUG nova.compute.manager [req-77d2b908-0f98-4af6-8802-2c9fbed94910 req-99c58aa0-f994-4e2b-8a69-6aee2c2590b7 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: e9c8bd36-8fc8-4412-a6ea-b5b3e5b79f01] Received event network-vif-plugged-73007432-5bb0-435a-a871-05f59846a277 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  9 09:57:16 compute-1 nova_compute[162974]: 2025-10-09 09:57:16.330 2 DEBUG oslo_concurrency.lockutils [req-77d2b908-0f98-4af6-8802-2c9fbed94910 req-99c58aa0-f994-4e2b-8a69-6aee2c2590b7 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] Acquiring lock "e9c8bd36-8fc8-4412-a6ea-b5b3e5b79f01-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  9 09:57:16 compute-1 nova_compute[162974]: 2025-10-09 09:57:16.330 2 DEBUG oslo_concurrency.lockutils [req-77d2b908-0f98-4af6-8802-2c9fbed94910 req-99c58aa0-f994-4e2b-8a69-6aee2c2590b7 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] Lock "e9c8bd36-8fc8-4412-a6ea-b5b3e5b79f01-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  9 09:57:16 compute-1 nova_compute[162974]: 2025-10-09 09:57:16.331 2 DEBUG oslo_concurrency.lockutils [req-77d2b908-0f98-4af6-8802-2c9fbed94910 req-99c58aa0-f994-4e2b-8a69-6aee2c2590b7 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] Lock "e9c8bd36-8fc8-4412-a6ea-b5b3e5b79f01-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  9 09:57:16 compute-1 nova_compute[162974]: 2025-10-09 09:57:16.331 2 DEBUG nova.compute.manager [req-77d2b908-0f98-4af6-8802-2c9fbed94910 req-99c58aa0-f994-4e2b-8a69-6aee2c2590b7 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: e9c8bd36-8fc8-4412-a6ea-b5b3e5b79f01] No waiting events found dispatching network-vif-plugged-73007432-5bb0-435a-a871-05f59846a277 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  9 09:57:16 compute-1 nova_compute[162974]: 2025-10-09 09:57:16.331 2 WARNING nova.compute.manager [req-77d2b908-0f98-4af6-8802-2c9fbed94910 req-99c58aa0-f994-4e2b-8a69-6aee2c2590b7 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: e9c8bd36-8fc8-4412-a6ea-b5b3e5b79f01] Received unexpected event network-vif-plugged-73007432-5bb0-435a-a871-05f59846a277 for instance with vm_state active and task_state None.#033[00m
Oct  9 09:57:16 compute-1 nova_compute[162974]: 2025-10-09 09:57:16.331 2 DEBUG nova.compute.manager [req-77d2b908-0f98-4af6-8802-2c9fbed94910 req-99c58aa0-f994-4e2b-8a69-6aee2c2590b7 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: e9c8bd36-8fc8-4412-a6ea-b5b3e5b79f01] Received event network-vif-unplugged-73007432-5bb0-435a-a871-05f59846a277 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  9 09:57:16 compute-1 nova_compute[162974]: 2025-10-09 09:57:16.331 2 DEBUG oslo_concurrency.lockutils [req-77d2b908-0f98-4af6-8802-2c9fbed94910 req-99c58aa0-f994-4e2b-8a69-6aee2c2590b7 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] Acquiring lock "e9c8bd36-8fc8-4412-a6ea-b5b3e5b79f01-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  9 09:57:16 compute-1 nova_compute[162974]: 2025-10-09 09:57:16.332 2 DEBUG oslo_concurrency.lockutils [req-77d2b908-0f98-4af6-8802-2c9fbed94910 req-99c58aa0-f994-4e2b-8a69-6aee2c2590b7 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] Lock "e9c8bd36-8fc8-4412-a6ea-b5b3e5b79f01-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  9 09:57:16 compute-1 nova_compute[162974]: 2025-10-09 09:57:16.332 2 DEBUG oslo_concurrency.lockutils [req-77d2b908-0f98-4af6-8802-2c9fbed94910 req-99c58aa0-f994-4e2b-8a69-6aee2c2590b7 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] Lock "e9c8bd36-8fc8-4412-a6ea-b5b3e5b79f01-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  9 09:57:16 compute-1 nova_compute[162974]: 2025-10-09 09:57:16.332 2 DEBUG nova.compute.manager [req-77d2b908-0f98-4af6-8802-2c9fbed94910 req-99c58aa0-f994-4e2b-8a69-6aee2c2590b7 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: e9c8bd36-8fc8-4412-a6ea-b5b3e5b79f01] No waiting events found dispatching network-vif-unplugged-73007432-5bb0-435a-a871-05f59846a277 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  9 09:57:16 compute-1 nova_compute[162974]: 2025-10-09 09:57:16.332 2 WARNING nova.compute.manager [req-77d2b908-0f98-4af6-8802-2c9fbed94910 req-99c58aa0-f994-4e2b-8a69-6aee2c2590b7 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: e9c8bd36-8fc8-4412-a6ea-b5b3e5b79f01] Received unexpected event network-vif-unplugged-73007432-5bb0-435a-a871-05f59846a277 for instance with vm_state active and task_state None.#033[00m
Oct  9 09:57:16 compute-1 nova_compute[162974]: 2025-10-09 09:57:16.332 2 DEBUG nova.compute.manager [req-77d2b908-0f98-4af6-8802-2c9fbed94910 req-99c58aa0-f994-4e2b-8a69-6aee2c2590b7 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: e9c8bd36-8fc8-4412-a6ea-b5b3e5b79f01] Received event network-vif-plugged-73007432-5bb0-435a-a871-05f59846a277 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  9 09:57:16 compute-1 nova_compute[162974]: 2025-10-09 09:57:16.333 2 DEBUG oslo_concurrency.lockutils [req-77d2b908-0f98-4af6-8802-2c9fbed94910 req-99c58aa0-f994-4e2b-8a69-6aee2c2590b7 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] Acquiring lock "e9c8bd36-8fc8-4412-a6ea-b5b3e5b79f01-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  9 09:57:16 compute-1 nova_compute[162974]: 2025-10-09 09:57:16.333 2 DEBUG oslo_concurrency.lockutils [req-77d2b908-0f98-4af6-8802-2c9fbed94910 req-99c58aa0-f994-4e2b-8a69-6aee2c2590b7 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] Lock "e9c8bd36-8fc8-4412-a6ea-b5b3e5b79f01-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  9 09:57:16 compute-1 nova_compute[162974]: 2025-10-09 09:57:16.333 2 DEBUG oslo_concurrency.lockutils [req-77d2b908-0f98-4af6-8802-2c9fbed94910 req-99c58aa0-f994-4e2b-8a69-6aee2c2590b7 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] Lock "e9c8bd36-8fc8-4412-a6ea-b5b3e5b79f01-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  9 09:57:16 compute-1 nova_compute[162974]: 2025-10-09 09:57:16.333 2 DEBUG nova.compute.manager [req-77d2b908-0f98-4af6-8802-2c9fbed94910 req-99c58aa0-f994-4e2b-8a69-6aee2c2590b7 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: e9c8bd36-8fc8-4412-a6ea-b5b3e5b79f01] No waiting events found dispatching network-vif-plugged-73007432-5bb0-435a-a871-05f59846a277 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  9 09:57:16 compute-1 nova_compute[162974]: 2025-10-09 09:57:16.333 2 WARNING nova.compute.manager [req-77d2b908-0f98-4af6-8802-2c9fbed94910 req-99c58aa0-f994-4e2b-8a69-6aee2c2590b7 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: e9c8bd36-8fc8-4412-a6ea-b5b3e5b79f01] Received unexpected event network-vif-plugged-73007432-5bb0-435a-a871-05f59846a277 for instance with vm_state active and task_state None.#033[00m
Oct  9 09:57:16 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:57:16 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 09:57:16 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:57:16.765 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 09:57:17 compute-1 nova_compute[162974]: 2025-10-09 09:57:17.630 2 INFO nova.network.neutron [None req-c01782dc-90ba-4236-8829-01c3872c87cd 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: e9c8bd36-8fc8-4412-a6ea-b5b3e5b79f01] Port 73007432-5bb0-435a-a871-05f59846a277 from network info_cache is no longer associated with instance in Neutron. Removing from network info_cache.#033[00m
Oct  9 09:57:17 compute-1 nova_compute[162974]: 2025-10-09 09:57:17.630 2 DEBUG nova.network.neutron [None req-c01782dc-90ba-4236-8829-01c3872c87cd 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: e9c8bd36-8fc8-4412-a6ea-b5b3e5b79f01] Updating instance_info_cache with network_info: [{"id": "8d2d29b3-6509-4de5-a0a2-f7ef942f9a7f", "address": "fa:16:3e:4d:30:c8", "network": {"id": "26c660ed-37e9-4f44-b603-3901342edf9b", "bridge": "br-int", "label": "tempest-network-smoke--903239616", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.219", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c69d102fb5504f48809f5fc47f1cb831", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8d2d29b3-65", "ovs_interfaceid": "8d2d29b3-6509-4de5-a0a2-f7ef942f9a7f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  9 09:57:17 compute-1 nova_compute[162974]: 2025-10-09 09:57:17.642 2 DEBUG oslo_concurrency.lockutils [None req-c01782dc-90ba-4236-8829-01c3872c87cd 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Releasing lock "refresh_cache-e9c8bd36-8fc8-4412-a6ea-b5b3e5b79f01" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  9 09:57:17 compute-1 nova_compute[162974]: 2025-10-09 09:57:17.656 2 DEBUG oslo_concurrency.lockutils [None req-c01782dc-90ba-4236-8829-01c3872c87cd 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Lock "interface-e9c8bd36-8fc8-4412-a6ea-b5b3e5b79f01-73007432-5bb0-435a-a871-05f59846a277" "released" by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" :: held 2.500s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  9 09:57:17 compute-1 nova_compute[162974]: 2025-10-09 09:57:17.685 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 09:57:17 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:57:17 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:57:17 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:57:17.919 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:57:18 compute-1 ovn_controller[62080]: 2025-10-09T09:57:18Z|00051|binding|INFO|Releasing lport 57354100-1abc-4399-a76b-c42eaec1ad73 from this chassis (sb_readonly=0)
Oct  9 09:57:18 compute-1 nova_compute[162974]: 2025-10-09 09:57:18.051 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 09:57:18 compute-1 nova_compute[162974]: 2025-10-09 09:57:18.418 2 DEBUG nova.compute.manager [req-7445ef2f-eba9-4340-83cb-6470982e4f3f req-3eb1609a-c1f0-45b0-a978-7fea2bb9192f b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: e9c8bd36-8fc8-4412-a6ea-b5b3e5b79f01] Received event network-vif-deleted-73007432-5bb0-435a-a871-05f59846a277 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  9 09:57:18 compute-1 nova_compute[162974]: 2025-10-09 09:57:18.535 2 DEBUG oslo_concurrency.lockutils [None req-5f0189fa-2a02-418c-868c-5744b6d06164 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Acquiring lock "e9c8bd36-8fc8-4412-a6ea-b5b3e5b79f01" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  9 09:57:18 compute-1 nova_compute[162974]: 2025-10-09 09:57:18.535 2 DEBUG oslo_concurrency.lockutils [None req-5f0189fa-2a02-418c-868c-5744b6d06164 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Lock "e9c8bd36-8fc8-4412-a6ea-b5b3e5b79f01" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  9 09:57:18 compute-1 nova_compute[162974]: 2025-10-09 09:57:18.536 2 DEBUG oslo_concurrency.lockutils [None req-5f0189fa-2a02-418c-868c-5744b6d06164 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Acquiring lock "e9c8bd36-8fc8-4412-a6ea-b5b3e5b79f01-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  9 09:57:18 compute-1 nova_compute[162974]: 2025-10-09 09:57:18.536 2 DEBUG oslo_concurrency.lockutils [None req-5f0189fa-2a02-418c-868c-5744b6d06164 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Lock "e9c8bd36-8fc8-4412-a6ea-b5b3e5b79f01-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  9 09:57:18 compute-1 nova_compute[162974]: 2025-10-09 09:57:18.536 2 DEBUG oslo_concurrency.lockutils [None req-5f0189fa-2a02-418c-868c-5744b6d06164 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Lock "e9c8bd36-8fc8-4412-a6ea-b5b3e5b79f01-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  9 09:57:18 compute-1 nova_compute[162974]: 2025-10-09 09:57:18.538 2 INFO nova.compute.manager [None req-5f0189fa-2a02-418c-868c-5744b6d06164 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: e9c8bd36-8fc8-4412-a6ea-b5b3e5b79f01] Terminating instance#033[00m
Oct  9 09:57:18 compute-1 nova_compute[162974]: 2025-10-09 09:57:18.539 2 DEBUG nova.compute.manager [None req-5f0189fa-2a02-418c-868c-5744b6d06164 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: e9c8bd36-8fc8-4412-a6ea-b5b3e5b79f01] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct  9 09:57:18 compute-1 kernel: tap8d2d29b3-65 (unregistering): left promiscuous mode
Oct  9 09:57:18 compute-1 NetworkManager[982]: <info>  [1760003838.5755] device (tap8d2d29b3-65): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  9 09:57:18 compute-1 nova_compute[162974]: 2025-10-09 09:57:18.583 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 09:57:18 compute-1 ovn_controller[62080]: 2025-10-09T09:57:18Z|00052|binding|INFO|Releasing lport 8d2d29b3-6509-4de5-a0a2-f7ef942f9a7f from this chassis (sb_readonly=0)
Oct  9 09:57:18 compute-1 ovn_controller[62080]: 2025-10-09T09:57:18Z|00053|binding|INFO|Setting lport 8d2d29b3-6509-4de5-a0a2-f7ef942f9a7f down in Southbound
Oct  9 09:57:18 compute-1 ovn_controller[62080]: 2025-10-09T09:57:18Z|00054|binding|INFO|Removing iface tap8d2d29b3-65 ovn-installed in OVS
Oct  9 09:57:18 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:57:18.588 71059 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:4d:30:c8 10.100.0.7'], port_security=['fa:16:3e:4d:30:c8 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': 'e9c8bd36-8fc8-4412-a6ea-b5b3e5b79f01', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-26c660ed-37e9-4f44-b603-3901342edf9b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c69d102fb5504f48809f5fc47f1cb831', 'neutron:revision_number': '4', 'neutron:security_group_ids': '2fd66aef-c4b5-4f4c-ae18-6ccc210d224e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=4edfdfe9-a5ca-4224-9930-4324a48b984f, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcc797b4850>], logical_port=8d2d29b3-6509-4de5-a0a2-f7ef942f9a7f) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fcc797b4850>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  9 09:57:18 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:57:18.589 71059 INFO neutron.agent.ovn.metadata.agent [-] Port 8d2d29b3-6509-4de5-a0a2-f7ef942f9a7f in datapath 26c660ed-37e9-4f44-b603-3901342edf9b unbound from our chassis#033[00m
Oct  9 09:57:18 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:57:18.590 71059 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 26c660ed-37e9-4f44-b603-3901342edf9b, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct  9 09:57:18 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:57:18.591 165637 DEBUG oslo.privsep.daemon [-] privsep: reply[889646e7-a501-4582-9582-e226039fb146]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  9 09:57:18 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:57:18.593 71059 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-26c660ed-37e9-4f44-b603-3901342edf9b namespace which is not needed anymore#033[00m
Oct  9 09:57:18 compute-1 nova_compute[162974]: 2025-10-09 09:57:18.608 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 09:57:18 compute-1 systemd[1]: machine-qemu\x2d2\x2dinstance\x2d00000003.scope: Deactivated successfully.
Oct  9 09:57:18 compute-1 systemd[1]: machine-qemu\x2d2\x2dinstance\x2d00000003.scope: Consumed 11.310s CPU time.
Oct  9 09:57:18 compute-1 systemd-machined[120683]: Machine qemu-2-instance-00000003 terminated.
Oct  9 09:57:18 compute-1 neutron-haproxy-ovnmeta-26c660ed-37e9-4f44-b603-3901342edf9b[166691]: [NOTICE]   (166695) : haproxy version is 2.8.14-c23fe91
Oct  9 09:57:18 compute-1 neutron-haproxy-ovnmeta-26c660ed-37e9-4f44-b603-3901342edf9b[166691]: [NOTICE]   (166695) : path to executable is /usr/sbin/haproxy
Oct  9 09:57:18 compute-1 neutron-haproxy-ovnmeta-26c660ed-37e9-4f44-b603-3901342edf9b[166691]: [WARNING]  (166695) : Exiting Master process...
Oct  9 09:57:18 compute-1 neutron-haproxy-ovnmeta-26c660ed-37e9-4f44-b603-3901342edf9b[166691]: [ALERT]    (166695) : Current worker (166697) exited with code 143 (Terminated)
Oct  9 09:57:18 compute-1 neutron-haproxy-ovnmeta-26c660ed-37e9-4f44-b603-3901342edf9b[166691]: [WARNING]  (166695) : All workers exited. Exiting... (0)
Oct  9 09:57:18 compute-1 systemd[1]: libpod-e8b9e9b6e4763889dd739382d351703a3bed85c9dfd5a52549047a322729930c.scope: Deactivated successfully.
Oct  9 09:57:18 compute-1 podman[167154]: 2025-10-09 09:57:18.709027252 +0000 UTC m=+0.034838489 container died e8b9e9b6e4763889dd739382d351703a3bed85c9dfd5a52549047a322729930c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-26c660ed-37e9-4f44-b603-3901342edf9b, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Oct  9 09:57:18 compute-1 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-e8b9e9b6e4763889dd739382d351703a3bed85c9dfd5a52549047a322729930c-userdata-shm.mount: Deactivated successfully.
Oct  9 09:57:18 compute-1 systemd[1]: var-lib-containers-storage-overlay-08a45d62be4bd9ce9df4641fb90075d2091d46c2e93ad8f4010bfee1112d2e50-merged.mount: Deactivated successfully.
Oct  9 09:57:18 compute-1 podman[167154]: 2025-10-09 09:57:18.733307711 +0000 UTC m=+0.059118947 container cleanup e8b9e9b6e4763889dd739382d351703a3bed85c9dfd5a52549047a322729930c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-26c660ed-37e9-4f44-b603-3901342edf9b, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Oct  9 09:57:18 compute-1 systemd[1]: libpod-conmon-e8b9e9b6e4763889dd739382d351703a3bed85c9dfd5a52549047a322729930c.scope: Deactivated successfully.
Oct  9 09:57:18 compute-1 nova_compute[162974]: 2025-10-09 09:57:18.767 2 INFO nova.virt.libvirt.driver [-] [instance: e9c8bd36-8fc8-4412-a6ea-b5b3e5b79f01] Instance destroyed successfully.#033[00m
Oct  9 09:57:18 compute-1 nova_compute[162974]: 2025-10-09 09:57:18.768 2 DEBUG nova.objects.instance [None req-5f0189fa-2a02-418c-868c-5744b6d06164 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Lazy-loading 'resources' on Instance uuid e9c8bd36-8fc8-4412-a6ea-b5b3e5b79f01 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  9 09:57:18 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:57:18 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000011s ======
Oct  9 09:57:18 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:57:18.769 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct  9 09:57:18 compute-1 nova_compute[162974]: 2025-10-09 09:57:18.783 2 DEBUG nova.virt.libvirt.vif [None req-5f0189fa-2a02-418c-868c-5744b6d06164 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-09T09:56:40Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-61543066',display_name='tempest-TestNetworkBasicOps-server-61543066',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-61543066',id=3,image_ref='9546778e-959c-466e-9bef-81ace5bd1cc5',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBOnE/dh71/I//FMlnppXDYKeeVJI2AqRfz3zTsFDUtMRPxSA9tfNCqu4Aqk04nGOjV/84C+cdkyXsPC0ZVfjXVfqYm026xBvCeeUUr4XUs/4snX/KNbtJXkvo3sUoZJ5aQ==',key_name='tempest-TestNetworkBasicOps-764715585',keypairs=<?>,launch_index=0,launched_at=2025-10-09T09:56:47Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='c69d102fb5504f48809f5fc47f1cb831',ramdisk_id='',reservation_id='r-r33pvc0t',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='9546778e-959c-466e-9bef-81ace5bd1cc5',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-74406332',owner_user_name='tempest-TestNetworkBasicOps-74406332-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-09T09:56:47Z,user_data=None,user_id='2351e05157514d1995a1ea4151d12fee',uuid=e9c8bd36-8fc8-4412-a6ea-b5b3e5b79f01,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "8d2d29b3-6509-4de5-a0a2-f7ef942f9a7f", "address": "fa:16:3e:4d:30:c8", "network": {"id": "26c660ed-37e9-4f44-b603-3901342edf9b", "bridge": "br-int", "label": "tempest-network-smoke--903239616", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.219", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c69d102fb5504f48809f5fc47f1cb831", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8d2d29b3-65", "ovs_interfaceid": "8d2d29b3-6509-4de5-a0a2-f7ef942f9a7f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  9 09:57:18 compute-1 nova_compute[162974]: 2025-10-09 09:57:18.784 2 DEBUG nova.network.os_vif_util [None req-5f0189fa-2a02-418c-868c-5744b6d06164 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Converting VIF {"id": "8d2d29b3-6509-4de5-a0a2-f7ef942f9a7f", "address": "fa:16:3e:4d:30:c8", "network": {"id": "26c660ed-37e9-4f44-b603-3901342edf9b", "bridge": "br-int", "label": "tempest-network-smoke--903239616", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.219", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c69d102fb5504f48809f5fc47f1cb831", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8d2d29b3-65", "ovs_interfaceid": "8d2d29b3-6509-4de5-a0a2-f7ef942f9a7f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  9 09:57:18 compute-1 nova_compute[162974]: 2025-10-09 09:57:18.785 2 DEBUG nova.network.os_vif_util [None req-5f0189fa-2a02-418c-868c-5744b6d06164 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:4d:30:c8,bridge_name='br-int',has_traffic_filtering=True,id=8d2d29b3-6509-4de5-a0a2-f7ef942f9a7f,network=Network(26c660ed-37e9-4f44-b603-3901342edf9b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8d2d29b3-65') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  9 09:57:18 compute-1 nova_compute[162974]: 2025-10-09 09:57:18.785 2 DEBUG os_vif [None req-5f0189fa-2a02-418c-868c-5744b6d06164 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:4d:30:c8,bridge_name='br-int',has_traffic_filtering=True,id=8d2d29b3-6509-4de5-a0a2-f7ef942f9a7f,network=Network(26c660ed-37e9-4f44-b603-3901342edf9b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8d2d29b3-65') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  9 09:57:18 compute-1 nova_compute[162974]: 2025-10-09 09:57:18.786 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 09:57:18 compute-1 podman[167178]: 2025-10-09 09:57:18.789220061 +0000 UTC m=+0.033364409 container remove e8b9e9b6e4763889dd739382d351703a3bed85c9dfd5a52549047a322729930c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-26c660ed-37e9-4f44-b603-3901342edf9b, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0)
Oct  9 09:57:18 compute-1 nova_compute[162974]: 2025-10-09 09:57:18.789 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8d2d29b3-65, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  9 09:57:18 compute-1 nova_compute[162974]: 2025-10-09 09:57:18.793 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 09:57:18 compute-1 nova_compute[162974]: 2025-10-09 09:57:18.795 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  9 09:57:18 compute-1 nova_compute[162974]: 2025-10-09 09:57:18.798 2 INFO os_vif [None req-5f0189fa-2a02-418c-868c-5744b6d06164 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:4d:30:c8,bridge_name='br-int',has_traffic_filtering=True,id=8d2d29b3-6509-4de5-a0a2-f7ef942f9a7f,network=Network(26c660ed-37e9-4f44-b603-3901342edf9b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8d2d29b3-65')#033[00m
Oct  9 09:57:18 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:57:18.795 165637 DEBUG oslo.privsep.daemon [-] privsep: reply[4dd3fa96-326b-4a0e-8760-729fe3a85c1e]: (4, ('Thu Oct  9 09:57:18 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-26c660ed-37e9-4f44-b603-3901342edf9b (e8b9e9b6e4763889dd739382d351703a3bed85c9dfd5a52549047a322729930c)\ne8b9e9b6e4763889dd739382d351703a3bed85c9dfd5a52549047a322729930c\nThu Oct  9 09:57:18 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-26c660ed-37e9-4f44-b603-3901342edf9b (e8b9e9b6e4763889dd739382d351703a3bed85c9dfd5a52549047a322729930c)\ne8b9e9b6e4763889dd739382d351703a3bed85c9dfd5a52549047a322729930c\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  9 09:57:18 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:57:18.800 165637 DEBUG oslo.privsep.daemon [-] privsep: reply[f2f6df52-84de-44b3-b37b-371e2828c45d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  9 09:57:18 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:57:18.801 71059 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap26c660ed-30, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  9 09:57:18 compute-1 kernel: tap26c660ed-30: left promiscuous mode
Oct  9 09:57:18 compute-1 nova_compute[162974]: 2025-10-09 09:57:18.820 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 09:57:18 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:57:18.820 165637 DEBUG oslo.privsep.daemon [-] privsep: reply[c52b2619-db31-4917-b79c-43a02e470b73]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  9 09:57:18 compute-1 nova_compute[162974]: 2025-10-09 09:57:18.825 2 DEBUG nova.compute.manager [req-f743ec9e-07e9-4765-8aee-b6cba8e7eed8 req-ea22a009-54ee-47f8-a890-496b9bd01fa3 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: e9c8bd36-8fc8-4412-a6ea-b5b3e5b79f01] Received event network-vif-unplugged-8d2d29b3-6509-4de5-a0a2-f7ef942f9a7f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  9 09:57:18 compute-1 nova_compute[162974]: 2025-10-09 09:57:18.826 2 DEBUG oslo_concurrency.lockutils [req-f743ec9e-07e9-4765-8aee-b6cba8e7eed8 req-ea22a009-54ee-47f8-a890-496b9bd01fa3 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] Acquiring lock "e9c8bd36-8fc8-4412-a6ea-b5b3e5b79f01-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  9 09:57:18 compute-1 nova_compute[162974]: 2025-10-09 09:57:18.826 2 DEBUG oslo_concurrency.lockutils [req-f743ec9e-07e9-4765-8aee-b6cba8e7eed8 req-ea22a009-54ee-47f8-a890-496b9bd01fa3 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] Lock "e9c8bd36-8fc8-4412-a6ea-b5b3e5b79f01-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  9 09:57:18 compute-1 nova_compute[162974]: 2025-10-09 09:57:18.826 2 DEBUG oslo_concurrency.lockutils [req-f743ec9e-07e9-4765-8aee-b6cba8e7eed8 req-ea22a009-54ee-47f8-a890-496b9bd01fa3 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] Lock "e9c8bd36-8fc8-4412-a6ea-b5b3e5b79f01-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  9 09:57:18 compute-1 nova_compute[162974]: 2025-10-09 09:57:18.826 2 DEBUG nova.compute.manager [req-f743ec9e-07e9-4765-8aee-b6cba8e7eed8 req-ea22a009-54ee-47f8-a890-496b9bd01fa3 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: e9c8bd36-8fc8-4412-a6ea-b5b3e5b79f01] No waiting events found dispatching network-vif-unplugged-8d2d29b3-6509-4de5-a0a2-f7ef942f9a7f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  9 09:57:18 compute-1 nova_compute[162974]: 2025-10-09 09:57:18.826 2 DEBUG nova.compute.manager [req-f743ec9e-07e9-4765-8aee-b6cba8e7eed8 req-ea22a009-54ee-47f8-a890-496b9bd01fa3 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: e9c8bd36-8fc8-4412-a6ea-b5b3e5b79f01] Received event network-vif-unplugged-8d2d29b3-6509-4de5-a0a2-f7ef942f9a7f for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct  9 09:57:18 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:57:18.840 165637 DEBUG oslo.privsep.daemon [-] privsep: reply[6d70eb01-fd37-40f6-8254-ea6487705924]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  9 09:57:18 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:57:18.841 165637 DEBUG oslo.privsep.daemon [-] privsep: reply[c7a34418-1198-421e-82f2-fea7612ee299]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  9 09:57:18 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:57:18.857 165637 DEBUG oslo.privsep.daemon [-] privsep: reply[da5c976d-bff2-444c-903b-7da7e85b973f]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 149041, 'reachable_time': 41097, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 167219, 'error': None, 'target': 'ovnmeta-26c660ed-37e9-4f44-b603-3901342edf9b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  9 09:57:18 compute-1 systemd[1]: run-netns-ovnmeta\x2d26c660ed\x2d37e9\x2d4f44\x2db603\x2d3901342edf9b.mount: Deactivated successfully.
Oct  9 09:57:18 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:57:18.860 71273 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-26c660ed-37e9-4f44-b603-3901342edf9b deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct  9 09:57:18 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:57:18.861 71273 DEBUG oslo.privsep.daemon [-] privsep: reply[8f4bae9c-1628-43d4-85c1-440c367f6f77]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  9 09:57:18 compute-1 nova_compute[162974]: 2025-10-09 09:57:18.987 2 INFO nova.virt.libvirt.driver [None req-5f0189fa-2a02-418c-868c-5744b6d06164 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: e9c8bd36-8fc8-4412-a6ea-b5b3e5b79f01] Deleting instance files /var/lib/nova/instances/e9c8bd36-8fc8-4412-a6ea-b5b3e5b79f01_del#033[00m
Oct  9 09:57:18 compute-1 nova_compute[162974]: 2025-10-09 09:57:18.988 2 INFO nova.virt.libvirt.driver [None req-5f0189fa-2a02-418c-868c-5744b6d06164 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: e9c8bd36-8fc8-4412-a6ea-b5b3e5b79f01] Deletion of /var/lib/nova/instances/e9c8bd36-8fc8-4412-a6ea-b5b3e5b79f01_del complete#033[00m
Oct  9 09:57:19 compute-1 nova_compute[162974]: 2025-10-09 09:57:19.025 2 INFO nova.compute.manager [None req-5f0189fa-2a02-418c-868c-5744b6d06164 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: e9c8bd36-8fc8-4412-a6ea-b5b3e5b79f01] Took 0.49 seconds to destroy the instance on the hypervisor.#033[00m
Oct  9 09:57:19 compute-1 nova_compute[162974]: 2025-10-09 09:57:19.026 2 DEBUG oslo.service.loopingcall [None req-5f0189fa-2a02-418c-868c-5744b6d06164 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct  9 09:57:19 compute-1 nova_compute[162974]: 2025-10-09 09:57:19.027 2 DEBUG nova.compute.manager [-] [instance: e9c8bd36-8fc8-4412-a6ea-b5b3e5b79f01] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct  9 09:57:19 compute-1 nova_compute[162974]: 2025-10-09 09:57:19.027 2 DEBUG nova.network.neutron [-] [instance: e9c8bd36-8fc8-4412-a6ea-b5b3e5b79f01] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct  9 09:57:19 compute-1 nova_compute[162974]: 2025-10-09 09:57:19.395 2 DEBUG nova.network.neutron [-] [instance: e9c8bd36-8fc8-4412-a6ea-b5b3e5b79f01] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  9 09:57:19 compute-1 nova_compute[162974]: 2025-10-09 09:57:19.403 2 INFO nova.compute.manager [-] [instance: e9c8bd36-8fc8-4412-a6ea-b5b3e5b79f01] Took 0.38 seconds to deallocate network for instance.#033[00m
Oct  9 09:57:19 compute-1 nova_compute[162974]: 2025-10-09 09:57:19.434 2 DEBUG oslo_concurrency.lockutils [None req-5f0189fa-2a02-418c-868c-5744b6d06164 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  9 09:57:19 compute-1 nova_compute[162974]: 2025-10-09 09:57:19.434 2 DEBUG oslo_concurrency.lockutils [None req-5f0189fa-2a02-418c-868c-5744b6d06164 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  9 09:57:19 compute-1 nova_compute[162974]: 2025-10-09 09:57:19.474 2 DEBUG oslo_concurrency.processutils [None req-5f0189fa-2a02-418c-868c-5744b6d06164 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  9 09:57:19 compute-1 ceph-mon[9795]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct  9 09:57:19 compute-1 ceph-mon[9795]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3832152381' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  9 09:57:19 compute-1 nova_compute[162974]: 2025-10-09 09:57:19.835 2 DEBUG oslo_concurrency.processutils [None req-5f0189fa-2a02-418c-868c-5744b6d06164 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.361s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  9 09:57:19 compute-1 nova_compute[162974]: 2025-10-09 09:57:19.840 2 DEBUG nova.compute.provider_tree [None req-5f0189fa-2a02-418c-868c-5744b6d06164 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Inventory has not changed in ProviderTree for provider: 79aa81b0-5a5d-4643-a355-ec5461cb321a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  9 09:57:19 compute-1 nova_compute[162974]: 2025-10-09 09:57:19.851 2 DEBUG nova.scheduler.client.report [None req-5f0189fa-2a02-418c-868c-5744b6d06164 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Inventory has not changed for provider 79aa81b0-5a5d-4643-a355-ec5461cb321a based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 4, 'reserved': 0, 'min_unit': 1, 'max_unit': 4, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  9 09:57:19 compute-1 nova_compute[162974]: 2025-10-09 09:57:19.863 2 DEBUG oslo_concurrency.lockutils [None req-5f0189fa-2a02-418c-868c-5744b6d06164 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.429s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  9 09:57:19 compute-1 nova_compute[162974]: 2025-10-09 09:57:19.881 2 INFO nova.scheduler.client.report [None req-5f0189fa-2a02-418c-868c-5744b6d06164 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Deleted allocations for instance e9c8bd36-8fc8-4412-a6ea-b5b3e5b79f01#033[00m
Oct  9 09:57:19 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:57:19 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:57:19 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:57:19.922 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:57:19 compute-1 nova_compute[162974]: 2025-10-09 09:57:19.930 2 DEBUG oslo_concurrency.lockutils [None req-5f0189fa-2a02-418c-868c-5744b6d06164 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Lock "e9c8bd36-8fc8-4412-a6ea-b5b3e5b79f01" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 1.395s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  9 09:57:20 compute-1 nova_compute[162974]: 2025-10-09 09:57:20.483 2 DEBUG nova.compute.manager [req-4cdefdbc-a6e5-41e6-91c2-add187c4ad72 req-62da240f-ab68-46eb-9610-88a5a39b9af9 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: e9c8bd36-8fc8-4412-a6ea-b5b3e5b79f01] Received event network-changed-8d2d29b3-6509-4de5-a0a2-f7ef942f9a7f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  9 09:57:20 compute-1 nova_compute[162974]: 2025-10-09 09:57:20.483 2 DEBUG nova.compute.manager [req-4cdefdbc-a6e5-41e6-91c2-add187c4ad72 req-62da240f-ab68-46eb-9610-88a5a39b9af9 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: e9c8bd36-8fc8-4412-a6ea-b5b3e5b79f01] Refreshing instance network info cache due to event network-changed-8d2d29b3-6509-4de5-a0a2-f7ef942f9a7f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  9 09:57:20 compute-1 nova_compute[162974]: 2025-10-09 09:57:20.483 2 DEBUG oslo_concurrency.lockutils [req-4cdefdbc-a6e5-41e6-91c2-add187c4ad72 req-62da240f-ab68-46eb-9610-88a5a39b9af9 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] Acquiring lock "refresh_cache-e9c8bd36-8fc8-4412-a6ea-b5b3e5b79f01" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  9 09:57:20 compute-1 nova_compute[162974]: 2025-10-09 09:57:20.483 2 DEBUG oslo_concurrency.lockutils [req-4cdefdbc-a6e5-41e6-91c2-add187c4ad72 req-62da240f-ab68-46eb-9610-88a5a39b9af9 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] Acquired lock "refresh_cache-e9c8bd36-8fc8-4412-a6ea-b5b3e5b79f01" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  9 09:57:20 compute-1 nova_compute[162974]: 2025-10-09 09:57:20.484 2 DEBUG nova.network.neutron [req-4cdefdbc-a6e5-41e6-91c2-add187c4ad72 req-62da240f-ab68-46eb-9610-88a5a39b9af9 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: e9c8bd36-8fc8-4412-a6ea-b5b3e5b79f01] Refreshing network info cache for port 8d2d29b3-6509-4de5-a0a2-f7ef942f9a7f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  9 09:57:20 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:57:20 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:57:20 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:57:20 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:57:20.772 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:57:20 compute-1 nova_compute[162974]: 2025-10-09 09:57:20.870 2 DEBUG nova.compute.manager [req-9669c6de-ca1b-40fd-b041-1c69dca7f312 req-6f2fddd2-3baa-4bb7-a9d6-2a8212e42af3 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: e9c8bd36-8fc8-4412-a6ea-b5b3e5b79f01] Received event network-vif-plugged-8d2d29b3-6509-4de5-a0a2-f7ef942f9a7f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  9 09:57:20 compute-1 nova_compute[162974]: 2025-10-09 09:57:20.870 2 DEBUG oslo_concurrency.lockutils [req-9669c6de-ca1b-40fd-b041-1c69dca7f312 req-6f2fddd2-3baa-4bb7-a9d6-2a8212e42af3 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] Acquiring lock "e9c8bd36-8fc8-4412-a6ea-b5b3e5b79f01-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  9 09:57:20 compute-1 nova_compute[162974]: 2025-10-09 09:57:20.871 2 DEBUG oslo_concurrency.lockutils [req-9669c6de-ca1b-40fd-b041-1c69dca7f312 req-6f2fddd2-3baa-4bb7-a9d6-2a8212e42af3 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] Lock "e9c8bd36-8fc8-4412-a6ea-b5b3e5b79f01-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  9 09:57:20 compute-1 nova_compute[162974]: 2025-10-09 09:57:20.871 2 DEBUG oslo_concurrency.lockutils [req-9669c6de-ca1b-40fd-b041-1c69dca7f312 req-6f2fddd2-3baa-4bb7-a9d6-2a8212e42af3 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] Lock "e9c8bd36-8fc8-4412-a6ea-b5b3e5b79f01-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  9 09:57:20 compute-1 nova_compute[162974]: 2025-10-09 09:57:20.871 2 DEBUG nova.compute.manager [req-9669c6de-ca1b-40fd-b041-1c69dca7f312 req-6f2fddd2-3baa-4bb7-a9d6-2a8212e42af3 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: e9c8bd36-8fc8-4412-a6ea-b5b3e5b79f01] No waiting events found dispatching network-vif-plugged-8d2d29b3-6509-4de5-a0a2-f7ef942f9a7f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  9 09:57:20 compute-1 nova_compute[162974]: 2025-10-09 09:57:20.871 2 WARNING nova.compute.manager [req-9669c6de-ca1b-40fd-b041-1c69dca7f312 req-6f2fddd2-3baa-4bb7-a9d6-2a8212e42af3 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: e9c8bd36-8fc8-4412-a6ea-b5b3e5b79f01] Received unexpected event network-vif-plugged-8d2d29b3-6509-4de5-a0a2-f7ef942f9a7f for instance with vm_state deleted and task_state None.#033[00m
Oct  9 09:57:20 compute-1 nova_compute[162974]: 2025-10-09 09:57:20.914 2 DEBUG nova.network.neutron [req-4cdefdbc-a6e5-41e6-91c2-add187c4ad72 req-62da240f-ab68-46eb-9610-88a5a39b9af9 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: e9c8bd36-8fc8-4412-a6ea-b5b3e5b79f01] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct  9 09:57:21 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:57:21 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:57:21 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:57:21.925 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:57:21 compute-1 nova_compute[162974]: 2025-10-09 09:57:21.929 2 DEBUG nova.network.neutron [req-4cdefdbc-a6e5-41e6-91c2-add187c4ad72 req-62da240f-ab68-46eb-9610-88a5a39b9af9 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: e9c8bd36-8fc8-4412-a6ea-b5b3e5b79f01] Instance is deleted, no further info cache update update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:106#033[00m
Oct  9 09:57:21 compute-1 nova_compute[162974]: 2025-10-09 09:57:21.930 2 DEBUG oslo_concurrency.lockutils [req-4cdefdbc-a6e5-41e6-91c2-add187c4ad72 req-62da240f-ab68-46eb-9610-88a5a39b9af9 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] Releasing lock "refresh_cache-e9c8bd36-8fc8-4412-a6ea-b5b3e5b79f01" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  9 09:57:21 compute-1 nova_compute[162974]: 2025-10-09 09:57:21.930 2 DEBUG nova.compute.manager [req-4cdefdbc-a6e5-41e6-91c2-add187c4ad72 req-62da240f-ab68-46eb-9610-88a5a39b9af9 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: e9c8bd36-8fc8-4412-a6ea-b5b3e5b79f01] Received event network-vif-deleted-8d2d29b3-6509-4de5-a0a2-f7ef942f9a7f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  9 09:57:22 compute-1 nova_compute[162974]: 2025-10-09 09:57:22.687 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 09:57:22 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:57:22 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000011s ======
Oct  9 09:57:22 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:57:22.774 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct  9 09:57:23 compute-1 nova_compute[162974]: 2025-10-09 09:57:23.793 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 09:57:23 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:57:23 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:57:23 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:57:23.928 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:57:24 compute-1 podman[167247]: 2025-10-09 09:57:24.542330785 +0000 UTC m=+0.046361519 container health_status a9976f6a2312783f72012e1ec9b07f14297aa62297711f47c0d257996be3427a (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251001, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, tcib_managed=true)
Oct  9 09:57:24 compute-1 podman[167246]: 2025-10-09 09:57:24.554248419 +0000 UTC m=+0.056443572 container health_status 5e1150c93314d5b38815fc8130e03b03cc91771cd442ce724d63cd33a7373f75 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent)
Oct  9 09:57:24 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:57:24 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000011s ======
Oct  9 09:57:24 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:57:24.778 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct  9 09:57:25 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:57:25 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:57:25 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000011s ======
Oct  9 09:57:25 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:57:25.930 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct  9 09:57:26 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:57:26 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:57:26 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:57:26.782 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:57:27 compute-1 nova_compute[162974]: 2025-10-09 09:57:27.232 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 09:57:27 compute-1 nova_compute[162974]: 2025-10-09 09:57:27.314 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 09:57:27 compute-1 nova_compute[162974]: 2025-10-09 09:57:27.688 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 09:57:27 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:57:27 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000011s ======
Oct  9 09:57:27 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:57:27.933 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct  9 09:57:28 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:57:28 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:57:28 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:57:28.785 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:57:28 compute-1 nova_compute[162974]: 2025-10-09 09:57:28.795 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 09:57:29 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:57:29 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:57:29 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:57:29.936 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:57:30 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:57:30 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:57:30 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:57:30 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:57:30.788 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:57:31 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:57:31 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:57:31 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:57:31.939 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:57:32 compute-1 nova_compute[162974]: 2025-10-09 09:57:32.690 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 09:57:32 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:57:32 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:57:32 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:57:32.791 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:57:33 compute-1 nova_compute[162974]: 2025-10-09 09:57:33.767 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760003838.765122, e9c8bd36-8fc8-4412-a6ea-b5b3e5b79f01 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  9 09:57:33 compute-1 nova_compute[162974]: 2025-10-09 09:57:33.768 2 INFO nova.compute.manager [-] [instance: e9c8bd36-8fc8-4412-a6ea-b5b3e5b79f01] VM Stopped (Lifecycle Event)#033[00m
Oct  9 09:57:33 compute-1 nova_compute[162974]: 2025-10-09 09:57:33.798 2 DEBUG nova.compute.manager [None req-402efd19-3943-4f94-a2ab-db2c1852f20e - - - - - -] [instance: e9c8bd36-8fc8-4412-a6ea-b5b3e5b79f01] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  9 09:57:33 compute-1 nova_compute[162974]: 2025-10-09 09:57:33.798 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 09:57:33 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:57:33 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:57:33 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:57:33.942 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:57:34 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:57:34 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:57:34 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:57:34.794 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:57:35 compute-1 ceph-mon[9795]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #40. Immutable memtables: 0.
Oct  9 09:57:35 compute-1 ceph-mon[9795]: rocksdb: (Original Log Time 2025/10/09-09:57:35.358812) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct  9 09:57:35 compute-1 ceph-mon[9795]: rocksdb: [db/flush_job.cc:856] [default] [JOB 21] Flushing memtable with next log file: 40
Oct  9 09:57:35 compute-1 ceph-mon[9795]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760003855358843, "job": 21, "event": "flush_started", "num_memtables": 1, "num_entries": 2388, "num_deletes": 251, "total_data_size": 6396320, "memory_usage": 6495288, "flush_reason": "Manual Compaction"}
Oct  9 09:57:35 compute-1 ceph-mon[9795]: rocksdb: [db/flush_job.cc:885] [default] [JOB 21] Level-0 flush table #41: started
Oct  9 09:57:35 compute-1 ceph-mon[9795]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760003855369361, "cf_name": "default", "job": 21, "event": "table_file_creation", "file_number": 41, "file_size": 4153932, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 20689, "largest_seqno": 23072, "table_properties": {"data_size": 4144126, "index_size": 6236, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2501, "raw_key_size": 20058, "raw_average_key_size": 20, "raw_value_size": 4124479, "raw_average_value_size": 4187, "num_data_blocks": 272, "num_entries": 985, "num_filter_entries": 985, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760003650, "oldest_key_time": 1760003650, "file_creation_time": 1760003855, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "94a5d839-0858-4e7b-94a4-0a54b15338db", "db_session_id": "M9CZJU0HKVV71NP1SGV8", "orig_file_number": 41, "seqno_to_time_mapping": "N/A"}}
Oct  9 09:57:35 compute-1 ceph-mon[9795]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 21] Flush lasted 10721 microseconds, and 7612 cpu microseconds.
Oct  9 09:57:35 compute-1 ceph-mon[9795]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  9 09:57:35 compute-1 ceph-mon[9795]: rocksdb: (Original Log Time 2025/10/09-09:57:35.369535) [db/flush_job.cc:967] [default] [JOB 21] Level-0 flush table #41: 4153932 bytes OK
Oct  9 09:57:35 compute-1 ceph-mon[9795]: rocksdb: (Original Log Time 2025/10/09-09:57:35.369623) [db/memtable_list.cc:519] [default] Level-0 commit table #41 started
Oct  9 09:57:35 compute-1 ceph-mon[9795]: rocksdb: (Original Log Time 2025/10/09-09:57:35.370064) [db/memtable_list.cc:722] [default] Level-0 commit table #41: memtable #1 done
Oct  9 09:57:35 compute-1 ceph-mon[9795]: rocksdb: (Original Log Time 2025/10/09-09:57:35.370075) EVENT_LOG_v1 {"time_micros": 1760003855370072, "job": 21, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct  9 09:57:35 compute-1 ceph-mon[9795]: rocksdb: (Original Log Time 2025/10/09-09:57:35.370086) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct  9 09:57:35 compute-1 ceph-mon[9795]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 21] Try to delete WAL files size 6385725, prev total WAL file size 6385725, number of live WAL files 2.
Oct  9 09:57:35 compute-1 ceph-mon[9795]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000037.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  9 09:57:35 compute-1 ceph-mon[9795]: rocksdb: (Original Log Time 2025/10/09-09:57:35.372208) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730031353036' seq:72057594037927935, type:22 .. '7061786F730031373538' seq:0, type:0; will stop at (end)
Oct  9 09:57:35 compute-1 ceph-mon[9795]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 22] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct  9 09:57:35 compute-1 ceph-mon[9795]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 21 Base level 0, inputs: [41(4056KB)], [39(12MB)]
Oct  9 09:57:35 compute-1 ceph-mon[9795]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760003855372231, "job": 22, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [41], "files_L6": [39], "score": -1, "input_data_size": 16775535, "oldest_snapshot_seqno": -1}
Oct  9 09:57:35 compute-1 ceph-mon[9795]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 22] Generated table #42: 5402 keys, 14601163 bytes, temperature: kUnknown
Oct  9 09:57:35 compute-1 ceph-mon[9795]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760003855415839, "cf_name": "default", "job": 22, "event": "table_file_creation", "file_number": 42, "file_size": 14601163, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 14562758, "index_size": 23767, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 13573, "raw_key_size": 136207, "raw_average_key_size": 25, "raw_value_size": 14462275, "raw_average_value_size": 2677, "num_data_blocks": 981, "num_entries": 5402, "num_filter_entries": 5402, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760002515, "oldest_key_time": 0, "file_creation_time": 1760003855, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "94a5d839-0858-4e7b-94a4-0a54b15338db", "db_session_id": "M9CZJU0HKVV71NP1SGV8", "orig_file_number": 42, "seqno_to_time_mapping": "N/A"}}
Oct  9 09:57:35 compute-1 ceph-mon[9795]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  9 09:57:35 compute-1 ceph-mon[9795]: rocksdb: (Original Log Time 2025/10/09-09:57:35.416441) [db/compaction/compaction_job.cc:1663] [default] [JOB 22] Compacted 1@0 + 1@6 files to L6 => 14601163 bytes
Oct  9 09:57:35 compute-1 ceph-mon[9795]: rocksdb: (Original Log Time 2025/10/09-09:57:35.416903) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 380.2 rd, 331.0 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(4.0, 12.0 +0.0 blob) out(13.9 +0.0 blob), read-write-amplify(7.6) write-amplify(3.5) OK, records in: 5926, records dropped: 524 output_compression: NoCompression
Oct  9 09:57:35 compute-1 ceph-mon[9795]: rocksdb: (Original Log Time 2025/10/09-09:57:35.416916) EVENT_LOG_v1 {"time_micros": 1760003855416911, "job": 22, "event": "compaction_finished", "compaction_time_micros": 44118, "compaction_time_cpu_micros": 20863, "output_level": 6, "num_output_files": 1, "total_output_size": 14601163, "num_input_records": 5926, "num_output_records": 5402, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct  9 09:57:35 compute-1 ceph-mon[9795]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000041.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  9 09:57:35 compute-1 ceph-mon[9795]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760003855417869, "job": 22, "event": "table_file_deletion", "file_number": 41}
Oct  9 09:57:35 compute-1 ceph-mon[9795]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000039.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  9 09:57:35 compute-1 ceph-mon[9795]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760003855419648, "job": 22, "event": "table_file_deletion", "file_number": 39}
Oct  9 09:57:35 compute-1 ceph-mon[9795]: rocksdb: (Original Log Time 2025/10/09-09:57:35.371605) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  9 09:57:35 compute-1 ceph-mon[9795]: rocksdb: (Original Log Time 2025/10/09-09:57:35.419773) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  9 09:57:35 compute-1 ceph-mon[9795]: rocksdb: (Original Log Time 2025/10/09-09:57:35.419776) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  9 09:57:35 compute-1 ceph-mon[9795]: rocksdb: (Original Log Time 2025/10/09-09:57:35.419777) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  9 09:57:35 compute-1 ceph-mon[9795]: rocksdb: (Original Log Time 2025/10/09-09:57:35.419778) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  9 09:57:35 compute-1 ceph-mon[9795]: rocksdb: (Original Log Time 2025/10/09-09:57:35.419779) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  9 09:57:35 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:57:35 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:57:35 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:57:35 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:57:35.944 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:57:36 compute-1 podman[167311]: 2025-10-09 09:57:36.578348912 +0000 UTC m=+0.081682950 container health_status 36bf385830b85e085dc3ff9729118ef141f0f1a0fdc2513f7947ab6ab795421e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller)
Oct  9 09:57:36 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:57:36 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000011s ======
Oct  9 09:57:36 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:57:36.797 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct  9 09:57:37 compute-1 nova_compute[162974]: 2025-10-09 09:57:37.692 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 09:57:37 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:57:37 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:57:37 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:57:37.948 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:57:38 compute-1 nova_compute[162974]: 2025-10-09 09:57:38.222 2 DEBUG oslo_concurrency.lockutils [None req-9b752aa4-6344-4720-be3c-a24dbc83691b 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Acquiring lock "e811a931-a3de-4684-8b2f-e916788f6ea9" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  9 09:57:38 compute-1 nova_compute[162974]: 2025-10-09 09:57:38.222 2 DEBUG oslo_concurrency.lockutils [None req-9b752aa4-6344-4720-be3c-a24dbc83691b 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Lock "e811a931-a3de-4684-8b2f-e916788f6ea9" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  9 09:57:38 compute-1 nova_compute[162974]: 2025-10-09 09:57:38.237 2 DEBUG nova.compute.manager [None req-9b752aa4-6344-4720-be3c-a24dbc83691b 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: e811a931-a3de-4684-8b2f-e916788f6ea9] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct  9 09:57:38 compute-1 nova_compute[162974]: 2025-10-09 09:57:38.293 2 DEBUG oslo_concurrency.lockutils [None req-9b752aa4-6344-4720-be3c-a24dbc83691b 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  9 09:57:38 compute-1 nova_compute[162974]: 2025-10-09 09:57:38.294 2 DEBUG oslo_concurrency.lockutils [None req-9b752aa4-6344-4720-be3c-a24dbc83691b 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  9 09:57:38 compute-1 nova_compute[162974]: 2025-10-09 09:57:38.298 2 DEBUG nova.virt.hardware [None req-9b752aa4-6344-4720-be3c-a24dbc83691b 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct  9 09:57:38 compute-1 nova_compute[162974]: 2025-10-09 09:57:38.298 2 INFO nova.compute.claims [None req-9b752aa4-6344-4720-be3c-a24dbc83691b 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: e811a931-a3de-4684-8b2f-e916788f6ea9] Claim successful on node compute-1.ctlplane.example.com#033[00m
Oct  9 09:57:38 compute-1 nova_compute[162974]: 2025-10-09 09:57:38.402 2 DEBUG oslo_concurrency.processutils [None req-9b752aa4-6344-4720-be3c-a24dbc83691b 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  9 09:57:38 compute-1 ceph-mon[9795]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct  9 09:57:38 compute-1 ceph-mon[9795]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/138502189' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  9 09:57:38 compute-1 nova_compute[162974]: 2025-10-09 09:57:38.750 2 DEBUG oslo_concurrency.processutils [None req-9b752aa4-6344-4720-be3c-a24dbc83691b 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.348s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  9 09:57:38 compute-1 nova_compute[162974]: 2025-10-09 09:57:38.755 2 DEBUG nova.compute.provider_tree [None req-9b752aa4-6344-4720-be3c-a24dbc83691b 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Inventory has not changed in ProviderTree for provider: 79aa81b0-5a5d-4643-a355-ec5461cb321a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  9 09:57:38 compute-1 nova_compute[162974]: 2025-10-09 09:57:38.770 2 DEBUG nova.scheduler.client.report [None req-9b752aa4-6344-4720-be3c-a24dbc83691b 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Inventory has not changed for provider 79aa81b0-5a5d-4643-a355-ec5461cb321a based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 4, 'reserved': 0, 'min_unit': 1, 'max_unit': 4, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  9 09:57:38 compute-1 nova_compute[162974]: 2025-10-09 09:57:38.791 2 DEBUG oslo_concurrency.lockutils [None req-9b752aa4-6344-4720-be3c-a24dbc83691b 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.498s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  9 09:57:38 compute-1 nova_compute[162974]: 2025-10-09 09:57:38.792 2 DEBUG nova.compute.manager [None req-9b752aa4-6344-4720-be3c-a24dbc83691b 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: e811a931-a3de-4684-8b2f-e916788f6ea9] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct  9 09:57:38 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:57:38 compute-1 nova_compute[162974]: 2025-10-09 09:57:38.800 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 09:57:38 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000011s ======
Oct  9 09:57:38 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:57:38.799 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct  9 09:57:38 compute-1 nova_compute[162974]: 2025-10-09 09:57:38.837 2 DEBUG nova.compute.manager [None req-9b752aa4-6344-4720-be3c-a24dbc83691b 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: e811a931-a3de-4684-8b2f-e916788f6ea9] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct  9 09:57:38 compute-1 nova_compute[162974]: 2025-10-09 09:57:38.837 2 DEBUG nova.network.neutron [None req-9b752aa4-6344-4720-be3c-a24dbc83691b 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: e811a931-a3de-4684-8b2f-e916788f6ea9] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct  9 09:57:38 compute-1 nova_compute[162974]: 2025-10-09 09:57:38.853 2 INFO nova.virt.libvirt.driver [None req-9b752aa4-6344-4720-be3c-a24dbc83691b 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: e811a931-a3de-4684-8b2f-e916788f6ea9] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct  9 09:57:38 compute-1 nova_compute[162974]: 2025-10-09 09:57:38.862 2 DEBUG nova.compute.manager [None req-9b752aa4-6344-4720-be3c-a24dbc83691b 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: e811a931-a3de-4684-8b2f-e916788f6ea9] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct  9 09:57:38 compute-1 nova_compute[162974]: 2025-10-09 09:57:38.924 2 DEBUG nova.compute.manager [None req-9b752aa4-6344-4720-be3c-a24dbc83691b 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: e811a931-a3de-4684-8b2f-e916788f6ea9] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct  9 09:57:38 compute-1 nova_compute[162974]: 2025-10-09 09:57:38.924 2 DEBUG nova.virt.libvirt.driver [None req-9b752aa4-6344-4720-be3c-a24dbc83691b 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: e811a931-a3de-4684-8b2f-e916788f6ea9] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct  9 09:57:38 compute-1 nova_compute[162974]: 2025-10-09 09:57:38.925 2 INFO nova.virt.libvirt.driver [None req-9b752aa4-6344-4720-be3c-a24dbc83691b 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: e811a931-a3de-4684-8b2f-e916788f6ea9] Creating image(s)#033[00m
Oct  9 09:57:38 compute-1 nova_compute[162974]: 2025-10-09 09:57:38.943 2 DEBUG nova.storage.rbd_utils [None req-9b752aa4-6344-4720-be3c-a24dbc83691b 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] rbd image e811a931-a3de-4684-8b2f-e916788f6ea9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  9 09:57:38 compute-1 nova_compute[162974]: 2025-10-09 09:57:38.959 2 DEBUG nova.storage.rbd_utils [None req-9b752aa4-6344-4720-be3c-a24dbc83691b 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] rbd image e811a931-a3de-4684-8b2f-e916788f6ea9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  9 09:57:38 compute-1 nova_compute[162974]: 2025-10-09 09:57:38.974 2 DEBUG nova.storage.rbd_utils [None req-9b752aa4-6344-4720-be3c-a24dbc83691b 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] rbd image e811a931-a3de-4684-8b2f-e916788f6ea9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  9 09:57:38 compute-1 nova_compute[162974]: 2025-10-09 09:57:38.976 2 DEBUG oslo_concurrency.processutils [None req-9b752aa4-6344-4720-be3c-a24dbc83691b 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/5c8d02c7691a8289e33d8b283b22550ff081dadb --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  9 09:57:38 compute-1 nova_compute[162974]: 2025-10-09 09:57:38.990 2 DEBUG nova.policy [None req-9b752aa4-6344-4720-be3c-a24dbc83691b 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '2351e05157514d1995a1ea4151d12fee', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'c69d102fb5504f48809f5fc47f1cb831', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct  9 09:57:39 compute-1 nova_compute[162974]: 2025-10-09 09:57:39.025 2 DEBUG oslo_concurrency.processutils [None req-9b752aa4-6344-4720-be3c-a24dbc83691b 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/5c8d02c7691a8289e33d8b283b22550ff081dadb --force-share --output=json" returned: 0 in 0.049s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  9 09:57:39 compute-1 nova_compute[162974]: 2025-10-09 09:57:39.026 2 DEBUG oslo_concurrency.lockutils [None req-9b752aa4-6344-4720-be3c-a24dbc83691b 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Acquiring lock "5c8d02c7691a8289e33d8b283b22550ff081dadb" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  9 09:57:39 compute-1 nova_compute[162974]: 2025-10-09 09:57:39.026 2 DEBUG oslo_concurrency.lockutils [None req-9b752aa4-6344-4720-be3c-a24dbc83691b 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Lock "5c8d02c7691a8289e33d8b283b22550ff081dadb" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  9 09:57:39 compute-1 nova_compute[162974]: 2025-10-09 09:57:39.027 2 DEBUG oslo_concurrency.lockutils [None req-9b752aa4-6344-4720-be3c-a24dbc83691b 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Lock "5c8d02c7691a8289e33d8b283b22550ff081dadb" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  9 09:57:39 compute-1 nova_compute[162974]: 2025-10-09 09:57:39.044 2 DEBUG nova.storage.rbd_utils [None req-9b752aa4-6344-4720-be3c-a24dbc83691b 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] rbd image e811a931-a3de-4684-8b2f-e916788f6ea9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  9 09:57:39 compute-1 nova_compute[162974]: 2025-10-09 09:57:39.047 2 DEBUG oslo_concurrency.processutils [None req-9b752aa4-6344-4720-be3c-a24dbc83691b 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/5c8d02c7691a8289e33d8b283b22550ff081dadb e811a931-a3de-4684-8b2f-e916788f6ea9_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  9 09:57:39 compute-1 nova_compute[162974]: 2025-10-09 09:57:39.178 2 DEBUG oslo_concurrency.processutils [None req-9b752aa4-6344-4720-be3c-a24dbc83691b 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/5c8d02c7691a8289e33d8b283b22550ff081dadb e811a931-a3de-4684-8b2f-e916788f6ea9_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.131s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  9 09:57:39 compute-1 nova_compute[162974]: 2025-10-09 09:57:39.222 2 DEBUG nova.storage.rbd_utils [None req-9b752aa4-6344-4720-be3c-a24dbc83691b 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] resizing rbd image e811a931-a3de-4684-8b2f-e916788f6ea9_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct  9 09:57:39 compute-1 nova_compute[162974]: 2025-10-09 09:57:39.275 2 DEBUG nova.objects.instance [None req-9b752aa4-6344-4720-be3c-a24dbc83691b 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Lazy-loading 'migration_context' on Instance uuid e811a931-a3de-4684-8b2f-e916788f6ea9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  9 09:57:39 compute-1 nova_compute[162974]: 2025-10-09 09:57:39.290 2 DEBUG nova.virt.libvirt.driver [None req-9b752aa4-6344-4720-be3c-a24dbc83691b 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: e811a931-a3de-4684-8b2f-e916788f6ea9] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct  9 09:57:39 compute-1 nova_compute[162974]: 2025-10-09 09:57:39.290 2 DEBUG nova.virt.libvirt.driver [None req-9b752aa4-6344-4720-be3c-a24dbc83691b 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: e811a931-a3de-4684-8b2f-e916788f6ea9] Ensure instance console log exists: /var/lib/nova/instances/e811a931-a3de-4684-8b2f-e916788f6ea9/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct  9 09:57:39 compute-1 nova_compute[162974]: 2025-10-09 09:57:39.291 2 DEBUG oslo_concurrency.lockutils [None req-9b752aa4-6344-4720-be3c-a24dbc83691b 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  9 09:57:39 compute-1 nova_compute[162974]: 2025-10-09 09:57:39.291 2 DEBUG oslo_concurrency.lockutils [None req-9b752aa4-6344-4720-be3c-a24dbc83691b 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  9 09:57:39 compute-1 nova_compute[162974]: 2025-10-09 09:57:39.291 2 DEBUG oslo_concurrency.lockutils [None req-9b752aa4-6344-4720-be3c-a24dbc83691b 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  9 09:57:39 compute-1 nova_compute[162974]: 2025-10-09 09:57:39.555 2 DEBUG nova.network.neutron [None req-9b752aa4-6344-4720-be3c-a24dbc83691b 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: e811a931-a3de-4684-8b2f-e916788f6ea9] Successfully created port: 5fdcca80-237d-4123-b2d6-a46f90186d0b _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct  9 09:57:39 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:57:39 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 09:57:39 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:57:39.951 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 09:57:40 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:57:40 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:57:40 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:57:40 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:57:40.802 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:57:41 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:57:41 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:57:41 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:57:41.954 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:57:42 compute-1 nova_compute[162974]: 2025-10-09 09:57:42.695 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 09:57:42 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:57:42 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:57:42 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:57:42.804 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:57:43 compute-1 nova_compute[162974]: 2025-10-09 09:57:43.802 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 09:57:43 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:57:43 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:57:43 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:57:43.956 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:57:44 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:57:44 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 09:57:44 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:57:44.805 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 09:57:44 compute-1 nova_compute[162974]: 2025-10-09 09:57:44.838 2 DEBUG nova.network.neutron [None req-9b752aa4-6344-4720-be3c-a24dbc83691b 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: e811a931-a3de-4684-8b2f-e916788f6ea9] Successfully updated port: 5fdcca80-237d-4123-b2d6-a46f90186d0b _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct  9 09:57:44 compute-1 nova_compute[162974]: 2025-10-09 09:57:44.852 2 DEBUG oslo_concurrency.lockutils [None req-9b752aa4-6344-4720-be3c-a24dbc83691b 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Acquiring lock "refresh_cache-e811a931-a3de-4684-8b2f-e916788f6ea9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  9 09:57:44 compute-1 nova_compute[162974]: 2025-10-09 09:57:44.852 2 DEBUG oslo_concurrency.lockutils [None req-9b752aa4-6344-4720-be3c-a24dbc83691b 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Acquired lock "refresh_cache-e811a931-a3de-4684-8b2f-e916788f6ea9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  9 09:57:44 compute-1 nova_compute[162974]: 2025-10-09 09:57:44.852 2 DEBUG nova.network.neutron [None req-9b752aa4-6344-4720-be3c-a24dbc83691b 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: e811a931-a3de-4684-8b2f-e916788f6ea9] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  9 09:57:44 compute-1 nova_compute[162974]: 2025-10-09 09:57:44.927 2 DEBUG nova.compute.manager [req-0b9e900a-c327-40ae-ab37-da2826536dea req-24b53dcc-af14-4033-8802-79e4af236b43 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: e811a931-a3de-4684-8b2f-e916788f6ea9] Received event network-changed-5fdcca80-237d-4123-b2d6-a46f90186d0b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  9 09:57:44 compute-1 nova_compute[162974]: 2025-10-09 09:57:44.927 2 DEBUG nova.compute.manager [req-0b9e900a-c327-40ae-ab37-da2826536dea req-24b53dcc-af14-4033-8802-79e4af236b43 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: e811a931-a3de-4684-8b2f-e916788f6ea9] Refreshing instance network info cache due to event network-changed-5fdcca80-237d-4123-b2d6-a46f90186d0b. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  9 09:57:44 compute-1 nova_compute[162974]: 2025-10-09 09:57:44.927 2 DEBUG oslo_concurrency.lockutils [req-0b9e900a-c327-40ae-ab37-da2826536dea req-24b53dcc-af14-4033-8802-79e4af236b43 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] Acquiring lock "refresh_cache-e811a931-a3de-4684-8b2f-e916788f6ea9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  9 09:57:45 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:57:45 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:57:45 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 09:57:45 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:57:45.958 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 09:57:45 compute-1 nova_compute[162974]: 2025-10-09 09:57:45.964 2 DEBUG nova.network.neutron [None req-9b752aa4-6344-4720-be3c-a24dbc83691b 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: e811a931-a3de-4684-8b2f-e916788f6ea9] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct  9 09:57:46 compute-1 podman[167528]: 2025-10-09 09:57:46.525232153 +0000 UTC m=+0.037298417 container health_status dd7a5da700b8ba72ceba21d69aecf00384a339e317089fce3f8212a1183599aa (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_managed=true, config_id=iscsid, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=iscsid, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0)
Oct  9 09:57:46 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:57:46 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:57:46 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:57:46.808 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:57:47 compute-1 nova_compute[162974]: 2025-10-09 09:57:47.696 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 09:57:47 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:57:47 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:57:47 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:57:47.961 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:57:48 compute-1 nova_compute[162974]: 2025-10-09 09:57:48.168 2 DEBUG nova.network.neutron [None req-9b752aa4-6344-4720-be3c-a24dbc83691b 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: e811a931-a3de-4684-8b2f-e916788f6ea9] Updating instance_info_cache with network_info: [{"id": "5fdcca80-237d-4123-b2d6-a46f90186d0b", "address": "fa:16:3e:00:48:22", "network": {"id": "48ce5fca-3386-4b8a-82e2-88fc71a94881", "bridge": "br-int", "label": "tempest-network-smoke--1247128788", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c69d102fb5504f48809f5fc47f1cb831", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5fdcca80-23", "ovs_interfaceid": "5fdcca80-237d-4123-b2d6-a46f90186d0b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  9 09:57:48 compute-1 nova_compute[162974]: 2025-10-09 09:57:48.181 2 DEBUG oslo_concurrency.lockutils [None req-9b752aa4-6344-4720-be3c-a24dbc83691b 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Releasing lock "refresh_cache-e811a931-a3de-4684-8b2f-e916788f6ea9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  9 09:57:48 compute-1 nova_compute[162974]: 2025-10-09 09:57:48.181 2 DEBUG nova.compute.manager [None req-9b752aa4-6344-4720-be3c-a24dbc83691b 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: e811a931-a3de-4684-8b2f-e916788f6ea9] Instance network_info: |[{"id": "5fdcca80-237d-4123-b2d6-a46f90186d0b", "address": "fa:16:3e:00:48:22", "network": {"id": "48ce5fca-3386-4b8a-82e2-88fc71a94881", "bridge": "br-int", "label": "tempest-network-smoke--1247128788", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c69d102fb5504f48809f5fc47f1cb831", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5fdcca80-23", "ovs_interfaceid": "5fdcca80-237d-4123-b2d6-a46f90186d0b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct  9 09:57:48 compute-1 nova_compute[162974]: 2025-10-09 09:57:48.181 2 DEBUG oslo_concurrency.lockutils [req-0b9e900a-c327-40ae-ab37-da2826536dea req-24b53dcc-af14-4033-8802-79e4af236b43 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] Acquired lock "refresh_cache-e811a931-a3de-4684-8b2f-e916788f6ea9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  9 09:57:48 compute-1 nova_compute[162974]: 2025-10-09 09:57:48.182 2 DEBUG nova.network.neutron [req-0b9e900a-c327-40ae-ab37-da2826536dea req-24b53dcc-af14-4033-8802-79e4af236b43 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: e811a931-a3de-4684-8b2f-e916788f6ea9] Refreshing network info cache for port 5fdcca80-237d-4123-b2d6-a46f90186d0b _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  9 09:57:48 compute-1 nova_compute[162974]: 2025-10-09 09:57:48.184 2 DEBUG nova.virt.libvirt.driver [None req-9b752aa4-6344-4720-be3c-a24dbc83691b 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: e811a931-a3de-4684-8b2f-e916788f6ea9] Start _get_guest_xml network_info=[{"id": "5fdcca80-237d-4123-b2d6-a46f90186d0b", "address": "fa:16:3e:00:48:22", "network": {"id": "48ce5fca-3386-4b8a-82e2-88fc71a94881", "bridge": "br-int", "label": "tempest-network-smoke--1247128788", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c69d102fb5504f48809f5fc47f1cb831", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5fdcca80-23", "ovs_interfaceid": "5fdcca80-237d-4123-b2d6-a46f90186d0b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-09T09:54:31Z,direct_url=<?>,disk_format='qcow2',id=9546778e-959c-466e-9bef-81ace5bd1cc5,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='a53d5690b6a54109990182326650a2b8',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-09T09:54:34Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_options': None, 'size': 0, 'device_type': 'disk', 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'guest_format': None, 'boot_index': 0, 'device_name': '/dev/vda', 'encrypted': False, 'encryption_format': None, 'image_id': '9546778e-959c-466e-9bef-81ace5bd1cc5'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct  9 09:57:48 compute-1 nova_compute[162974]: 2025-10-09 09:57:48.187 2 WARNING nova.virt.libvirt.driver [None req-9b752aa4-6344-4720-be3c-a24dbc83691b 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  9 09:57:48 compute-1 nova_compute[162974]: 2025-10-09 09:57:48.190 2 DEBUG nova.virt.libvirt.host [None req-9b752aa4-6344-4720-be3c-a24dbc83691b 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct  9 09:57:48 compute-1 nova_compute[162974]: 2025-10-09 09:57:48.191 2 DEBUG nova.virt.libvirt.host [None req-9b752aa4-6344-4720-be3c-a24dbc83691b 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct  9 09:57:48 compute-1 nova_compute[162974]: 2025-10-09 09:57:48.194 2 DEBUG nova.virt.libvirt.host [None req-9b752aa4-6344-4720-be3c-a24dbc83691b 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct  9 09:57:48 compute-1 nova_compute[162974]: 2025-10-09 09:57:48.195 2 DEBUG nova.virt.libvirt.host [None req-9b752aa4-6344-4720-be3c-a24dbc83691b 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct  9 09:57:48 compute-1 nova_compute[162974]: 2025-10-09 09:57:48.195 2 DEBUG nova.virt.libvirt.driver [None req-9b752aa4-6344-4720-be3c-a24dbc83691b 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct  9 09:57:48 compute-1 nova_compute[162974]: 2025-10-09 09:57:48.195 2 DEBUG nova.virt.hardware [None req-9b752aa4-6344-4720-be3c-a24dbc83691b 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-09T09:54:30Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='6c4b2ce4-c9d2-467c-bac4-dc6a1184a891',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-09T09:54:31Z,direct_url=<?>,disk_format='qcow2',id=9546778e-959c-466e-9bef-81ace5bd1cc5,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='a53d5690b6a54109990182326650a2b8',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-09T09:54:34Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct  9 09:57:48 compute-1 nova_compute[162974]: 2025-10-09 09:57:48.195 2 DEBUG nova.virt.hardware [None req-9b752aa4-6344-4720-be3c-a24dbc83691b 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct  9 09:57:48 compute-1 nova_compute[162974]: 2025-10-09 09:57:48.196 2 DEBUG nova.virt.hardware [None req-9b752aa4-6344-4720-be3c-a24dbc83691b 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct  9 09:57:48 compute-1 nova_compute[162974]: 2025-10-09 09:57:48.196 2 DEBUG nova.virt.hardware [None req-9b752aa4-6344-4720-be3c-a24dbc83691b 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct  9 09:57:48 compute-1 nova_compute[162974]: 2025-10-09 09:57:48.196 2 DEBUG nova.virt.hardware [None req-9b752aa4-6344-4720-be3c-a24dbc83691b 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct  9 09:57:48 compute-1 nova_compute[162974]: 2025-10-09 09:57:48.196 2 DEBUG nova.virt.hardware [None req-9b752aa4-6344-4720-be3c-a24dbc83691b 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct  9 09:57:48 compute-1 nova_compute[162974]: 2025-10-09 09:57:48.196 2 DEBUG nova.virt.hardware [None req-9b752aa4-6344-4720-be3c-a24dbc83691b 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct  9 09:57:48 compute-1 nova_compute[162974]: 2025-10-09 09:57:48.197 2 DEBUG nova.virt.hardware [None req-9b752aa4-6344-4720-be3c-a24dbc83691b 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct  9 09:57:48 compute-1 nova_compute[162974]: 2025-10-09 09:57:48.197 2 DEBUG nova.virt.hardware [None req-9b752aa4-6344-4720-be3c-a24dbc83691b 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct  9 09:57:48 compute-1 nova_compute[162974]: 2025-10-09 09:57:48.197 2 DEBUG nova.virt.hardware [None req-9b752aa4-6344-4720-be3c-a24dbc83691b 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct  9 09:57:48 compute-1 nova_compute[162974]: 2025-10-09 09:57:48.197 2 DEBUG nova.virt.hardware [None req-9b752aa4-6344-4720-be3c-a24dbc83691b 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct  9 09:57:48 compute-1 nova_compute[162974]: 2025-10-09 09:57:48.199 2 DEBUG oslo_concurrency.processutils [None req-9b752aa4-6344-4720-be3c-a24dbc83691b 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  9 09:57:48 compute-1 ceph-mon[9795]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Oct  9 09:57:48 compute-1 ceph-mon[9795]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2737454131' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  9 09:57:48 compute-1 nova_compute[162974]: 2025-10-09 09:57:48.540 2 DEBUG oslo_concurrency.processutils [None req-9b752aa4-6344-4720-be3c-a24dbc83691b 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.341s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  9 09:57:48 compute-1 nova_compute[162974]: 2025-10-09 09:57:48.557 2 DEBUG nova.storage.rbd_utils [None req-9b752aa4-6344-4720-be3c-a24dbc83691b 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] rbd image e811a931-a3de-4684-8b2f-e916788f6ea9_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  9 09:57:48 compute-1 nova_compute[162974]: 2025-10-09 09:57:48.559 2 DEBUG oslo_concurrency.processutils [None req-9b752aa4-6344-4720-be3c-a24dbc83691b 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  9 09:57:48 compute-1 nova_compute[162974]: 2025-10-09 09:57:48.804 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 09:57:48 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:57:48 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:57:48 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:57:48.810 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:57:48 compute-1 ceph-mon[9795]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Oct  9 09:57:48 compute-1 ceph-mon[9795]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1167411416' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  9 09:57:48 compute-1 nova_compute[162974]: 2025-10-09 09:57:48.897 2 DEBUG oslo_concurrency.processutils [None req-9b752aa4-6344-4720-be3c-a24dbc83691b 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.338s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  9 09:57:48 compute-1 nova_compute[162974]: 2025-10-09 09:57:48.898 2 DEBUG nova.virt.libvirt.vif [None req-9b752aa4-6344-4720-be3c-a24dbc83691b 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-09T09:57:37Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1899878609',display_name='tempest-TestNetworkBasicOps-server-1899878609',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1899878609',id=4,image_ref='9546778e-959c-466e-9bef-81ace5bd1cc5',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBMZm4C6LRAPtfAr5m77K3NqQxZMtrMltDZaOJjL5VWwqcCmgw5WghdaHagMLuObgYdNXZ08m9cLFMwpCyPUmMwXoTGjd15bkV3f92hF1qRvuScT4iCVTrgjr7uJ/wKpdPQ==',key_name='tempest-TestNetworkBasicOps-1948988860',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='c69d102fb5504f48809f5fc47f1cb831',ramdisk_id='',reservation_id='r-u8otko1l',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='9546778e-959c-466e-9bef-81ace5bd1cc5',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-74406332',owner_user_name='tempest-TestNetworkBasicOps-74406332-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-09T09:57:38Z,user_data=None,user_id='2351e05157514d1995a1ea4151d12fee',uuid=e811a931-a3de-4684-8b2f-e916788f6ea9,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "5fdcca80-237d-4123-b2d6-a46f90186d0b", "address": "fa:16:3e:00:48:22", "network": {"id": "48ce5fca-3386-4b8a-82e2-88fc71a94881", "bridge": "br-int", "label": "tempest-network-smoke--1247128788", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": 
false, "tenant_id": "c69d102fb5504f48809f5fc47f1cb831", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5fdcca80-23", "ovs_interfaceid": "5fdcca80-237d-4123-b2d6-a46f90186d0b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  9 09:57:48 compute-1 nova_compute[162974]: 2025-10-09 09:57:48.899 2 DEBUG nova.network.os_vif_util [None req-9b752aa4-6344-4720-be3c-a24dbc83691b 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Converting VIF {"id": "5fdcca80-237d-4123-b2d6-a46f90186d0b", "address": "fa:16:3e:00:48:22", "network": {"id": "48ce5fca-3386-4b8a-82e2-88fc71a94881", "bridge": "br-int", "label": "tempest-network-smoke--1247128788", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c69d102fb5504f48809f5fc47f1cb831", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5fdcca80-23", "ovs_interfaceid": "5fdcca80-237d-4123-b2d6-a46f90186d0b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  9 09:57:48 compute-1 nova_compute[162974]: 2025-10-09 09:57:48.899 2 DEBUG nova.network.os_vif_util [None req-9b752aa4-6344-4720-be3c-a24dbc83691b 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:00:48:22,bridge_name='br-int',has_traffic_filtering=True,id=5fdcca80-237d-4123-b2d6-a46f90186d0b,network=Network(48ce5fca-3386-4b8a-82e2-88fc71a94881),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5fdcca80-23') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  9 09:57:48 compute-1 nova_compute[162974]: 2025-10-09 09:57:48.900 2 DEBUG nova.objects.instance [None req-9b752aa4-6344-4720-be3c-a24dbc83691b 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Lazy-loading 'pci_devices' on Instance uuid e811a931-a3de-4684-8b2f-e916788f6ea9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  9 09:57:48 compute-1 nova_compute[162974]: 2025-10-09 09:57:48.912 2 DEBUG nova.virt.libvirt.driver [None req-9b752aa4-6344-4720-be3c-a24dbc83691b 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: e811a931-a3de-4684-8b2f-e916788f6ea9] End _get_guest_xml xml=<domain type="kvm">
Oct  9 09:57:48 compute-1 nova_compute[162974]:  <uuid>e811a931-a3de-4684-8b2f-e916788f6ea9</uuid>
Oct  9 09:57:48 compute-1 nova_compute[162974]:  <name>instance-00000004</name>
Oct  9 09:57:48 compute-1 nova_compute[162974]:  <memory>131072</memory>
Oct  9 09:57:48 compute-1 nova_compute[162974]:  <vcpu>1</vcpu>
Oct  9 09:57:48 compute-1 nova_compute[162974]:  <metadata>
Oct  9 09:57:48 compute-1 nova_compute[162974]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  9 09:57:48 compute-1 nova_compute[162974]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  9 09:57:48 compute-1 nova_compute[162974]:      <nova:name>tempest-TestNetworkBasicOps-server-1899878609</nova:name>
Oct  9 09:57:48 compute-1 nova_compute[162974]:      <nova:creationTime>2025-10-09 09:57:48</nova:creationTime>
Oct  9 09:57:48 compute-1 nova_compute[162974]:      <nova:flavor name="m1.nano">
Oct  9 09:57:48 compute-1 nova_compute[162974]:        <nova:memory>128</nova:memory>
Oct  9 09:57:48 compute-1 nova_compute[162974]:        <nova:disk>1</nova:disk>
Oct  9 09:57:48 compute-1 nova_compute[162974]:        <nova:swap>0</nova:swap>
Oct  9 09:57:48 compute-1 nova_compute[162974]:        <nova:ephemeral>0</nova:ephemeral>
Oct  9 09:57:48 compute-1 nova_compute[162974]:        <nova:vcpus>1</nova:vcpus>
Oct  9 09:57:48 compute-1 nova_compute[162974]:      </nova:flavor>
Oct  9 09:57:48 compute-1 nova_compute[162974]:      <nova:owner>
Oct  9 09:57:48 compute-1 nova_compute[162974]:        <nova:user uuid="2351e05157514d1995a1ea4151d12fee">tempest-TestNetworkBasicOps-74406332-project-member</nova:user>
Oct  9 09:57:48 compute-1 nova_compute[162974]:        <nova:project uuid="c69d102fb5504f48809f5fc47f1cb831">tempest-TestNetworkBasicOps-74406332</nova:project>
Oct  9 09:57:48 compute-1 nova_compute[162974]:      </nova:owner>
Oct  9 09:57:48 compute-1 nova_compute[162974]:      <nova:root type="image" uuid="9546778e-959c-466e-9bef-81ace5bd1cc5"/>
Oct  9 09:57:48 compute-1 nova_compute[162974]:      <nova:ports>
Oct  9 09:57:48 compute-1 nova_compute[162974]:        <nova:port uuid="5fdcca80-237d-4123-b2d6-a46f90186d0b">
Oct  9 09:57:48 compute-1 nova_compute[162974]:          <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Oct  9 09:57:48 compute-1 nova_compute[162974]:        </nova:port>
Oct  9 09:57:48 compute-1 nova_compute[162974]:      </nova:ports>
Oct  9 09:57:48 compute-1 nova_compute[162974]:    </nova:instance>
Oct  9 09:57:48 compute-1 nova_compute[162974]:  </metadata>
Oct  9 09:57:48 compute-1 nova_compute[162974]:  <sysinfo type="smbios">
Oct  9 09:57:48 compute-1 nova_compute[162974]:    <system>
Oct  9 09:57:48 compute-1 nova_compute[162974]:      <entry name="manufacturer">RDO</entry>
Oct  9 09:57:48 compute-1 nova_compute[162974]:      <entry name="product">OpenStack Compute</entry>
Oct  9 09:57:48 compute-1 nova_compute[162974]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  9 09:57:48 compute-1 nova_compute[162974]:      <entry name="serial">e811a931-a3de-4684-8b2f-e916788f6ea9</entry>
Oct  9 09:57:48 compute-1 nova_compute[162974]:      <entry name="uuid">e811a931-a3de-4684-8b2f-e916788f6ea9</entry>
Oct  9 09:57:48 compute-1 nova_compute[162974]:      <entry name="family">Virtual Machine</entry>
Oct  9 09:57:48 compute-1 nova_compute[162974]:    </system>
Oct  9 09:57:48 compute-1 nova_compute[162974]:  </sysinfo>
Oct  9 09:57:48 compute-1 nova_compute[162974]:  <os>
Oct  9 09:57:48 compute-1 nova_compute[162974]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  9 09:57:48 compute-1 nova_compute[162974]:    <boot dev="hd"/>
Oct  9 09:57:48 compute-1 nova_compute[162974]:    <smbios mode="sysinfo"/>
Oct  9 09:57:48 compute-1 nova_compute[162974]:  </os>
Oct  9 09:57:48 compute-1 nova_compute[162974]:  <features>
Oct  9 09:57:48 compute-1 nova_compute[162974]:    <acpi/>
Oct  9 09:57:48 compute-1 nova_compute[162974]:    <apic/>
Oct  9 09:57:48 compute-1 nova_compute[162974]:    <vmcoreinfo/>
Oct  9 09:57:48 compute-1 nova_compute[162974]:  </features>
Oct  9 09:57:48 compute-1 nova_compute[162974]:  <clock offset="utc">
Oct  9 09:57:48 compute-1 nova_compute[162974]:    <timer name="pit" tickpolicy="delay"/>
Oct  9 09:57:48 compute-1 nova_compute[162974]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  9 09:57:48 compute-1 nova_compute[162974]:    <timer name="hpet" present="no"/>
Oct  9 09:57:48 compute-1 nova_compute[162974]:  </clock>
Oct  9 09:57:48 compute-1 nova_compute[162974]:  <cpu mode="host-model" match="exact">
Oct  9 09:57:48 compute-1 nova_compute[162974]:    <topology sockets="1" cores="1" threads="1"/>
Oct  9 09:57:48 compute-1 nova_compute[162974]:  </cpu>
Oct  9 09:57:48 compute-1 nova_compute[162974]:  <devices>
Oct  9 09:57:48 compute-1 nova_compute[162974]:    <disk type="network" device="disk">
Oct  9 09:57:48 compute-1 nova_compute[162974]:      <driver type="raw" cache="none"/>
Oct  9 09:57:48 compute-1 nova_compute[162974]:      <source protocol="rbd" name="vms/e811a931-a3de-4684-8b2f-e916788f6ea9_disk">
Oct  9 09:57:48 compute-1 nova_compute[162974]:        <host name="192.168.122.100" port="6789"/>
Oct  9 09:57:48 compute-1 nova_compute[162974]:        <host name="192.168.122.102" port="6789"/>
Oct  9 09:57:48 compute-1 nova_compute[162974]:        <host name="192.168.122.101" port="6789"/>
Oct  9 09:57:48 compute-1 nova_compute[162974]:      </source>
Oct  9 09:57:48 compute-1 nova_compute[162974]:      <auth username="openstack">
Oct  9 09:57:48 compute-1 nova_compute[162974]:        <secret type="ceph" uuid="286f8bf0-da72-5823-9a4e-ac4457d9e609"/>
Oct  9 09:57:48 compute-1 nova_compute[162974]:      </auth>
Oct  9 09:57:48 compute-1 nova_compute[162974]:      <target dev="vda" bus="virtio"/>
Oct  9 09:57:48 compute-1 nova_compute[162974]:    </disk>
Oct  9 09:57:48 compute-1 nova_compute[162974]:    <disk type="network" device="cdrom">
Oct  9 09:57:48 compute-1 nova_compute[162974]:      <driver type="raw" cache="none"/>
Oct  9 09:57:48 compute-1 nova_compute[162974]:      <source protocol="rbd" name="vms/e811a931-a3de-4684-8b2f-e916788f6ea9_disk.config">
Oct  9 09:57:48 compute-1 nova_compute[162974]:        <host name="192.168.122.100" port="6789"/>
Oct  9 09:57:48 compute-1 nova_compute[162974]:        <host name="192.168.122.102" port="6789"/>
Oct  9 09:57:48 compute-1 nova_compute[162974]:        <host name="192.168.122.101" port="6789"/>
Oct  9 09:57:48 compute-1 nova_compute[162974]:      </source>
Oct  9 09:57:48 compute-1 nova_compute[162974]:      <auth username="openstack">
Oct  9 09:57:48 compute-1 nova_compute[162974]:        <secret type="ceph" uuid="286f8bf0-da72-5823-9a4e-ac4457d9e609"/>
Oct  9 09:57:48 compute-1 nova_compute[162974]:      </auth>
Oct  9 09:57:48 compute-1 nova_compute[162974]:      <target dev="sda" bus="sata"/>
Oct  9 09:57:48 compute-1 nova_compute[162974]:    </disk>
Oct  9 09:57:48 compute-1 nova_compute[162974]:    <interface type="ethernet">
Oct  9 09:57:48 compute-1 nova_compute[162974]:      <mac address="fa:16:3e:00:48:22"/>
Oct  9 09:57:48 compute-1 nova_compute[162974]:      <model type="virtio"/>
Oct  9 09:57:48 compute-1 nova_compute[162974]:      <driver name="vhost" rx_queue_size="512"/>
Oct  9 09:57:48 compute-1 nova_compute[162974]:      <mtu size="1442"/>
Oct  9 09:57:48 compute-1 nova_compute[162974]:      <target dev="tap5fdcca80-23"/>
Oct  9 09:57:48 compute-1 nova_compute[162974]:    </interface>
Oct  9 09:57:48 compute-1 nova_compute[162974]:    <serial type="pty">
Oct  9 09:57:48 compute-1 nova_compute[162974]:      <log file="/var/lib/nova/instances/e811a931-a3de-4684-8b2f-e916788f6ea9/console.log" append="off"/>
Oct  9 09:57:48 compute-1 nova_compute[162974]:    </serial>
Oct  9 09:57:48 compute-1 nova_compute[162974]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  9 09:57:48 compute-1 nova_compute[162974]:    <video>
Oct  9 09:57:48 compute-1 nova_compute[162974]:      <model type="virtio"/>
Oct  9 09:57:48 compute-1 nova_compute[162974]:    </video>
Oct  9 09:57:48 compute-1 nova_compute[162974]:    <input type="tablet" bus="usb"/>
Oct  9 09:57:48 compute-1 nova_compute[162974]:    <rng model="virtio">
Oct  9 09:57:48 compute-1 nova_compute[162974]:      <backend model="random">/dev/urandom</backend>
Oct  9 09:57:48 compute-1 nova_compute[162974]:    </rng>
Oct  9 09:57:48 compute-1 nova_compute[162974]:    <controller type="pci" model="pcie-root"/>
Oct  9 09:57:48 compute-1 nova_compute[162974]:    <controller type="pci" model="pcie-root-port"/>
Oct  9 09:57:48 compute-1 nova_compute[162974]:    <controller type="pci" model="pcie-root-port"/>
Oct  9 09:57:48 compute-1 nova_compute[162974]:    <controller type="pci" model="pcie-root-port"/>
Oct  9 09:57:48 compute-1 nova_compute[162974]:    <controller type="pci" model="pcie-root-port"/>
Oct  9 09:57:48 compute-1 nova_compute[162974]:    <controller type="pci" model="pcie-root-port"/>
Oct  9 09:57:48 compute-1 nova_compute[162974]:    <controller type="pci" model="pcie-root-port"/>
Oct  9 09:57:48 compute-1 nova_compute[162974]:    <controller type="pci" model="pcie-root-port"/>
Oct  9 09:57:48 compute-1 nova_compute[162974]:    <controller type="pci" model="pcie-root-port"/>
Oct  9 09:57:48 compute-1 nova_compute[162974]:    <controller type="pci" model="pcie-root-port"/>
Oct  9 09:57:48 compute-1 nova_compute[162974]:    <controller type="pci" model="pcie-root-port"/>
Oct  9 09:57:48 compute-1 nova_compute[162974]:    <controller type="pci" model="pcie-root-port"/>
Oct  9 09:57:48 compute-1 nova_compute[162974]:    <controller type="pci" model="pcie-root-port"/>
Oct  9 09:57:48 compute-1 nova_compute[162974]:    <controller type="pci" model="pcie-root-port"/>
Oct  9 09:57:48 compute-1 nova_compute[162974]:    <controller type="pci" model="pcie-root-port"/>
Oct  9 09:57:48 compute-1 nova_compute[162974]:    <controller type="pci" model="pcie-root-port"/>
Oct  9 09:57:48 compute-1 nova_compute[162974]:    <controller type="pci" model="pcie-root-port"/>
Oct  9 09:57:48 compute-1 nova_compute[162974]:    <controller type="pci" model="pcie-root-port"/>
Oct  9 09:57:48 compute-1 nova_compute[162974]:    <controller type="pci" model="pcie-root-port"/>
Oct  9 09:57:48 compute-1 nova_compute[162974]:    <controller type="pci" model="pcie-root-port"/>
Oct  9 09:57:48 compute-1 nova_compute[162974]:    <controller type="pci" model="pcie-root-port"/>
Oct  9 09:57:48 compute-1 nova_compute[162974]:    <controller type="pci" model="pcie-root-port"/>
Oct  9 09:57:48 compute-1 nova_compute[162974]:    <controller type="pci" model="pcie-root-port"/>
Oct  9 09:57:48 compute-1 nova_compute[162974]:    <controller type="pci" model="pcie-root-port"/>
Oct  9 09:57:48 compute-1 nova_compute[162974]:    <controller type="pci" model="pcie-root-port"/>
Oct  9 09:57:48 compute-1 nova_compute[162974]:    <controller type="usb" index="0"/>
Oct  9 09:57:48 compute-1 nova_compute[162974]:    <memballoon model="virtio">
Oct  9 09:57:48 compute-1 nova_compute[162974]:      <stats period="10"/>
Oct  9 09:57:48 compute-1 nova_compute[162974]:    </memballoon>
Oct  9 09:57:48 compute-1 nova_compute[162974]:  </devices>
Oct  9 09:57:48 compute-1 nova_compute[162974]: </domain>
Oct  9 09:57:48 compute-1 nova_compute[162974]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  9 09:57:48 compute-1 nova_compute[162974]: 2025-10-09 09:57:48.913 2 DEBUG nova.compute.manager [None req-9b752aa4-6344-4720-be3c-a24dbc83691b 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: e811a931-a3de-4684-8b2f-e916788f6ea9] Preparing to wait for external event network-vif-plugged-5fdcca80-237d-4123-b2d6-a46f90186d0b prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct  9 09:57:48 compute-1 nova_compute[162974]: 2025-10-09 09:57:48.913 2 DEBUG oslo_concurrency.lockutils [None req-9b752aa4-6344-4720-be3c-a24dbc83691b 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Acquiring lock "e811a931-a3de-4684-8b2f-e916788f6ea9-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  9 09:57:48 compute-1 nova_compute[162974]: 2025-10-09 09:57:48.914 2 DEBUG oslo_concurrency.lockutils [None req-9b752aa4-6344-4720-be3c-a24dbc83691b 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Lock "e811a931-a3de-4684-8b2f-e916788f6ea9-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  9 09:57:48 compute-1 nova_compute[162974]: 2025-10-09 09:57:48.914 2 DEBUG oslo_concurrency.lockutils [None req-9b752aa4-6344-4720-be3c-a24dbc83691b 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Lock "e811a931-a3de-4684-8b2f-e916788f6ea9-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  9 09:57:48 compute-1 nova_compute[162974]: 2025-10-09 09:57:48.914 2 DEBUG nova.virt.libvirt.vif [None req-9b752aa4-6344-4720-be3c-a24dbc83691b 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-09T09:57:37Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1899878609',display_name='tempest-TestNetworkBasicOps-server-1899878609',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1899878609',id=4,image_ref='9546778e-959c-466e-9bef-81ace5bd1cc5',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBMZm4C6LRAPtfAr5m77K3NqQxZMtrMltDZaOJjL5VWwqcCmgw5WghdaHagMLuObgYdNXZ08m9cLFMwpCyPUmMwXoTGjd15bkV3f92hF1qRvuScT4iCVTrgjr7uJ/wKpdPQ==',key_name='tempest-TestNetworkBasicOps-1948988860',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='c69d102fb5504f48809f5fc47f1cb831',ramdisk_id='',reservation_id='r-u8otko1l',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='9546778e-959c-466e-9bef-81ace5bd1cc5',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-74406332',owner_user_name='tempest-TestNetworkBasicOps-74406332-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-09T09:57:38Z,user_data=None,user_id='2351e05157514d1995a1ea4151d12fee',uuid=e811a931-a3de-4684-8b2f-e916788f6ea9,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "5fdcca80-237d-4123-b2d6-a46f90186d0b", "address": "fa:16:3e:00:48:22", "network": {"id": "48ce5fca-3386-4b8a-82e2-88fc71a94881", "bridge": "br-int", "label": "tempest-network-smoke--1247128788", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "c69d102fb5504f48809f5fc47f1cb831", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5fdcca80-23", "ovs_interfaceid": "5fdcca80-237d-4123-b2d6-a46f90186d0b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  9 09:57:48 compute-1 nova_compute[162974]: 2025-10-09 09:57:48.915 2 DEBUG nova.network.os_vif_util [None req-9b752aa4-6344-4720-be3c-a24dbc83691b 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Converting VIF {"id": "5fdcca80-237d-4123-b2d6-a46f90186d0b", "address": "fa:16:3e:00:48:22", "network": {"id": "48ce5fca-3386-4b8a-82e2-88fc71a94881", "bridge": "br-int", "label": "tempest-network-smoke--1247128788", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c69d102fb5504f48809f5fc47f1cb831", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5fdcca80-23", "ovs_interfaceid": "5fdcca80-237d-4123-b2d6-a46f90186d0b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  9 09:57:48 compute-1 nova_compute[162974]: 2025-10-09 09:57:48.915 2 DEBUG nova.network.os_vif_util [None req-9b752aa4-6344-4720-be3c-a24dbc83691b 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:00:48:22,bridge_name='br-int',has_traffic_filtering=True,id=5fdcca80-237d-4123-b2d6-a46f90186d0b,network=Network(48ce5fca-3386-4b8a-82e2-88fc71a94881),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5fdcca80-23') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  9 09:57:48 compute-1 nova_compute[162974]: 2025-10-09 09:57:48.915 2 DEBUG os_vif [None req-9b752aa4-6344-4720-be3c-a24dbc83691b 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:00:48:22,bridge_name='br-int',has_traffic_filtering=True,id=5fdcca80-237d-4123-b2d6-a46f90186d0b,network=Network(48ce5fca-3386-4b8a-82e2-88fc71a94881),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5fdcca80-23') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  9 09:57:48 compute-1 nova_compute[162974]: 2025-10-09 09:57:48.916 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 09:57:48 compute-1 nova_compute[162974]: 2025-10-09 09:57:48.916 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  9 09:57:48 compute-1 nova_compute[162974]: 2025-10-09 09:57:48.916 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  9 09:57:48 compute-1 nova_compute[162974]: 2025-10-09 09:57:48.918 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 09:57:48 compute-1 nova_compute[162974]: 2025-10-09 09:57:48.918 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap5fdcca80-23, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  9 09:57:48 compute-1 nova_compute[162974]: 2025-10-09 09:57:48.919 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap5fdcca80-23, col_values=(('external_ids', {'iface-id': '5fdcca80-237d-4123-b2d6-a46f90186d0b', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:00:48:22', 'vm-uuid': 'e811a931-a3de-4684-8b2f-e916788f6ea9'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  9 09:57:48 compute-1 NetworkManager[982]: <info>  [1760003868.9206] manager: (tap5fdcca80-23): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/43)
Oct  9 09:57:48 compute-1 nova_compute[162974]: 2025-10-09 09:57:48.920 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 09:57:48 compute-1 nova_compute[162974]: 2025-10-09 09:57:48.923 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  9 09:57:48 compute-1 nova_compute[162974]: 2025-10-09 09:57:48.924 2 INFO os_vif [None req-9b752aa4-6344-4720-be3c-a24dbc83691b 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:00:48:22,bridge_name='br-int',has_traffic_filtering=True,id=5fdcca80-237d-4123-b2d6-a46f90186d0b,network=Network(48ce5fca-3386-4b8a-82e2-88fc71a94881),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5fdcca80-23')#033[00m
Oct  9 09:57:48 compute-1 nova_compute[162974]: 2025-10-09 09:57:48.966 2 DEBUG nova.virt.libvirt.driver [None req-9b752aa4-6344-4720-be3c-a24dbc83691b 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  9 09:57:48 compute-1 nova_compute[162974]: 2025-10-09 09:57:48.967 2 DEBUG nova.virt.libvirt.driver [None req-9b752aa4-6344-4720-be3c-a24dbc83691b 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  9 09:57:48 compute-1 nova_compute[162974]: 2025-10-09 09:57:48.967 2 DEBUG nova.virt.libvirt.driver [None req-9b752aa4-6344-4720-be3c-a24dbc83691b 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] No VIF found with MAC fa:16:3e:00:48:22, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  9 09:57:48 compute-1 nova_compute[162974]: 2025-10-09 09:57:48.967 2 INFO nova.virt.libvirt.driver [None req-9b752aa4-6344-4720-be3c-a24dbc83691b 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: e811a931-a3de-4684-8b2f-e916788f6ea9] Using config drive#033[00m
Oct  9 09:57:48 compute-1 nova_compute[162974]: 2025-10-09 09:57:48.985 2 DEBUG nova.storage.rbd_utils [None req-9b752aa4-6344-4720-be3c-a24dbc83691b 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] rbd image e811a931-a3de-4684-8b2f-e916788f6ea9_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  9 09:57:49 compute-1 nova_compute[162974]: 2025-10-09 09:57:49.114 2 DEBUG oslo_service.periodic_task [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  9 09:57:49 compute-1 nova_compute[162974]: 2025-10-09 09:57:49.114 2 DEBUG oslo_service.periodic_task [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  9 09:57:49 compute-1 nova_compute[162974]: 2025-10-09 09:57:49.132 2 DEBUG oslo_concurrency.lockutils [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  9 09:57:49 compute-1 nova_compute[162974]: 2025-10-09 09:57:49.132 2 DEBUG oslo_concurrency.lockutils [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  9 09:57:49 compute-1 nova_compute[162974]: 2025-10-09 09:57:49.133 2 DEBUG oslo_concurrency.lockutils [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  9 09:57:49 compute-1 nova_compute[162974]: 2025-10-09 09:57:49.133 2 DEBUG nova.compute.resource_tracker [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  9 09:57:49 compute-1 nova_compute[162974]: 2025-10-09 09:57:49.133 2 DEBUG oslo_concurrency.processutils [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  9 09:57:49 compute-1 nova_compute[162974]: 2025-10-09 09:57:49.173 2 DEBUG nova.network.neutron [req-0b9e900a-c327-40ae-ab37-da2826536dea req-24b53dcc-af14-4033-8802-79e4af236b43 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: e811a931-a3de-4684-8b2f-e916788f6ea9] Updated VIF entry in instance network info cache for port 5fdcca80-237d-4123-b2d6-a46f90186d0b. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  9 09:57:49 compute-1 nova_compute[162974]: 2025-10-09 09:57:49.174 2 DEBUG nova.network.neutron [req-0b9e900a-c327-40ae-ab37-da2826536dea req-24b53dcc-af14-4033-8802-79e4af236b43 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: e811a931-a3de-4684-8b2f-e916788f6ea9] Updating instance_info_cache with network_info: [{"id": "5fdcca80-237d-4123-b2d6-a46f90186d0b", "address": "fa:16:3e:00:48:22", "network": {"id": "48ce5fca-3386-4b8a-82e2-88fc71a94881", "bridge": "br-int", "label": "tempest-network-smoke--1247128788", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c69d102fb5504f48809f5fc47f1cb831", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5fdcca80-23", "ovs_interfaceid": "5fdcca80-237d-4123-b2d6-a46f90186d0b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  9 09:57:49 compute-1 nova_compute[162974]: 2025-10-09 09:57:49.186 2 DEBUG oslo_concurrency.lockutils [req-0b9e900a-c327-40ae-ab37-da2826536dea req-24b53dcc-af14-4033-8802-79e4af236b43 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] Releasing lock "refresh_cache-e811a931-a3de-4684-8b2f-e916788f6ea9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  9 09:57:49 compute-1 nova_compute[162974]: 2025-10-09 09:57:49.255 2 INFO nova.virt.libvirt.driver [None req-9b752aa4-6344-4720-be3c-a24dbc83691b 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: e811a931-a3de-4684-8b2f-e916788f6ea9] Creating config drive at /var/lib/nova/instances/e811a931-a3de-4684-8b2f-e916788f6ea9/disk.config#033[00m
Oct  9 09:57:49 compute-1 nova_compute[162974]: 2025-10-09 09:57:49.259 2 DEBUG oslo_concurrency.processutils [None req-9b752aa4-6344-4720-be3c-a24dbc83691b 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/e811a931-a3de-4684-8b2f-e916788f6ea9/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpv5rdh7hh execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  9 09:57:49 compute-1 nova_compute[162974]: 2025-10-09 09:57:49.379 2 DEBUG oslo_concurrency.processutils [None req-9b752aa4-6344-4720-be3c-a24dbc83691b 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/e811a931-a3de-4684-8b2f-e916788f6ea9/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpv5rdh7hh" returned: 0 in 0.119s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  9 09:57:49 compute-1 nova_compute[162974]: 2025-10-09 09:57:49.399 2 DEBUG nova.storage.rbd_utils [None req-9b752aa4-6344-4720-be3c-a24dbc83691b 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] rbd image e811a931-a3de-4684-8b2f-e916788f6ea9_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  9 09:57:49 compute-1 nova_compute[162974]: 2025-10-09 09:57:49.402 2 DEBUG oslo_concurrency.processutils [None req-9b752aa4-6344-4720-be3c-a24dbc83691b 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/e811a931-a3de-4684-8b2f-e916788f6ea9/disk.config e811a931-a3de-4684-8b2f-e916788f6ea9_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  9 09:57:49 compute-1 ceph-mon[9795]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct  9 09:57:49 compute-1 ceph-mon[9795]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/938447497' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  9 09:57:49 compute-1 nova_compute[162974]: 2025-10-09 09:57:49.476 2 DEBUG oslo_concurrency.processutils [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.343s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  9 09:57:49 compute-1 nova_compute[162974]: 2025-10-09 09:57:49.484 2 DEBUG oslo_concurrency.processutils [None req-9b752aa4-6344-4720-be3c-a24dbc83691b 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/e811a931-a3de-4684-8b2f-e916788f6ea9/disk.config e811a931-a3de-4684-8b2f-e916788f6ea9_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.082s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  9 09:57:49 compute-1 nova_compute[162974]: 2025-10-09 09:57:49.485 2 INFO nova.virt.libvirt.driver [None req-9b752aa4-6344-4720-be3c-a24dbc83691b 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: e811a931-a3de-4684-8b2f-e916788f6ea9] Deleting local config drive /var/lib/nova/instances/e811a931-a3de-4684-8b2f-e916788f6ea9/disk.config because it was imported into RBD.#033[00m
Oct  9 09:57:49 compute-1 kernel: tap5fdcca80-23: entered promiscuous mode
Oct  9 09:57:49 compute-1 NetworkManager[982]: <info>  [1760003869.5159] manager: (tap5fdcca80-23): new Tun device (/org/freedesktop/NetworkManager/Devices/44)
Oct  9 09:57:49 compute-1 ovn_controller[62080]: 2025-10-09T09:57:49Z|00055|binding|INFO|Claiming lport 5fdcca80-237d-4123-b2d6-a46f90186d0b for this chassis.
Oct  9 09:57:49 compute-1 ovn_controller[62080]: 2025-10-09T09:57:49Z|00056|binding|INFO|5fdcca80-237d-4123-b2d6-a46f90186d0b: Claiming fa:16:3e:00:48:22 10.100.0.3
Oct  9 09:57:49 compute-1 nova_compute[162974]: 2025-10-09 09:57:49.518 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 09:57:49 compute-1 nova_compute[162974]: 2025-10-09 09:57:49.522 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 09:57:49 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:57:49.527 71059 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:00:48:22 10.100.0.3'], port_security=['fa:16:3e:00:48:22 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'e811a931-a3de-4684-8b2f-e916788f6ea9', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-48ce5fca-3386-4b8a-82e2-88fc71a94881', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c69d102fb5504f48809f5fc47f1cb831', 'neutron:revision_number': '2', 'neutron:security_group_ids': '1ab824ab-8ac2-4d9c-9d6e-9bbdb4458228', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=6496ebe5-cfc3-4a35-b1e6-27021c277fad, chassis=[<ovs.db.idl.Row object at 0x7fcc797b4850>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcc797b4850>], logical_port=5fdcca80-237d-4123-b2d6-a46f90186d0b) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  9 09:57:49 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:57:49.527 71059 INFO neutron.agent.ovn.metadata.agent [-] Port 5fdcca80-237d-4123-b2d6-a46f90186d0b in datapath 48ce5fca-3386-4b8a-82e2-88fc71a94881 bound to our chassis#033[00m
Oct  9 09:57:49 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:57:49.528 71059 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 48ce5fca-3386-4b8a-82e2-88fc71a94881#033[00m
Oct  9 09:57:49 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:57:49.537 165637 DEBUG oslo.privsep.daemon [-] privsep: reply[dc763a01-c71a-4385-bba5-1a2d27011448]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  9 09:57:49 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:57:49.537 71059 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap48ce5fca-31 in ovnmeta-48ce5fca-3386-4b8a-82e2-88fc71a94881 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct  9 09:57:49 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:57:49.539 165637 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap48ce5fca-30 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct  9 09:57:49 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:57:49.539 165637 DEBUG oslo.privsep.daemon [-] privsep: reply[14235a8e-6c8c-47d0-b0ea-4ff15b62fa46]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  9 09:57:49 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:57:49.540 165637 DEBUG oslo.privsep.daemon [-] privsep: reply[eb51ca5a-1016-4f66-a5f1-612133823d51]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  9 09:57:49 compute-1 systemd-udevd[167704]: Network interface NamePolicy= disabled on kernel command line.
Oct  9 09:57:49 compute-1 systemd-machined[120683]: New machine qemu-3-instance-00000004.
Oct  9 09:57:49 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:57:49.551 71273 DEBUG oslo.privsep.daemon [-] privsep: reply[abc1ba05-65ba-4416-853d-616480e105a4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  9 09:57:49 compute-1 NetworkManager[982]: <info>  [1760003869.5552] device (tap5fdcca80-23): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  9 09:57:49 compute-1 NetworkManager[982]: <info>  [1760003869.5558] device (tap5fdcca80-23): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  9 09:57:49 compute-1 systemd[1]: Started Virtual Machine qemu-3-instance-00000004.
Oct  9 09:57:49 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:57:49.576 165637 DEBUG oslo.privsep.daemon [-] privsep: reply[5e654eb3-54f8-42ea-8022-c299bac52527]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  9 09:57:49 compute-1 nova_compute[162974]: 2025-10-09 09:57:49.586 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 09:57:49 compute-1 ovn_controller[62080]: 2025-10-09T09:57:49Z|00057|binding|INFO|Setting lport 5fdcca80-237d-4123-b2d6-a46f90186d0b ovn-installed in OVS
Oct  9 09:57:49 compute-1 ovn_controller[62080]: 2025-10-09T09:57:49Z|00058|binding|INFO|Setting lport 5fdcca80-237d-4123-b2d6-a46f90186d0b up in Southbound
Oct  9 09:57:49 compute-1 nova_compute[162974]: 2025-10-09 09:57:49.595 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 09:57:49 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:57:49.603 165694 DEBUG oslo.privsep.daemon [-] privsep: reply[9f9cb2cb-9e42-424d-bcc2-9e53eac8e6a5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  9 09:57:49 compute-1 NetworkManager[982]: <info>  [1760003869.6071] manager: (tap48ce5fca-30): new Veth device (/org/freedesktop/NetworkManager/Devices/45)
Oct  9 09:57:49 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:57:49.608 165637 DEBUG oslo.privsep.daemon [-] privsep: reply[1e140195-8e76-40f9-9b78-f2d212c21bf5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  9 09:57:49 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:57:49.633 165694 DEBUG oslo.privsep.daemon [-] privsep: reply[ef11bf2b-ebf1-4718-84df-3c5bcd3be953]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  9 09:57:49 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:57:49.636 165694 DEBUG oslo.privsep.daemon [-] privsep: reply[dd5aff04-5e9b-465a-aa00-e590fa20cf8d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  9 09:57:49 compute-1 NetworkManager[982]: <info>  [1760003869.6547] device (tap48ce5fca-30): carrier: link connected
Oct  9 09:57:49 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:57:49.658 165694 DEBUG oslo.privsep.daemon [-] privsep: reply[a1b405e9-004d-43b3-85e6-cac21eef1e79]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  9 09:57:49 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:57:49.669 165637 DEBUG oslo.privsep.daemon [-] privsep: reply[844fa257-a68f-485d-bd61-c027dcbb01ff]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap48ce5fca-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 4], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 4], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:fa:a8:ee'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 28], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 155322, 'reachable_time': 20529, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 167729, 'error': None, 'target': 'ovnmeta-48ce5fca-3386-4b8a-82e2-88fc71a94881', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  9 09:57:49 compute-1 nova_compute[162974]: 2025-10-09 09:57:49.671 2 DEBUG nova.virt.libvirt.driver [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] skipping disk for instance-00000004 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  9 09:57:49 compute-1 nova_compute[162974]: 2025-10-09 09:57:49.671 2 DEBUG nova.virt.libvirt.driver [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] skipping disk for instance-00000004 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  9 09:57:49 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:57:49.683 165637 DEBUG oslo.privsep.daemon [-] privsep: reply[0e8aec32-cf46-4256-b191-775854531a1f]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fefa:a8ee'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 155322, 'tstamp': 155322}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 167730, 'error': None, 'target': 'ovnmeta-48ce5fca-3386-4b8a-82e2-88fc71a94881', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  9 09:57:49 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:57:49.699 165637 DEBUG oslo.privsep.daemon [-] privsep: reply[a8e183ae-6464-4171-8588-81fb6d42f456]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap48ce5fca-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 4], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 4], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:fa:a8:ee'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 28], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 155322, 'reachable_time': 20529, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 167731, 'error': None, 'target': 'ovnmeta-48ce5fca-3386-4b8a-82e2-88fc71a94881', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  9 09:57:49 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:57:49.729 165637 DEBUG oslo.privsep.daemon [-] privsep: reply[9f1a9ee2-7430-4288-8293-f839ada5b0ca]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  9 09:57:49 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:57:49.791 165637 DEBUG oslo.privsep.daemon [-] privsep: reply[abb8a40d-00bf-4adb-b55c-50fddb53f42f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  9 09:57:49 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:57:49.792 71059 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap48ce5fca-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  9 09:57:49 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:57:49.793 71059 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  9 09:57:49 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:57:49.793 71059 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap48ce5fca-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  9 09:57:49 compute-1 nova_compute[162974]: 2025-10-09 09:57:49.796 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 09:57:49 compute-1 NetworkManager[982]: <info>  [1760003869.7968] manager: (tap48ce5fca-30): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/46)
Oct  9 09:57:49 compute-1 kernel: tap48ce5fca-30: entered promiscuous mode
Oct  9 09:57:49 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:57:49.803 71059 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap48ce5fca-30, col_values=(('external_ids', {'iface-id': 'b85a0af7-8e0c-4129-9420-36103d8f1eb6'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  9 09:57:49 compute-1 nova_compute[162974]: 2025-10-09 09:57:49.803 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 09:57:49 compute-1 nova_compute[162974]: 2025-10-09 09:57:49.804 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 09:57:49 compute-1 ovn_controller[62080]: 2025-10-09T09:57:49Z|00059|binding|INFO|Releasing lport b85a0af7-8e0c-4129-9420-36103d8f1eb6 from this chassis (sb_readonly=0)
Oct  9 09:57:49 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:57:49.805 71059 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/48ce5fca-3386-4b8a-82e2-88fc71a94881.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/48ce5fca-3386-4b8a-82e2-88fc71a94881.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct  9 09:57:49 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:57:49.805 165637 DEBUG oslo.privsep.daemon [-] privsep: reply[11593792-87e7-477e-8b57-cf982b81a5de]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  9 09:57:49 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:57:49.806 71059 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct  9 09:57:49 compute-1 ovn_metadata_agent[71054]: global
Oct  9 09:57:49 compute-1 ovn_metadata_agent[71054]:    log         /dev/log local0 debug
Oct  9 09:57:49 compute-1 ovn_metadata_agent[71054]:    log-tag     haproxy-metadata-proxy-48ce5fca-3386-4b8a-82e2-88fc71a94881
Oct  9 09:57:49 compute-1 ovn_metadata_agent[71054]:    user        root
Oct  9 09:57:49 compute-1 ovn_metadata_agent[71054]:    group       root
Oct  9 09:57:49 compute-1 ovn_metadata_agent[71054]:    maxconn     1024
Oct  9 09:57:49 compute-1 ovn_metadata_agent[71054]:    pidfile     /var/lib/neutron/external/pids/48ce5fca-3386-4b8a-82e2-88fc71a94881.pid.haproxy
Oct  9 09:57:49 compute-1 ovn_metadata_agent[71054]:    daemon
Oct  9 09:57:49 compute-1 ovn_metadata_agent[71054]: 
Oct  9 09:57:49 compute-1 ovn_metadata_agent[71054]: defaults
Oct  9 09:57:49 compute-1 ovn_metadata_agent[71054]:    log global
Oct  9 09:57:49 compute-1 ovn_metadata_agent[71054]:    mode http
Oct  9 09:57:49 compute-1 ovn_metadata_agent[71054]:    option httplog
Oct  9 09:57:49 compute-1 ovn_metadata_agent[71054]:    option dontlognull
Oct  9 09:57:49 compute-1 ovn_metadata_agent[71054]:    option http-server-close
Oct  9 09:57:49 compute-1 ovn_metadata_agent[71054]:    option forwardfor
Oct  9 09:57:49 compute-1 ovn_metadata_agent[71054]:    retries                 3
Oct  9 09:57:49 compute-1 ovn_metadata_agent[71054]:    timeout http-request    30s
Oct  9 09:57:49 compute-1 ovn_metadata_agent[71054]:    timeout connect         30s
Oct  9 09:57:49 compute-1 ovn_metadata_agent[71054]:    timeout client          32s
Oct  9 09:57:49 compute-1 ovn_metadata_agent[71054]:    timeout server          32s
Oct  9 09:57:49 compute-1 ovn_metadata_agent[71054]:    timeout http-keep-alive 30s
Oct  9 09:57:49 compute-1 ovn_metadata_agent[71054]: 
Oct  9 09:57:49 compute-1 ovn_metadata_agent[71054]: 
Oct  9 09:57:49 compute-1 ovn_metadata_agent[71054]: listen listener
Oct  9 09:57:49 compute-1 ovn_metadata_agent[71054]:    bind 169.254.169.254:80
Oct  9 09:57:49 compute-1 ovn_metadata_agent[71054]:    server metadata /var/lib/neutron/metadata_proxy
Oct  9 09:57:49 compute-1 ovn_metadata_agent[71054]:    http-request add-header X-OVN-Network-ID 48ce5fca-3386-4b8a-82e2-88fc71a94881
Oct  9 09:57:49 compute-1 ovn_metadata_agent[71054]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct  9 09:57:49 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:57:49.808 71059 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-48ce5fca-3386-4b8a-82e2-88fc71a94881', 'env', 'PROCESS_TAG=haproxy-48ce5fca-3386-4b8a-82e2-88fc71a94881', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/48ce5fca-3386-4b8a-82e2-88fc71a94881.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct  9 09:57:49 compute-1 nova_compute[162974]: 2025-10-09 09:57:49.817 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 09:57:49 compute-1 nova_compute[162974]: 2025-10-09 09:57:49.853 2 DEBUG nova.compute.manager [req-5ab289bc-50f5-4ed6-9190-55f5f303a2e7 req-0c35abbb-7c16-4d66-b7f4-ffe9614ee18c b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: e811a931-a3de-4684-8b2f-e916788f6ea9] Received event network-vif-plugged-5fdcca80-237d-4123-b2d6-a46f90186d0b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  9 09:57:49 compute-1 nova_compute[162974]: 2025-10-09 09:57:49.853 2 DEBUG oslo_concurrency.lockutils [req-5ab289bc-50f5-4ed6-9190-55f5f303a2e7 req-0c35abbb-7c16-4d66-b7f4-ffe9614ee18c b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] Acquiring lock "e811a931-a3de-4684-8b2f-e916788f6ea9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  9 09:57:49 compute-1 nova_compute[162974]: 2025-10-09 09:57:49.854 2 DEBUG oslo_concurrency.lockutils [req-5ab289bc-50f5-4ed6-9190-55f5f303a2e7 req-0c35abbb-7c16-4d66-b7f4-ffe9614ee18c b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] Lock "e811a931-a3de-4684-8b2f-e916788f6ea9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  9 09:57:49 compute-1 nova_compute[162974]: 2025-10-09 09:57:49.854 2 DEBUG oslo_concurrency.lockutils [req-5ab289bc-50f5-4ed6-9190-55f5f303a2e7 req-0c35abbb-7c16-4d66-b7f4-ffe9614ee18c b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] Lock "e811a931-a3de-4684-8b2f-e916788f6ea9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  9 09:57:49 compute-1 nova_compute[162974]: 2025-10-09 09:57:49.854 2 DEBUG nova.compute.manager [req-5ab289bc-50f5-4ed6-9190-55f5f303a2e7 req-0c35abbb-7c16-4d66-b7f4-ffe9614ee18c b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: e811a931-a3de-4684-8b2f-e916788f6ea9] Processing event network-vif-plugged-5fdcca80-237d-4123-b2d6-a46f90186d0b _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct  9 09:57:49 compute-1 nova_compute[162974]: 2025-10-09 09:57:49.914 2 WARNING nova.virt.libvirt.driver [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  9 09:57:49 compute-1 nova_compute[162974]: 2025-10-09 09:57:49.916 2 DEBUG nova.compute.resource_tracker [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=5054MB free_disk=59.967525482177734GB free_vcpus=4 pci_devices=[{"dev_id": "pci_0000_00_03_3", "address": "0000:00:03.3", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_0", "address": "0000:00:1f.0", "product_id": "2918", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2918", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_4", "address": "0000:00:03.4", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_7", "address": "0000:00:03.7", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_3", "address": "0000:00:02.3", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_7", "address": "0000:00:02.7", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_05_00_0", "address": "0000:05:00.0", "product_id": "1045", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1045", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_2", "address": "0000:00:03.2", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_6", "address": "0000:00:02.6", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": 
"label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_3", "address": "0000:00:1f.3", "product_id": "2930", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2930", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_2", "address": "0000:00:1f.2", "product_id": "2922", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2922", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_03_00_0", "address": "0000:03:00.0", "product_id": "1041", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1041", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_2", "address": "0000:00:02.2", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_5", "address": "0000:00:02.5", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_04_00_0", "address": "0000:04:00.0", "product_id": "1042", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1042", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_5", "address": "0000:00:03.5", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_07_00_0", "address": "0000:07:00.0", "product_id": "1041", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1041", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_1", "address": "0000:00:03.1", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": 
"0000:00:00.0", "product_id": "29c0", "vendor_id": "8086", "numa_node": null, "label": "label_8086_29c0", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_06_00_0", "address": "0000:06:00.0", "product_id": "1044", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1044", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_02_01_0", "address": "0000:02:01.0", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_6", "address": "0000:00:03.6", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_1", "address": "0000:00:02.1", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_4", "address": "0000:00:02.4", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_01_00_0", "address": "0000:01:00.0", "product_id": "000e", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000e", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  9 09:57:49 compute-1 nova_compute[162974]: 2025-10-09 09:57:49.916 2 DEBUG oslo_concurrency.lockutils [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  9 09:57:49 compute-1 nova_compute[162974]: 2025-10-09 09:57:49.916 2 DEBUG oslo_concurrency.lockutils [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  9 09:57:49 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:57:49 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000011s ======
Oct  9 09:57:49 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:57:49.963 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct  9 09:57:49 compute-1 nova_compute[162974]: 2025-10-09 09:57:49.982 2 DEBUG nova.compute.resource_tracker [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Instance e811a931-a3de-4684-8b2f-e916788f6ea9 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct  9 09:57:49 compute-1 nova_compute[162974]: 2025-10-09 09:57:49.983 2 DEBUG nova.compute.resource_tracker [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Total usable vcpus: 4, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  9 09:57:49 compute-1 nova_compute[162974]: 2025-10-09 09:57:49.983 2 DEBUG nova.compute.resource_tracker [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7680MB used_ram=640MB phys_disk=59GB used_disk=1GB total_vcpus=4 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  9 09:57:50 compute-1 nova_compute[162974]: 2025-10-09 09:57:50.011 2 DEBUG oslo_concurrency.processutils [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  9 09:57:50 compute-1 podman[167813]: 2025-10-09 09:57:50.149661685 +0000 UTC m=+0.038714668 container create 50297579b31382e92730304f2bba22f57706ac2a767f0dbe9a8ad87e8501a79f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-48ce5fca-3386-4b8a-82e2-88fc71a94881, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Oct  9 09:57:50 compute-1 systemd[1]: Started libpod-conmon-50297579b31382e92730304f2bba22f57706ac2a767f0dbe9a8ad87e8501a79f.scope.
Oct  9 09:57:50 compute-1 systemd[1]: Started libcrun container.
Oct  9 09:57:50 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/596f16e2f74877f4af0737f9ce5c377193e245dbce014e179b5c36f8fe3efb0f/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct  9 09:57:50 compute-1 podman[167813]: 2025-10-09 09:57:50.202863794 +0000 UTC m=+0.091916787 container init 50297579b31382e92730304f2bba22f57706ac2a767f0dbe9a8ad87e8501a79f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-48ce5fca-3386-4b8a-82e2-88fc71a94881, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS)
Oct  9 09:57:50 compute-1 podman[167813]: 2025-10-09 09:57:50.207757152 +0000 UTC m=+0.096810134 container start 50297579b31382e92730304f2bba22f57706ac2a767f0dbe9a8ad87e8501a79f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-48ce5fca-3386-4b8a-82e2-88fc71a94881, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct  9 09:57:50 compute-1 podman[167813]: 2025-10-09 09:57:50.130936333 +0000 UTC m=+0.019989326 image pull 26280da617d52ac64ac1fa9a18a315d65ac237c1373028f8064008a821dbfd8d quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7
Oct  9 09:57:50 compute-1 neutron-haproxy-ovnmeta-48ce5fca-3386-4b8a-82e2-88fc71a94881[167834]: [NOTICE]   (167838) : New worker (167840) forked
Oct  9 09:57:50 compute-1 neutron-haproxy-ovnmeta-48ce5fca-3386-4b8a-82e2-88fc71a94881[167834]: [NOTICE]   (167838) : Loading success.
Oct  9 09:57:50 compute-1 nova_compute[162974]: 2025-10-09 09:57:50.383 2 DEBUG oslo_concurrency.processutils [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.373s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  9 09:57:50 compute-1 nova_compute[162974]: 2025-10-09 09:57:50.388 2 DEBUG nova.compute.provider_tree [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Inventory has not changed in ProviderTree for provider: 79aa81b0-5a5d-4643-a355-ec5461cb321a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  9 09:57:50 compute-1 nova_compute[162974]: 2025-10-09 09:57:50.394 2 DEBUG nova.virt.driver [None req-433ad811-a058-4508-a429-c5f40f506b5b - - - - - -] Emitting event <LifecycleEvent: 1760003870.3939822, e811a931-a3de-4684-8b2f-e916788f6ea9 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  9 09:57:50 compute-1 nova_compute[162974]: 2025-10-09 09:57:50.394 2 INFO nova.compute.manager [None req-433ad811-a058-4508-a429-c5f40f506b5b - - - - - -] [instance: e811a931-a3de-4684-8b2f-e916788f6ea9] VM Started (Lifecycle Event)#033[00m
Oct  9 09:57:50 compute-1 nova_compute[162974]: 2025-10-09 09:57:50.396 2 DEBUG nova.compute.manager [None req-9b752aa4-6344-4720-be3c-a24dbc83691b 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: e811a931-a3de-4684-8b2f-e916788f6ea9] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct  9 09:57:50 compute-1 nova_compute[162974]: 2025-10-09 09:57:50.398 2 DEBUG nova.virt.libvirt.driver [None req-9b752aa4-6344-4720-be3c-a24dbc83691b 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: e811a931-a3de-4684-8b2f-e916788f6ea9] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct  9 09:57:50 compute-1 nova_compute[162974]: 2025-10-09 09:57:50.400 2 INFO nova.virt.libvirt.driver [-] [instance: e811a931-a3de-4684-8b2f-e916788f6ea9] Instance spawned successfully.#033[00m
Oct  9 09:57:50 compute-1 nova_compute[162974]: 2025-10-09 09:57:50.400 2 DEBUG nova.virt.libvirt.driver [None req-9b752aa4-6344-4720-be3c-a24dbc83691b 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: e811a931-a3de-4684-8b2f-e916788f6ea9] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct  9 09:57:50 compute-1 nova_compute[162974]: 2025-10-09 09:57:50.413 2 DEBUG nova.scheduler.client.report [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Inventory has not changed for provider 79aa81b0-5a5d-4643-a355-ec5461cb321a based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 4, 'reserved': 0, 'min_unit': 1, 'max_unit': 4, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  9 09:57:50 compute-1 nova_compute[162974]: 2025-10-09 09:57:50.419 2 DEBUG nova.compute.manager [None req-433ad811-a058-4508-a429-c5f40f506b5b - - - - - -] [instance: e811a931-a3de-4684-8b2f-e916788f6ea9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  9 09:57:50 compute-1 nova_compute[162974]: 2025-10-09 09:57:50.422 2 DEBUG nova.virt.libvirt.driver [None req-9b752aa4-6344-4720-be3c-a24dbc83691b 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: e811a931-a3de-4684-8b2f-e916788f6ea9] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  9 09:57:50 compute-1 nova_compute[162974]: 2025-10-09 09:57:50.422 2 DEBUG nova.virt.libvirt.driver [None req-9b752aa4-6344-4720-be3c-a24dbc83691b 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: e811a931-a3de-4684-8b2f-e916788f6ea9] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  9 09:57:50 compute-1 nova_compute[162974]: 2025-10-09 09:57:50.422 2 DEBUG nova.virt.libvirt.driver [None req-9b752aa4-6344-4720-be3c-a24dbc83691b 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: e811a931-a3de-4684-8b2f-e916788f6ea9] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  9 09:57:50 compute-1 nova_compute[162974]: 2025-10-09 09:57:50.423 2 DEBUG nova.virt.libvirt.driver [None req-9b752aa4-6344-4720-be3c-a24dbc83691b 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: e811a931-a3de-4684-8b2f-e916788f6ea9] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  9 09:57:50 compute-1 nova_compute[162974]: 2025-10-09 09:57:50.423 2 DEBUG nova.virt.libvirt.driver [None req-9b752aa4-6344-4720-be3c-a24dbc83691b 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: e811a931-a3de-4684-8b2f-e916788f6ea9] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  9 09:57:50 compute-1 nova_compute[162974]: 2025-10-09 09:57:50.423 2 DEBUG nova.virt.libvirt.driver [None req-9b752aa4-6344-4720-be3c-a24dbc83691b 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: e811a931-a3de-4684-8b2f-e916788f6ea9] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  9 09:57:50 compute-1 nova_compute[162974]: 2025-10-09 09:57:50.427 2 DEBUG nova.compute.manager [None req-433ad811-a058-4508-a429-c5f40f506b5b - - - - - -] [instance: e811a931-a3de-4684-8b2f-e916788f6ea9] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  9 09:57:50 compute-1 nova_compute[162974]: 2025-10-09 09:57:50.431 2 DEBUG nova.compute.resource_tracker [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  9 09:57:50 compute-1 nova_compute[162974]: 2025-10-09 09:57:50.431 2 DEBUG oslo_concurrency.lockutils [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.515s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  9 09:57:50 compute-1 nova_compute[162974]: 2025-10-09 09:57:50.459 2 INFO nova.compute.manager [None req-433ad811-a058-4508-a429-c5f40f506b5b - - - - - -] [instance: e811a931-a3de-4684-8b2f-e916788f6ea9] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  9 09:57:50 compute-1 nova_compute[162974]: 2025-10-09 09:57:50.459 2 DEBUG nova.virt.driver [None req-433ad811-a058-4508-a429-c5f40f506b5b - - - - - -] Emitting event <LifecycleEvent: 1760003870.394087, e811a931-a3de-4684-8b2f-e916788f6ea9 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  9 09:57:50 compute-1 nova_compute[162974]: 2025-10-09 09:57:50.460 2 INFO nova.compute.manager [None req-433ad811-a058-4508-a429-c5f40f506b5b - - - - - -] [instance: e811a931-a3de-4684-8b2f-e916788f6ea9] VM Paused (Lifecycle Event)#033[00m
Oct  9 09:57:50 compute-1 nova_compute[162974]: 2025-10-09 09:57:50.474 2 DEBUG nova.compute.manager [None req-433ad811-a058-4508-a429-c5f40f506b5b - - - - - -] [instance: e811a931-a3de-4684-8b2f-e916788f6ea9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  9 09:57:50 compute-1 nova_compute[162974]: 2025-10-09 09:57:50.476 2 DEBUG nova.virt.driver [None req-433ad811-a058-4508-a429-c5f40f506b5b - - - - - -] Emitting event <LifecycleEvent: 1760003870.3977108, e811a931-a3de-4684-8b2f-e916788f6ea9 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  9 09:57:50 compute-1 nova_compute[162974]: 2025-10-09 09:57:50.476 2 INFO nova.compute.manager [None req-433ad811-a058-4508-a429-c5f40f506b5b - - - - - -] [instance: e811a931-a3de-4684-8b2f-e916788f6ea9] VM Resumed (Lifecycle Event)#033[00m
Oct  9 09:57:50 compute-1 nova_compute[162974]: 2025-10-09 09:57:50.483 2 INFO nova.compute.manager [None req-9b752aa4-6344-4720-be3c-a24dbc83691b 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: e811a931-a3de-4684-8b2f-e916788f6ea9] Took 11.56 seconds to spawn the instance on the hypervisor.#033[00m
Oct  9 09:57:50 compute-1 nova_compute[162974]: 2025-10-09 09:57:50.483 2 DEBUG nova.compute.manager [None req-9b752aa4-6344-4720-be3c-a24dbc83691b 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: e811a931-a3de-4684-8b2f-e916788f6ea9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  9 09:57:50 compute-1 nova_compute[162974]: 2025-10-09 09:57:50.498 2 DEBUG nova.compute.manager [None req-433ad811-a058-4508-a429-c5f40f506b5b - - - - - -] [instance: e811a931-a3de-4684-8b2f-e916788f6ea9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  9 09:57:50 compute-1 nova_compute[162974]: 2025-10-09 09:57:50.500 2 DEBUG nova.compute.manager [None req-433ad811-a058-4508-a429-c5f40f506b5b - - - - - -] [instance: e811a931-a3de-4684-8b2f-e916788f6ea9] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  9 09:57:50 compute-1 nova_compute[162974]: 2025-10-09 09:57:50.520 2 INFO nova.compute.manager [None req-433ad811-a058-4508-a429-c5f40f506b5b - - - - - -] [instance: e811a931-a3de-4684-8b2f-e916788f6ea9] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  9 09:57:50 compute-1 nova_compute[162974]: 2025-10-09 09:57:50.542 2 INFO nova.compute.manager [None req-9b752aa4-6344-4720-be3c-a24dbc83691b 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: e811a931-a3de-4684-8b2f-e916788f6ea9] Took 12.27 seconds to build instance.#033[00m
Oct  9 09:57:50 compute-1 nova_compute[162974]: 2025-10-09 09:57:50.556 2 DEBUG oslo_concurrency.lockutils [None req-9b752aa4-6344-4720-be3c-a24dbc83691b 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Lock "e811a931-a3de-4684-8b2f-e916788f6ea9" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 12.334s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  9 09:57:50 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:57:50 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:57:50 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:57:50 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:57:50.811 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:57:51 compute-1 nova_compute[162974]: 2025-10-09 09:57:51.431 2 DEBUG oslo_service.periodic_task [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  9 09:57:51 compute-1 nova_compute[162974]: 2025-10-09 09:57:51.431 2 DEBUG oslo_service.periodic_task [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  9 09:57:51 compute-1 nova_compute[162974]: 2025-10-09 09:57:51.432 2 DEBUG oslo_service.periodic_task [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  9 09:57:51 compute-1 nova_compute[162974]: 2025-10-09 09:57:51.432 2 DEBUG nova.compute.manager [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  9 09:57:51 compute-1 nova_compute[162974]: 2025-10-09 09:57:51.923 2 DEBUG nova.compute.manager [req-5e9584d9-12ca-4074-9d02-3d4811e073d9 req-531e81c4-6b7f-495c-adf3-25ac264a5aa7 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: e811a931-a3de-4684-8b2f-e916788f6ea9] Received event network-vif-plugged-5fdcca80-237d-4123-b2d6-a46f90186d0b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  9 09:57:51 compute-1 nova_compute[162974]: 2025-10-09 09:57:51.923 2 DEBUG oslo_concurrency.lockutils [req-5e9584d9-12ca-4074-9d02-3d4811e073d9 req-531e81c4-6b7f-495c-adf3-25ac264a5aa7 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] Acquiring lock "e811a931-a3de-4684-8b2f-e916788f6ea9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  9 09:57:51 compute-1 nova_compute[162974]: 2025-10-09 09:57:51.924 2 DEBUG oslo_concurrency.lockutils [req-5e9584d9-12ca-4074-9d02-3d4811e073d9 req-531e81c4-6b7f-495c-adf3-25ac264a5aa7 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] Lock "e811a931-a3de-4684-8b2f-e916788f6ea9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  9 09:57:51 compute-1 nova_compute[162974]: 2025-10-09 09:57:51.924 2 DEBUG oslo_concurrency.lockutils [req-5e9584d9-12ca-4074-9d02-3d4811e073d9 req-531e81c4-6b7f-495c-adf3-25ac264a5aa7 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] Lock "e811a931-a3de-4684-8b2f-e916788f6ea9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  9 09:57:51 compute-1 nova_compute[162974]: 2025-10-09 09:57:51.924 2 DEBUG nova.compute.manager [req-5e9584d9-12ca-4074-9d02-3d4811e073d9 req-531e81c4-6b7f-495c-adf3-25ac264a5aa7 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: e811a931-a3de-4684-8b2f-e916788f6ea9] No waiting events found dispatching network-vif-plugged-5fdcca80-237d-4123-b2d6-a46f90186d0b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  9 09:57:51 compute-1 nova_compute[162974]: 2025-10-09 09:57:51.924 2 WARNING nova.compute.manager [req-5e9584d9-12ca-4074-9d02-3d4811e073d9 req-531e81c4-6b7f-495c-adf3-25ac264a5aa7 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: e811a931-a3de-4684-8b2f-e916788f6ea9] Received unexpected event network-vif-plugged-5fdcca80-237d-4123-b2d6-a46f90186d0b for instance with vm_state active and task_state None.#033[00m
Oct  9 09:57:51 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:57:51 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:57:51 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:57:51.966 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:57:52 compute-1 nova_compute[162974]: 2025-10-09 09:57:52.698 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 09:57:52 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:57:52 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:57:52 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:57:52.815 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:57:53 compute-1 nova_compute[162974]: 2025-10-09 09:57:53.113 2 DEBUG oslo_service.periodic_task [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  9 09:57:53 compute-1 nova_compute[162974]: 2025-10-09 09:57:53.114 2 DEBUG nova.compute.manager [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  9 09:57:53 compute-1 nova_compute[162974]: 2025-10-09 09:57:53.114 2 DEBUG nova.compute.manager [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct  9 09:57:53 compute-1 nova_compute[162974]: 2025-10-09 09:57:53.401 2 DEBUG oslo_concurrency.lockutils [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Acquiring lock "refresh_cache-e811a931-a3de-4684-8b2f-e916788f6ea9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  9 09:57:53 compute-1 nova_compute[162974]: 2025-10-09 09:57:53.402 2 DEBUG oslo_concurrency.lockutils [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Acquired lock "refresh_cache-e811a931-a3de-4684-8b2f-e916788f6ea9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  9 09:57:53 compute-1 nova_compute[162974]: 2025-10-09 09:57:53.403 2 DEBUG nova.network.neutron [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] [instance: e811a931-a3de-4684-8b2f-e916788f6ea9] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Oct  9 09:57:53 compute-1 nova_compute[162974]: 2025-10-09 09:57:53.403 2 DEBUG nova.objects.instance [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Lazy-loading 'info_cache' on Instance uuid e811a931-a3de-4684-8b2f-e916788f6ea9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  9 09:57:53 compute-1 ovn_controller[62080]: 2025-10-09T09:57:53Z|00060|binding|INFO|Releasing lport b85a0af7-8e0c-4129-9420-36103d8f1eb6 from this chassis (sb_readonly=0)
Oct  9 09:57:53 compute-1 NetworkManager[982]: <info>  [1760003873.6378] manager: (patch-br-int-to-provnet-ceb5df48-9471-46cc-b494-923d3260d7ae): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/47)
Oct  9 09:57:53 compute-1 NetworkManager[982]: <info>  [1760003873.6386] manager: (patch-provnet-ceb5df48-9471-46cc-b494-923d3260d7ae-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/48)
Oct  9 09:57:53 compute-1 nova_compute[162974]: 2025-10-09 09:57:53.644 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 09:57:53 compute-1 ovn_controller[62080]: 2025-10-09T09:57:53Z|00061|binding|INFO|Releasing lport b85a0af7-8e0c-4129-9420-36103d8f1eb6 from this chassis (sb_readonly=0)
Oct  9 09:57:53 compute-1 nova_compute[162974]: 2025-10-09 09:57:53.674 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 09:57:53 compute-1 nova_compute[162974]: 2025-10-09 09:57:53.677 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 09:57:53 compute-1 nova_compute[162974]: 2025-10-09 09:57:53.920 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 09:57:53 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:57:53 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:57:53 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:57:53.968 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:57:54 compute-1 ceph-mon[9795]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct  9 09:57:54 compute-1 ceph-mon[9795]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3718288945' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  9 09:57:54 compute-1 nova_compute[162974]: 2025-10-09 09:57:54.425 2 DEBUG nova.compute.manager [req-222cdabb-6f71-4c2a-805c-ebf374f1b609 req-d2b90c39-908f-4f17-8039-f88cc7ffd9a9 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: e811a931-a3de-4684-8b2f-e916788f6ea9] Received event network-changed-5fdcca80-237d-4123-b2d6-a46f90186d0b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  9 09:57:54 compute-1 nova_compute[162974]: 2025-10-09 09:57:54.425 2 DEBUG nova.compute.manager [req-222cdabb-6f71-4c2a-805c-ebf374f1b609 req-d2b90c39-908f-4f17-8039-f88cc7ffd9a9 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: e811a931-a3de-4684-8b2f-e916788f6ea9] Refreshing instance network info cache due to event network-changed-5fdcca80-237d-4123-b2d6-a46f90186d0b. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  9 09:57:54 compute-1 nova_compute[162974]: 2025-10-09 09:57:54.426 2 DEBUG oslo_concurrency.lockutils [req-222cdabb-6f71-4c2a-805c-ebf374f1b609 req-d2b90c39-908f-4f17-8039-f88cc7ffd9a9 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] Acquiring lock "refresh_cache-e811a931-a3de-4684-8b2f-e916788f6ea9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  9 09:57:54 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:57:54 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:57:54 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:57:54.818 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:57:54 compute-1 nova_compute[162974]: 2025-10-09 09:57:54.972 2 DEBUG nova.network.neutron [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] [instance: e811a931-a3de-4684-8b2f-e916788f6ea9] Updating instance_info_cache with network_info: [{"id": "5fdcca80-237d-4123-b2d6-a46f90186d0b", "address": "fa:16:3e:00:48:22", "network": {"id": "48ce5fca-3386-4b8a-82e2-88fc71a94881", "bridge": "br-int", "label": "tempest-network-smoke--1247128788", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.236", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c69d102fb5504f48809f5fc47f1cb831", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5fdcca80-23", "ovs_interfaceid": "5fdcca80-237d-4123-b2d6-a46f90186d0b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  9 09:57:54 compute-1 nova_compute[162974]: 2025-10-09 09:57:54.987 2 DEBUG oslo_concurrency.lockutils [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Releasing lock "refresh_cache-e811a931-a3de-4684-8b2f-e916788f6ea9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  9 09:57:54 compute-1 nova_compute[162974]: 2025-10-09 09:57:54.988 2 DEBUG nova.compute.manager [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] [instance: e811a931-a3de-4684-8b2f-e916788f6ea9] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Oct  9 09:57:54 compute-1 nova_compute[162974]: 2025-10-09 09:57:54.988 2 DEBUG oslo_concurrency.lockutils [req-222cdabb-6f71-4c2a-805c-ebf374f1b609 req-d2b90c39-908f-4f17-8039-f88cc7ffd9a9 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] Acquired lock "refresh_cache-e811a931-a3de-4684-8b2f-e916788f6ea9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  9 09:57:54 compute-1 nova_compute[162974]: 2025-10-09 09:57:54.988 2 DEBUG nova.network.neutron [req-222cdabb-6f71-4c2a-805c-ebf374f1b609 req-d2b90c39-908f-4f17-8039-f88cc7ffd9a9 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: e811a931-a3de-4684-8b2f-e916788f6ea9] Refreshing network info cache for port 5fdcca80-237d-4123-b2d6-a46f90186d0b _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  9 09:57:54 compute-1 nova_compute[162974]: 2025-10-09 09:57:54.989 2 DEBUG oslo_service.periodic_task [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  9 09:57:54 compute-1 nova_compute[162974]: 2025-10-09 09:57:54.989 2 DEBUG oslo_service.periodic_task [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  9 09:57:54 compute-1 nova_compute[162974]: 2025-10-09 09:57:54.989 2 DEBUG oslo_service.periodic_task [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  9 09:57:55 compute-1 podman[167876]: 2025-10-09 09:57:55.540428542 +0000 UTC m=+0.048246464 container health_status 5e1150c93314d5b38815fc8130e03b03cc91771cd442ce724d63cd33a7373f75 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251001, tcib_managed=true)
Oct  9 09:57:55 compute-1 podman[167877]: 2025-10-09 09:57:55.54625549 +0000 UTC m=+0.051515218 container health_status a9976f6a2312783f72012e1ec9b07f14297aa62297711f47c0d257996be3427a (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible)
Oct  9 09:57:55 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:57:55 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:57:55 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:57:55 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:57:55.970 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:57:55 compute-1 nova_compute[162974]: 2025-10-09 09:57:55.984 2 DEBUG nova.network.neutron [req-222cdabb-6f71-4c2a-805c-ebf374f1b609 req-d2b90c39-908f-4f17-8039-f88cc7ffd9a9 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: e811a931-a3de-4684-8b2f-e916788f6ea9] Updated VIF entry in instance network info cache for port 5fdcca80-237d-4123-b2d6-a46f90186d0b. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  9 09:57:55 compute-1 nova_compute[162974]: 2025-10-09 09:57:55.984 2 DEBUG nova.network.neutron [req-222cdabb-6f71-4c2a-805c-ebf374f1b609 req-d2b90c39-908f-4f17-8039-f88cc7ffd9a9 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: e811a931-a3de-4684-8b2f-e916788f6ea9] Updating instance_info_cache with network_info: [{"id": "5fdcca80-237d-4123-b2d6-a46f90186d0b", "address": "fa:16:3e:00:48:22", "network": {"id": "48ce5fca-3386-4b8a-82e2-88fc71a94881", "bridge": "br-int", "label": "tempest-network-smoke--1247128788", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.236", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c69d102fb5504f48809f5fc47f1cb831", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5fdcca80-23", "ovs_interfaceid": "5fdcca80-237d-4123-b2d6-a46f90186d0b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  9 09:57:55 compute-1 nova_compute[162974]: 2025-10-09 09:57:55.997 2 DEBUG oslo_concurrency.lockutils [req-222cdabb-6f71-4c2a-805c-ebf374f1b609 req-d2b90c39-908f-4f17-8039-f88cc7ffd9a9 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] Releasing lock "refresh_cache-e811a931-a3de-4684-8b2f-e916788f6ea9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  9 09:57:56 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:57:56 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:57:56 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:57:56.821 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:57:57 compute-1 nova_compute[162974]: 2025-10-09 09:57:57.699 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 09:57:57 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:57:57 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:57:57 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:57:57.973 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:57:58 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:57:58 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:57:58 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:57:58.822 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:57:58 compute-1 nova_compute[162974]: 2025-10-09 09:57:58.922 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 09:57:59 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:57:59 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:57:59 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:57:59.976 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:58:00 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:58:00 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:58:00 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:58:00 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:58:00.824 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:58:01 compute-1 ovn_controller[62080]: 2025-10-09T09:58:01Z|00010|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:00:48:22 10.100.0.3
Oct  9 09:58:01 compute-1 ovn_controller[62080]: 2025-10-09T09:58:01Z|00011|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:00:48:22 10.100.0.3
Oct  9 09:58:01 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:58:01 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:58:01 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:58:01.978 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:58:02 compute-1 nova_compute[162974]: 2025-10-09 09:58:02.700 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 09:58:02 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:58:02 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:58:02 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:58:02.826 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:58:03 compute-1 nova_compute[162974]: 2025-10-09 09:58:03.924 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 09:58:03 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:58:03 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 09:58:03 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:58:03.980 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 09:58:04 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:58:04 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:58:04 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:58:04.827 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:58:05 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:58:05 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:58:05 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:58:05 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:58:05.983 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:58:06 compute-1 nova_compute[162974]: 2025-10-09 09:58:06.780 2 INFO nova.compute.manager [None req-b9af8cfb-f0a5-41ac-beff-3ef3de796fb0 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: e811a931-a3de-4684-8b2f-e916788f6ea9] Get console output#033[00m
Oct  9 09:58:06 compute-1 nova_compute[162974]: 2025-10-09 09:58:06.784 1023 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes#033[00m
Oct  9 09:58:06 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:58:06 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:58:06 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:58:06.829 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:58:07 compute-1 podman[167919]: 2025-10-09 09:58:07.557304627 +0000 UTC m=+0.062767366 container health_status 36bf385830b85e085dc3ff9729118ef141f0f1a0fdc2513f7947ab6ab795421e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001)
Oct  9 09:58:07 compute-1 nova_compute[162974]: 2025-10-09 09:58:07.702 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 09:58:07 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:58:07 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:58:07 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:58:07.986 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:58:08 compute-1 nova_compute[162974]: 2025-10-09 09:58:08.758 2 DEBUG nova.compute.manager [req-c622f483-9f94-4a61-9ff1-49b2a3042b79 req-e1adcf68-bbb0-45c5-a178-e0f7a6280c63 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: e811a931-a3de-4684-8b2f-e916788f6ea9] Received event network-changed-5fdcca80-237d-4123-b2d6-a46f90186d0b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  9 09:58:08 compute-1 nova_compute[162974]: 2025-10-09 09:58:08.758 2 DEBUG nova.compute.manager [req-c622f483-9f94-4a61-9ff1-49b2a3042b79 req-e1adcf68-bbb0-45c5-a178-e0f7a6280c63 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: e811a931-a3de-4684-8b2f-e916788f6ea9] Refreshing instance network info cache due to event network-changed-5fdcca80-237d-4123-b2d6-a46f90186d0b. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  9 09:58:08 compute-1 nova_compute[162974]: 2025-10-09 09:58:08.758 2 DEBUG oslo_concurrency.lockutils [req-c622f483-9f94-4a61-9ff1-49b2a3042b79 req-e1adcf68-bbb0-45c5-a178-e0f7a6280c63 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] Acquiring lock "refresh_cache-e811a931-a3de-4684-8b2f-e916788f6ea9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  9 09:58:08 compute-1 nova_compute[162974]: 2025-10-09 09:58:08.758 2 DEBUG oslo_concurrency.lockutils [req-c622f483-9f94-4a61-9ff1-49b2a3042b79 req-e1adcf68-bbb0-45c5-a178-e0f7a6280c63 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] Acquired lock "refresh_cache-e811a931-a3de-4684-8b2f-e916788f6ea9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  9 09:58:08 compute-1 nova_compute[162974]: 2025-10-09 09:58:08.759 2 DEBUG nova.network.neutron [req-c622f483-9f94-4a61-9ff1-49b2a3042b79 req-e1adcf68-bbb0-45c5-a178-e0f7a6280c63 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: e811a931-a3de-4684-8b2f-e916788f6ea9] Refreshing network info cache for port 5fdcca80-237d-4123-b2d6-a46f90186d0b _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  9 09:58:08 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:58:08 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 09:58:08 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:58:08.829 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 09:58:08 compute-1 nova_compute[162974]: 2025-10-09 09:58:08.859 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 09:58:08 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:58:08.859 71059 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=6, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '86:53:6e', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '26:2f:47:35:f4:09'}, ipsec=False) old=SB_Global(nb_cfg=5) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  9 09:58:08 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:58:08.860 71059 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 9 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct  9 09:58:08 compute-1 nova_compute[162974]: 2025-10-09 09:58:08.925 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 09:58:09 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:58:09 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:58:09 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:58:09.989 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:58:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:58:10.036 71059 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  9 09:58:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:58:10.036 71059 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  9 09:58:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:58:10.037 71059 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  9 09:58:10 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:58:10 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:58:10 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 09:58:10 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:58:10.831 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 09:58:10 compute-1 nova_compute[162974]: 2025-10-09 09:58:10.946 2 DEBUG nova.network.neutron [req-c622f483-9f94-4a61-9ff1-49b2a3042b79 req-e1adcf68-bbb0-45c5-a178-e0f7a6280c63 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: e811a931-a3de-4684-8b2f-e916788f6ea9] Updated VIF entry in instance network info cache for port 5fdcca80-237d-4123-b2d6-a46f90186d0b. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  9 09:58:10 compute-1 nova_compute[162974]: 2025-10-09 09:58:10.947 2 DEBUG nova.network.neutron [req-c622f483-9f94-4a61-9ff1-49b2a3042b79 req-e1adcf68-bbb0-45c5-a178-e0f7a6280c63 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: e811a931-a3de-4684-8b2f-e916788f6ea9] Updating instance_info_cache with network_info: [{"id": "5fdcca80-237d-4123-b2d6-a46f90186d0b", "address": "fa:16:3e:00:48:22", "network": {"id": "48ce5fca-3386-4b8a-82e2-88fc71a94881", "bridge": "br-int", "label": "tempest-network-smoke--1247128788", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c69d102fb5504f48809f5fc47f1cb831", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5fdcca80-23", "ovs_interfaceid": "5fdcca80-237d-4123-b2d6-a46f90186d0b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  9 09:58:10 compute-1 nova_compute[162974]: 2025-10-09 09:58:10.962 2 DEBUG oslo_concurrency.lockutils [req-c622f483-9f94-4a61-9ff1-49b2a3042b79 req-e1adcf68-bbb0-45c5-a178-e0f7a6280c63 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] Releasing lock "refresh_cache-e811a931-a3de-4684-8b2f-e916788f6ea9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  9 09:58:11 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Oct  9 09:58:11 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Oct  9 09:58:11 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Oct  9 09:58:11 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  9 09:58:11 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:58:11 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:58:11 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  9 09:58:11 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:58:11 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 09:58:11 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:58:11.990 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 09:58:12 compute-1 nova_compute[162974]: 2025-10-09 09:58:12.703 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 09:58:12 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:58:12 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:58:12 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:58:12.834 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:58:13 compute-1 nova_compute[162974]: 2025-10-09 09:58:13.926 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  9 09:58:13 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:58:13 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:58:13 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:58:13.993 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:58:14 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:58:14 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:58:14 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:58:14.835 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:58:15 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:58:15 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:58:15 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:58:15 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:58:15 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:58:15 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:58:15.995 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:58:16 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:58:16 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:58:16 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:58:16.838 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:58:17 compute-1 podman[168076]: 2025-10-09 09:58:17.535560672 +0000 UTC m=+0.043249079 container health_status dd7a5da700b8ba72ceba21d69aecf00384a339e317089fce3f8212a1183599aa (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, container_name=iscsid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, config_id=iscsid, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']})
Oct  9 09:58:17 compute-1 nova_compute[162974]: 2025-10-09 09:58:17.704 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  9 09:58:17 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:58:17.863 71059 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=1479fb1d-afaa-427a-bdce-40294d3573d2, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '6'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct  9 09:58:18 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:58:18 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:58:18 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:58:17.998 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:58:18 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:58:18 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:58:18 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:58:18.840 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:58:18 compute-1 nova_compute[162974]: 2025-10-09 09:58:18.928 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  9 09:58:20 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:58:20 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:58:20 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:58:20.000 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:58:20 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:58:20 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:58:20 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:58:20 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:58:20.842 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:58:22 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:58:22 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:58:22 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:58:22.003 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:58:22 compute-1 nova_compute[162974]: 2025-10-09 09:58:22.705 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  9 09:58:22 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:58:22 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 09:58:22 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:58:22.844 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 09:58:23 compute-1 nova_compute[162974]: 2025-10-09 09:58:23.929 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  9 09:58:24 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:58:24 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 09:58:24 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:58:24.005 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 09:58:24 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:58:24 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:58:24 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:58:24.846 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:58:25 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:58:26 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:58:26 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:58:26 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:58:26.009 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:58:26 compute-1 podman[168099]: 2025-10-09 09:58:26.534310018 +0000 UTC m=+0.036185023 container health_status a9976f6a2312783f72012e1ec9b07f14297aa62297711f47c0d257996be3427a (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=multipathd, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=multipathd)
Oct  9 09:58:26 compute-1 podman[168098]: 2025-10-09 09:58:26.558354358 +0000 UTC m=+0.062207822 container health_status 5e1150c93314d5b38815fc8130e03b03cc91771cd442ce724d63cd33a7373f75 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251001)
Oct  9 09:58:26 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:58:26 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:58:26 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:58:26.850 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:58:27 compute-1 nova_compute[162974]: 2025-10-09 09:58:27.707 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  9 09:58:28 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:58:28 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:58:28 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:58:28.012 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:58:28 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:58:28 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 09:58:28 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:58:28.850 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 09:58:28 compute-1 nova_compute[162974]: 2025-10-09 09:58:28.931 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  9 09:58:30 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:58:30 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:58:30 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:58:30.015 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:58:30 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:58:30 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:58:30 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 09:58:30 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:58:30.853 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 09:58:32 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:58:32 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 09:58:32 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:58:32.017 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 09:58:32 compute-1 nova_compute[162974]: 2025-10-09 09:58:32.708 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  9 09:58:32 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:58:32 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:58:32 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:58:32.856 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:58:33 compute-1 nova_compute[162974]: 2025-10-09 09:58:33.933 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  9 09:58:34 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:58:34 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:58:34 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:58:34.020 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:58:34 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:58:34 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:58:34 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:58:34.858 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:58:35 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:58:36 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:58:36 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:58:36 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:58:36.023 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:58:36 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:58:36 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:58:36 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:58:36.860 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:58:37 compute-1 nova_compute[162974]: 2025-10-09 09:58:37.710 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  9 09:58:38 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:58:38 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:58:38 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:58:38.026 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:58:38 compute-1 podman[168162]: 2025-10-09 09:58:38.587536016 +0000 UTC m=+0.085116119 container health_status 36bf385830b85e085dc3ff9729118ef141f0f1a0fdc2513f7947ab6ab795421e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_controller, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251001)
Oct  9 09:58:38 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:58:38 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 09:58:38 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:58:38.862 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 09:58:38 compute-1 nova_compute[162974]: 2025-10-09 09:58:38.935 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  9 09:58:40 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:58:40 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:58:40 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:58:40.029 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:58:40 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:58:40 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:58:40 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:58:40 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:58:40.864 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:58:42 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:58:42 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 09:58:42 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:58:42.032 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 09:58:42 compute-1 nova_compute[162974]: 2025-10-09 09:58:42.712 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  9 09:58:42 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:58:42 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:58:42 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:58:42.867 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:58:43 compute-1 nova_compute[162974]: 2025-10-09 09:58:43.937 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  9 09:58:44 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:58:44 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:58:44 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:58:44.035 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:58:44 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:58:44 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:58:44 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:58:44.868 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:58:45 compute-1 nova_compute[162974]: 2025-10-09 09:58:45.213 2 DEBUG oslo_concurrency.lockutils [None req-1748ba80-4011-40be-aeb2-fba77a780837 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Acquiring lock "e811a931-a3de-4684-8b2f-e916788f6ea9" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  9 09:58:45 compute-1 nova_compute[162974]: 2025-10-09 09:58:45.214 2 DEBUG oslo_concurrency.lockutils [None req-1748ba80-4011-40be-aeb2-fba77a780837 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Lock "e811a931-a3de-4684-8b2f-e916788f6ea9" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  9 09:58:45 compute-1 nova_compute[162974]: 2025-10-09 09:58:45.214 2 DEBUG oslo_concurrency.lockutils [None req-1748ba80-4011-40be-aeb2-fba77a780837 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Acquiring lock "e811a931-a3de-4684-8b2f-e916788f6ea9-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  9 09:58:45 compute-1 nova_compute[162974]: 2025-10-09 09:58:45.214 2 DEBUG oslo_concurrency.lockutils [None req-1748ba80-4011-40be-aeb2-fba77a780837 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Lock "e811a931-a3de-4684-8b2f-e916788f6ea9-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  9 09:58:45 compute-1 nova_compute[162974]: 2025-10-09 09:58:45.214 2 DEBUG oslo_concurrency.lockutils [None req-1748ba80-4011-40be-aeb2-fba77a780837 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Lock "e811a931-a3de-4684-8b2f-e916788f6ea9-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  9 09:58:45 compute-1 nova_compute[162974]: 2025-10-09 09:58:45.215 2 INFO nova.compute.manager [None req-1748ba80-4011-40be-aeb2-fba77a780837 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: e811a931-a3de-4684-8b2f-e916788f6ea9] Terminating instance
Oct  9 09:58:45 compute-1 nova_compute[162974]: 2025-10-09 09:58:45.216 2 DEBUG nova.compute.manager [None req-1748ba80-4011-40be-aeb2-fba77a780837 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: e811a931-a3de-4684-8b2f-e916788f6ea9] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct  9 09:58:45 compute-1 kernel: tap5fdcca80-23 (unregistering): left promiscuous mode
Oct  9 09:58:45 compute-1 NetworkManager[982]: <info>  [1760003925.2549] device (tap5fdcca80-23): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  9 09:58:45 compute-1 nova_compute[162974]: 2025-10-09 09:58:45.263 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  9 09:58:45 compute-1 ovn_controller[62080]: 2025-10-09T09:58:45Z|00062|binding|INFO|Releasing lport 5fdcca80-237d-4123-b2d6-a46f90186d0b from this chassis (sb_readonly=0)
Oct  9 09:58:45 compute-1 ovn_controller[62080]: 2025-10-09T09:58:45Z|00063|binding|INFO|Setting lport 5fdcca80-237d-4123-b2d6-a46f90186d0b down in Southbound
Oct  9 09:58:45 compute-1 ovn_controller[62080]: 2025-10-09T09:58:45Z|00064|binding|INFO|Removing iface tap5fdcca80-23 ovn-installed in OVS
Oct  9 09:58:45 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:58:45.275 71059 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:00:48:22 10.100.0.3'], port_security=['fa:16:3e:00:48:22 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'e811a931-a3de-4684-8b2f-e916788f6ea9', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-48ce5fca-3386-4b8a-82e2-88fc71a94881', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c69d102fb5504f48809f5fc47f1cb831', 'neutron:revision_number': '4', 'neutron:security_group_ids': '1ab824ab-8ac2-4d9c-9d6e-9bbdb4458228', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=6496ebe5-cfc3-4a35-b1e6-27021c277fad, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcc797b4850>], logical_port=5fdcca80-237d-4123-b2d6-a46f90186d0b) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fcc797b4850>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct  9 09:58:45 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:58:45.277 71059 INFO neutron.agent.ovn.metadata.agent [-] Port 5fdcca80-237d-4123-b2d6-a46f90186d0b in datapath 48ce5fca-3386-4b8a-82e2-88fc71a94881 unbound from our chassis
Oct  9 09:58:45 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:58:45.279 71059 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 48ce5fca-3386-4b8a-82e2-88fc71a94881, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct  9 09:58:45 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:58:45.281 165637 DEBUG oslo.privsep.daemon [-] privsep: reply[9e429a86-a00d-4575-80cd-3c6c393e4d6c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct  9 09:58:45 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:58:45.283 71059 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-48ce5fca-3386-4b8a-82e2-88fc71a94881 namespace which is not needed anymore
Oct  9 09:58:45 compute-1 nova_compute[162974]: 2025-10-09 09:58:45.284 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  9 09:58:45 compute-1 systemd[1]: machine-qemu\x2d3\x2dinstance\x2d00000004.scope: Deactivated successfully.
Oct  9 09:58:45 compute-1 systemd[1]: machine-qemu\x2d3\x2dinstance\x2d00000004.scope: Consumed 12.844s CPU time.
Oct  9 09:58:45 compute-1 systemd-machined[120683]: Machine qemu-3-instance-00000004 terminated.
Oct  9 09:58:45 compute-1 neutron-haproxy-ovnmeta-48ce5fca-3386-4b8a-82e2-88fc71a94881[167834]: [NOTICE]   (167838) : haproxy version is 2.8.14-c23fe91
Oct  9 09:58:45 compute-1 neutron-haproxy-ovnmeta-48ce5fca-3386-4b8a-82e2-88fc71a94881[167834]: [NOTICE]   (167838) : path to executable is /usr/sbin/haproxy
Oct  9 09:58:45 compute-1 neutron-haproxy-ovnmeta-48ce5fca-3386-4b8a-82e2-88fc71a94881[167834]: [WARNING]  (167838) : Exiting Master process...
Oct  9 09:58:45 compute-1 neutron-haproxy-ovnmeta-48ce5fca-3386-4b8a-82e2-88fc71a94881[167834]: [ALERT]    (167838) : Current worker (167840) exited with code 143 (Terminated)
Oct  9 09:58:45 compute-1 neutron-haproxy-ovnmeta-48ce5fca-3386-4b8a-82e2-88fc71a94881[167834]: [WARNING]  (167838) : All workers exited. Exiting... (0)
Oct  9 09:58:45 compute-1 systemd[1]: libpod-50297579b31382e92730304f2bba22f57706ac2a767f0dbe9a8ad87e8501a79f.scope: Deactivated successfully.
Oct  9 09:58:45 compute-1 podman[168210]: 2025-10-09 09:58:45.416311775 +0000 UTC m=+0.035098777 container died 50297579b31382e92730304f2bba22f57706ac2a767f0dbe9a8ad87e8501a79f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-48ce5fca-3386-4b8a-82e2-88fc71a94881, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct  9 09:58:45 compute-1 nova_compute[162974]: 2025-10-09 09:58:45.435 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 09:58:45 compute-1 nova_compute[162974]: 2025-10-09 09:58:45.439 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 09:58:45 compute-1 nova_compute[162974]: 2025-10-09 09:58:45.447 2 INFO nova.virt.libvirt.driver [-] [instance: e811a931-a3de-4684-8b2f-e916788f6ea9] Instance destroyed successfully.#033[00m
Oct  9 09:58:45 compute-1 nova_compute[162974]: 2025-10-09 09:58:45.448 2 DEBUG nova.objects.instance [None req-1748ba80-4011-40be-aeb2-fba77a780837 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Lazy-loading 'resources' on Instance uuid e811a931-a3de-4684-8b2f-e916788f6ea9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  9 09:58:45 compute-1 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-50297579b31382e92730304f2bba22f57706ac2a767f0dbe9a8ad87e8501a79f-userdata-shm.mount: Deactivated successfully.
Oct  9 09:58:45 compute-1 systemd[1]: var-lib-containers-storage-overlay-596f16e2f74877f4af0737f9ce5c377193e245dbce014e179b5c36f8fe3efb0f-merged.mount: Deactivated successfully.
Oct  9 09:58:45 compute-1 podman[168210]: 2025-10-09 09:58:45.457296423 +0000 UTC m=+0.076083426 container cleanup 50297579b31382e92730304f2bba22f57706ac2a767f0dbe9a8ad87e8501a79f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-48ce5fca-3386-4b8a-82e2-88fc71a94881, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3)
Oct  9 09:58:45 compute-1 nova_compute[162974]: 2025-10-09 09:58:45.466 2 DEBUG nova.virt.libvirt.vif [None req-1748ba80-4011-40be-aeb2-fba77a780837 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-09T09:57:37Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1899878609',display_name='tempest-TestNetworkBasicOps-server-1899878609',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1899878609',id=4,image_ref='9546778e-959c-466e-9bef-81ace5bd1cc5',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBMZm4C6LRAPtfAr5m77K3NqQxZMtrMltDZaOJjL5VWwqcCmgw5WghdaHagMLuObgYdNXZ08m9cLFMwpCyPUmMwXoTGjd15bkV3f92hF1qRvuScT4iCVTrgjr7uJ/wKpdPQ==',key_name='tempest-TestNetworkBasicOps-1948988860',keypairs=<?>,launch_index=0,launched_at=2025-10-09T09:57:50Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='c69d102fb5504f48809f5fc47f1cb831',ramdisk_id='',reservation_id='r-u8otko1l',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='9546778e-959c-466e-9bef-81ace5bd1cc5',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-74406332',owner_user_name='tempest-TestNetworkBasicOps-74406332-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-09T09:57:50Z,user_data=None,user_id='2351e05157514d1995a1ea4151d12fee',uuid=e811a931-a3de-4684-8b2f-e916788f6ea9,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "5fdcca80-237d-4123-b2d6-a46f90186d0b", "address": "fa:16:3e:00:48:22", "network": {"id": "48ce5fca-3386-4b8a-82e2-88fc71a94881", "bridge": "br-int", "label": "tempest-network-smoke--1247128788", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 
4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c69d102fb5504f48809f5fc47f1cb831", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5fdcca80-23", "ovs_interfaceid": "5fdcca80-237d-4123-b2d6-a46f90186d0b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  9 09:58:45 compute-1 nova_compute[162974]: 2025-10-09 09:58:45.466 2 DEBUG nova.network.os_vif_util [None req-1748ba80-4011-40be-aeb2-fba77a780837 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Converting VIF {"id": "5fdcca80-237d-4123-b2d6-a46f90186d0b", "address": "fa:16:3e:00:48:22", "network": {"id": "48ce5fca-3386-4b8a-82e2-88fc71a94881", "bridge": "br-int", "label": "tempest-network-smoke--1247128788", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c69d102fb5504f48809f5fc47f1cb831", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5fdcca80-23", "ovs_interfaceid": "5fdcca80-237d-4123-b2d6-a46f90186d0b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  9 09:58:45 compute-1 nova_compute[162974]: 2025-10-09 09:58:45.467 2 DEBUG nova.network.os_vif_util [None req-1748ba80-4011-40be-aeb2-fba77a780837 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:00:48:22,bridge_name='br-int',has_traffic_filtering=True,id=5fdcca80-237d-4123-b2d6-a46f90186d0b,network=Network(48ce5fca-3386-4b8a-82e2-88fc71a94881),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5fdcca80-23') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  9 09:58:45 compute-1 nova_compute[162974]: 2025-10-09 09:58:45.467 2 DEBUG os_vif [None req-1748ba80-4011-40be-aeb2-fba77a780837 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:00:48:22,bridge_name='br-int',has_traffic_filtering=True,id=5fdcca80-237d-4123-b2d6-a46f90186d0b,network=Network(48ce5fca-3386-4b8a-82e2-88fc71a94881),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5fdcca80-23') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  9 09:58:45 compute-1 nova_compute[162974]: 2025-10-09 09:58:45.469 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 09:58:45 compute-1 systemd[1]: libpod-conmon-50297579b31382e92730304f2bba22f57706ac2a767f0dbe9a8ad87e8501a79f.scope: Deactivated successfully.
Oct  9 09:58:45 compute-1 nova_compute[162974]: 2025-10-09 09:58:45.469 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5fdcca80-23, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  9 09:58:45 compute-1 nova_compute[162974]: 2025-10-09 09:58:45.471 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 09:58:45 compute-1 nova_compute[162974]: 2025-10-09 09:58:45.474 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  9 09:58:45 compute-1 nova_compute[162974]: 2025-10-09 09:58:45.477 2 INFO os_vif [None req-1748ba80-4011-40be-aeb2-fba77a780837 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:00:48:22,bridge_name='br-int',has_traffic_filtering=True,id=5fdcca80-237d-4123-b2d6-a46f90186d0b,network=Network(48ce5fca-3386-4b8a-82e2-88fc71a94881),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5fdcca80-23')#033[00m
Oct  9 09:58:45 compute-1 podman[168247]: 2025-10-09 09:58:45.521308034 +0000 UTC m=+0.033380638 container remove 50297579b31382e92730304f2bba22f57706ac2a767f0dbe9a8ad87e8501a79f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-48ce5fca-3386-4b8a-82e2-88fc71a94881, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  9 09:58:45 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:58:45.527 165637 DEBUG oslo.privsep.daemon [-] privsep: reply[720d0179-5367-4419-bc10-84db61c90fe8]: (4, ('Thu Oct  9 09:58:45 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-48ce5fca-3386-4b8a-82e2-88fc71a94881 (50297579b31382e92730304f2bba22f57706ac2a767f0dbe9a8ad87e8501a79f)\n50297579b31382e92730304f2bba22f57706ac2a767f0dbe9a8ad87e8501a79f\nThu Oct  9 09:58:45 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-48ce5fca-3386-4b8a-82e2-88fc71a94881 (50297579b31382e92730304f2bba22f57706ac2a767f0dbe9a8ad87e8501a79f)\n50297579b31382e92730304f2bba22f57706ac2a767f0dbe9a8ad87e8501a79f\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  9 09:58:45 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:58:45.529 165637 DEBUG oslo.privsep.daemon [-] privsep: reply[41999993-6852-4203-86e7-b77736f5d515]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  9 09:58:45 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:58:45.530 71059 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap48ce5fca-30, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  9 09:58:45 compute-1 nova_compute[162974]: 2025-10-09 09:58:45.532 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 09:58:45 compute-1 kernel: tap48ce5fca-30: left promiscuous mode
Oct  9 09:58:45 compute-1 nova_compute[162974]: 2025-10-09 09:58:45.549 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 09:58:45 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:58:45.550 165637 DEBUG oslo.privsep.daemon [-] privsep: reply[577a0fe5-1939-49da-b2ca-14e0d3a5a140]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  9 09:58:45 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:58:45.574 165637 DEBUG oslo.privsep.daemon [-] privsep: reply[eb9621c2-8272-4470-8542-caaed53d67f7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  9 09:58:45 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:58:45.575 165637 DEBUG oslo.privsep.daemon [-] privsep: reply[24ceb8ca-9a6f-49a3-afbc-c1007751a59b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  9 09:58:45 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:58:45.591 165637 DEBUG oslo.privsep.daemon [-] privsep: reply[375f1e35-3fbb-4c61-8529-affd292f0a59]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 155316, 'reachable_time': 32465, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 168277, 'error': None, 'target': 'ovnmeta-48ce5fca-3386-4b8a-82e2-88fc71a94881', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  9 09:58:45 compute-1 systemd[1]: run-netns-ovnmeta\x2d48ce5fca\x2d3386\x2d4b8a\x2d82e2\x2d88fc71a94881.mount: Deactivated successfully.
Oct  9 09:58:45 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:58:45.596 71273 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-48ce5fca-3386-4b8a-82e2-88fc71a94881 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct  9 09:58:45 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:58:45.596 71273 DEBUG oslo.privsep.daemon [-] privsep: reply[f6f9f56f-8bb5-4ee3-9be6-820b0e732077]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  9 09:58:45 compute-1 nova_compute[162974]: 2025-10-09 09:58:45.615 2 DEBUG nova.compute.manager [req-57db1058-52c1-4f92-bd84-e45d5d330d53 req-042173fe-920f-4e47-b11f-0edb5ef22f14 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: e811a931-a3de-4684-8b2f-e916788f6ea9] Received event network-vif-unplugged-5fdcca80-237d-4123-b2d6-a46f90186d0b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  9 09:58:45 compute-1 nova_compute[162974]: 2025-10-09 09:58:45.615 2 DEBUG oslo_concurrency.lockutils [req-57db1058-52c1-4f92-bd84-e45d5d330d53 req-042173fe-920f-4e47-b11f-0edb5ef22f14 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] Acquiring lock "e811a931-a3de-4684-8b2f-e916788f6ea9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  9 09:58:45 compute-1 nova_compute[162974]: 2025-10-09 09:58:45.616 2 DEBUG oslo_concurrency.lockutils [req-57db1058-52c1-4f92-bd84-e45d5d330d53 req-042173fe-920f-4e47-b11f-0edb5ef22f14 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] Lock "e811a931-a3de-4684-8b2f-e916788f6ea9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  9 09:58:45 compute-1 nova_compute[162974]: 2025-10-09 09:58:45.616 2 DEBUG oslo_concurrency.lockutils [req-57db1058-52c1-4f92-bd84-e45d5d330d53 req-042173fe-920f-4e47-b11f-0edb5ef22f14 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] Lock "e811a931-a3de-4684-8b2f-e916788f6ea9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  9 09:58:45 compute-1 nova_compute[162974]: 2025-10-09 09:58:45.616 2 DEBUG nova.compute.manager [req-57db1058-52c1-4f92-bd84-e45d5d330d53 req-042173fe-920f-4e47-b11f-0edb5ef22f14 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: e811a931-a3de-4684-8b2f-e916788f6ea9] No waiting events found dispatching network-vif-unplugged-5fdcca80-237d-4123-b2d6-a46f90186d0b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  9 09:58:45 compute-1 nova_compute[162974]: 2025-10-09 09:58:45.617 2 DEBUG nova.compute.manager [req-57db1058-52c1-4f92-bd84-e45d5d330d53 req-042173fe-920f-4e47-b11f-0edb5ef22f14 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: e811a931-a3de-4684-8b2f-e916788f6ea9] Received event network-vif-unplugged-5fdcca80-237d-4123-b2d6-a46f90186d0b for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct  9 09:58:45 compute-1 nova_compute[162974]: 2025-10-09 09:58:45.656 2 INFO nova.virt.libvirt.driver [None req-1748ba80-4011-40be-aeb2-fba77a780837 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: e811a931-a3de-4684-8b2f-e916788f6ea9] Deleting instance files /var/lib/nova/instances/e811a931-a3de-4684-8b2f-e916788f6ea9_del#033[00m
Oct  9 09:58:45 compute-1 nova_compute[162974]: 2025-10-09 09:58:45.657 2 INFO nova.virt.libvirt.driver [None req-1748ba80-4011-40be-aeb2-fba77a780837 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: e811a931-a3de-4684-8b2f-e916788f6ea9] Deletion of /var/lib/nova/instances/e811a931-a3de-4684-8b2f-e916788f6ea9_del complete#033[00m
Oct  9 09:58:45 compute-1 nova_compute[162974]: 2025-10-09 09:58:45.695 2 INFO nova.compute.manager [None req-1748ba80-4011-40be-aeb2-fba77a780837 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: e811a931-a3de-4684-8b2f-e916788f6ea9] Took 0.48 seconds to destroy the instance on the hypervisor.#033[00m
Oct  9 09:58:45 compute-1 nova_compute[162974]: 2025-10-09 09:58:45.695 2 DEBUG oslo.service.loopingcall [None req-1748ba80-4011-40be-aeb2-fba77a780837 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct  9 09:58:45 compute-1 nova_compute[162974]: 2025-10-09 09:58:45.695 2 DEBUG nova.compute.manager [-] [instance: e811a931-a3de-4684-8b2f-e916788f6ea9] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct  9 09:58:45 compute-1 nova_compute[162974]: 2025-10-09 09:58:45.696 2 DEBUG nova.network.neutron [-] [instance: e811a931-a3de-4684-8b2f-e916788f6ea9] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct  9 09:58:45 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:58:46 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:58:46 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 09:58:46 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:58:46.039 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 09:58:46 compute-1 nova_compute[162974]: 2025-10-09 09:58:46.369 2 DEBUG nova.network.neutron [-] [instance: e811a931-a3de-4684-8b2f-e916788f6ea9] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  9 09:58:46 compute-1 nova_compute[162974]: 2025-10-09 09:58:46.378 2 INFO nova.compute.manager [-] [instance: e811a931-a3de-4684-8b2f-e916788f6ea9] Took 0.68 seconds to deallocate network for instance.#033[00m
Oct  9 09:58:46 compute-1 nova_compute[162974]: 2025-10-09 09:58:46.409 2 DEBUG oslo_concurrency.lockutils [None req-1748ba80-4011-40be-aeb2-fba77a780837 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  9 09:58:46 compute-1 nova_compute[162974]: 2025-10-09 09:58:46.409 2 DEBUG oslo_concurrency.lockutils [None req-1748ba80-4011-40be-aeb2-fba77a780837 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  9 09:58:46 compute-1 nova_compute[162974]: 2025-10-09 09:58:46.447 2 DEBUG oslo_concurrency.processutils [None req-1748ba80-4011-40be-aeb2-fba77a780837 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  9 09:58:46 compute-1 ceph-mon[9795]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct  9 09:58:46 compute-1 ceph-mon[9795]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1068777579' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  9 09:58:46 compute-1 nova_compute[162974]: 2025-10-09 09:58:46.807 2 DEBUG oslo_concurrency.processutils [None req-1748ba80-4011-40be-aeb2-fba77a780837 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.360s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  9 09:58:46 compute-1 nova_compute[162974]: 2025-10-09 09:58:46.813 2 DEBUG nova.compute.provider_tree [None req-1748ba80-4011-40be-aeb2-fba77a780837 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Inventory has not changed in ProviderTree for provider: 79aa81b0-5a5d-4643-a355-ec5461cb321a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  9 09:58:46 compute-1 nova_compute[162974]: 2025-10-09 09:58:46.830 2 DEBUG nova.scheduler.client.report [None req-1748ba80-4011-40be-aeb2-fba77a780837 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Inventory has not changed for provider 79aa81b0-5a5d-4643-a355-ec5461cb321a based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 4, 'reserved': 0, 'min_unit': 1, 'max_unit': 4, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  9 09:58:46 compute-1 nova_compute[162974]: 2025-10-09 09:58:46.844 2 DEBUG oslo_concurrency.lockutils [None req-1748ba80-4011-40be-aeb2-fba77a780837 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.435s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  9 09:58:46 compute-1 nova_compute[162974]: 2025-10-09 09:58:46.862 2 INFO nova.scheduler.client.report [None req-1748ba80-4011-40be-aeb2-fba77a780837 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Deleted allocations for instance e811a931-a3de-4684-8b2f-e916788f6ea9#033[00m
Oct  9 09:58:46 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:58:46 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000009s ======
Oct  9 09:58:46 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:58:46.870 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Oct  9 09:58:46 compute-1 nova_compute[162974]: 2025-10-09 09:58:46.906 2 DEBUG oslo_concurrency.lockutils [None req-1748ba80-4011-40be-aeb2-fba77a780837 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Lock "e811a931-a3de-4684-8b2f-e916788f6ea9" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 1.692s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  9 09:58:47 compute-1 nova_compute[162974]: 2025-10-09 09:58:47.675 2 DEBUG nova.compute.manager [req-6264c00b-216d-4eb5-b2f2-a53c1179a367 req-6d3381dd-c195-4d4b-a9a7-2b9084d4a9a0 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: e811a931-a3de-4684-8b2f-e916788f6ea9] Received event network-vif-plugged-5fdcca80-237d-4123-b2d6-a46f90186d0b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  9 09:58:47 compute-1 nova_compute[162974]: 2025-10-09 09:58:47.676 2 DEBUG oslo_concurrency.lockutils [req-6264c00b-216d-4eb5-b2f2-a53c1179a367 req-6d3381dd-c195-4d4b-a9a7-2b9084d4a9a0 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] Acquiring lock "e811a931-a3de-4684-8b2f-e916788f6ea9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  9 09:58:47 compute-1 nova_compute[162974]: 2025-10-09 09:58:47.676 2 DEBUG oslo_concurrency.lockutils [req-6264c00b-216d-4eb5-b2f2-a53c1179a367 req-6d3381dd-c195-4d4b-a9a7-2b9084d4a9a0 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] Lock "e811a931-a3de-4684-8b2f-e916788f6ea9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  9 09:58:47 compute-1 nova_compute[162974]: 2025-10-09 09:58:47.676 2 DEBUG oslo_concurrency.lockutils [req-6264c00b-216d-4eb5-b2f2-a53c1179a367 req-6d3381dd-c195-4d4b-a9a7-2b9084d4a9a0 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] Lock "e811a931-a3de-4684-8b2f-e916788f6ea9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  9 09:58:47 compute-1 nova_compute[162974]: 2025-10-09 09:58:47.676 2 DEBUG nova.compute.manager [req-6264c00b-216d-4eb5-b2f2-a53c1179a367 req-6d3381dd-c195-4d4b-a9a7-2b9084d4a9a0 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: e811a931-a3de-4684-8b2f-e916788f6ea9] No waiting events found dispatching network-vif-plugged-5fdcca80-237d-4123-b2d6-a46f90186d0b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  9 09:58:47 compute-1 nova_compute[162974]: 2025-10-09 09:58:47.677 2 WARNING nova.compute.manager [req-6264c00b-216d-4eb5-b2f2-a53c1179a367 req-6d3381dd-c195-4d4b-a9a7-2b9084d4a9a0 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: e811a931-a3de-4684-8b2f-e916788f6ea9] Received unexpected event network-vif-plugged-5fdcca80-237d-4123-b2d6-a46f90186d0b for instance with vm_state deleted and task_state None.#033[00m
Oct  9 09:58:47 compute-1 nova_compute[162974]: 2025-10-09 09:58:47.677 2 DEBUG nova.compute.manager [req-6264c00b-216d-4eb5-b2f2-a53c1179a367 req-6d3381dd-c195-4d4b-a9a7-2b9084d4a9a0 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: e811a931-a3de-4684-8b2f-e916788f6ea9] Received event network-vif-deleted-5fdcca80-237d-4123-b2d6-a46f90186d0b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  9 09:58:47 compute-1 nova_compute[162974]: 2025-10-09 09:58:47.713 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 09:58:48 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:58:48 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:58:48 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:58:48.043 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:58:48 compute-1 podman[168302]: 2025-10-09 09:58:48.543241346 +0000 UTC m=+0.051489469 container health_status dd7a5da700b8ba72ceba21d69aecf00384a339e317089fce3f8212a1183599aa (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=iscsid, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct  9 09:58:48 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:58:48 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000009s ======
Oct  9 09:58:48 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:58:48.872 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Oct  9 09:58:49 compute-1 nova_compute[162974]: 2025-10-09 09:58:49.885 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 09:58:49 compute-1 nova_compute[162974]: 2025-10-09 09:58:49.966 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 09:58:50 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:58:50 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:58:50 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:58:50.046 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:58:50 compute-1 nova_compute[162974]: 2025-10-09 09:58:50.114 2 DEBUG oslo_service.periodic_task [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  9 09:58:50 compute-1 nova_compute[162974]: 2025-10-09 09:58:50.471 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 09:58:50 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:58:50 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:58:50 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:58:50 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:58:50.874 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:58:51 compute-1 nova_compute[162974]: 2025-10-09 09:58:51.109 2 DEBUG oslo_service.periodic_task [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  9 09:58:51 compute-1 nova_compute[162974]: 2025-10-09 09:58:51.110 2 DEBUG oslo_service.periodic_task [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  9 09:58:51 compute-1 nova_compute[162974]: 2025-10-09 09:58:51.123 2 DEBUG oslo_service.periodic_task [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  9 09:58:51 compute-1 nova_compute[162974]: 2025-10-09 09:58:51.123 2 DEBUG nova.compute.manager [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  9 09:58:51 compute-1 nova_compute[162974]: 2025-10-09 09:58:51.123 2 DEBUG oslo_service.periodic_task [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  9 09:58:51 compute-1 nova_compute[162974]: 2025-10-09 09:58:51.136 2 DEBUG oslo_concurrency.lockutils [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  9 09:58:51 compute-1 nova_compute[162974]: 2025-10-09 09:58:51.136 2 DEBUG oslo_concurrency.lockutils [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  9 09:58:51 compute-1 nova_compute[162974]: 2025-10-09 09:58:51.136 2 DEBUG oslo_concurrency.lockutils [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  9 09:58:51 compute-1 nova_compute[162974]: 2025-10-09 09:58:51.137 2 DEBUG nova.compute.resource_tracker [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  9 09:58:51 compute-1 nova_compute[162974]: 2025-10-09 09:58:51.137 2 DEBUG oslo_concurrency.processutils [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  9 09:58:51 compute-1 ceph-mon[9795]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct  9 09:58:51 compute-1 ceph-mon[9795]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/78441123' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  9 09:58:51 compute-1 nova_compute[162974]: 2025-10-09 09:58:51.492 2 DEBUG oslo_concurrency.processutils [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.355s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  9 09:58:51 compute-1 nova_compute[162974]: 2025-10-09 09:58:51.741 2 WARNING nova.virt.libvirt.driver [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  9 09:58:51 compute-1 nova_compute[162974]: 2025-10-09 09:58:51.742 2 DEBUG nova.compute.resource_tracker [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=5028MB free_disk=59.988277435302734GB free_vcpus=4 pci_devices=[{"dev_id": "pci_0000_00_03_3", "address": "0000:00:03.3", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_0", "address": "0000:00:1f.0", "product_id": "2918", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2918", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_4", "address": "0000:00:03.4", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_7", "address": "0000:00:03.7", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_3", "address": "0000:00:02.3", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_7", "address": "0000:00:02.7", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_05_00_0", "address": "0000:05:00.0", "product_id": "1045", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1045", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_2", "address": "0000:00:03.2", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_6", "address": "0000:00:02.6", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": 
"label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_3", "address": "0000:00:1f.3", "product_id": "2930", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2930", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_2", "address": "0000:00:1f.2", "product_id": "2922", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2922", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_03_00_0", "address": "0000:03:00.0", "product_id": "1041", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1041", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_2", "address": "0000:00:02.2", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_5", "address": "0000:00:02.5", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_04_00_0", "address": "0000:04:00.0", "product_id": "1042", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1042", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_5", "address": "0000:00:03.5", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_07_00_0", "address": "0000:07:00.0", "product_id": "1041", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1041", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_1", "address": "0000:00:03.1", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": 
"0000:00:00.0", "product_id": "29c0", "vendor_id": "8086", "numa_node": null, "label": "label_8086_29c0", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_06_00_0", "address": "0000:06:00.0", "product_id": "1044", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1044", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_02_01_0", "address": "0000:02:01.0", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_6", "address": "0000:00:03.6", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_1", "address": "0000:00:02.1", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_4", "address": "0000:00:02.4", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_01_00_0", "address": "0000:01:00.0", "product_id": "000e", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000e", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  9 09:58:51 compute-1 nova_compute[162974]: 2025-10-09 09:58:51.742 2 DEBUG oslo_concurrency.lockutils [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  9 09:58:51 compute-1 nova_compute[162974]: 2025-10-09 09:58:51.743 2 DEBUG oslo_concurrency.lockutils [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  9 09:58:51 compute-1 nova_compute[162974]: 2025-10-09 09:58:51.786 2 DEBUG nova.compute.resource_tracker [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Total usable vcpus: 4, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  9 09:58:51 compute-1 nova_compute[162974]: 2025-10-09 09:58:51.786 2 DEBUG nova.compute.resource_tracker [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=4 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  9 09:58:51 compute-1 nova_compute[162974]: 2025-10-09 09:58:51.798 2 DEBUG oslo_concurrency.processutils [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  9 09:58:52 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:58:52 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 09:58:52 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:58:52.049 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 09:58:52 compute-1 ceph-mon[9795]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct  9 09:58:52 compute-1 ceph-mon[9795]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/4092825429' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  9 09:58:52 compute-1 nova_compute[162974]: 2025-10-09 09:58:52.152 2 DEBUG oslo_concurrency.processutils [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.354s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  9 09:58:52 compute-1 nova_compute[162974]: 2025-10-09 09:58:52.160 2 DEBUG nova.compute.provider_tree [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Inventory has not changed in ProviderTree for provider: 79aa81b0-5a5d-4643-a355-ec5461cb321a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  9 09:58:52 compute-1 nova_compute[162974]: 2025-10-09 09:58:52.176 2 DEBUG nova.scheduler.client.report [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Inventory has not changed for provider 79aa81b0-5a5d-4643-a355-ec5461cb321a based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 4, 'reserved': 0, 'min_unit': 1, 'max_unit': 4, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  9 09:58:52 compute-1 nova_compute[162974]: 2025-10-09 09:58:52.199 2 DEBUG nova.compute.resource_tracker [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  9 09:58:52 compute-1 nova_compute[162974]: 2025-10-09 09:58:52.200 2 DEBUG oslo_concurrency.lockutils [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.457s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  9 09:58:52 compute-1 nova_compute[162974]: 2025-10-09 09:58:52.715 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 09:58:52 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:58:52 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:58:52 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:58:52.877 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:58:54 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:58:54 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:58:54 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:58:54.052 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:58:54 compute-1 nova_compute[162974]: 2025-10-09 09:58:54.192 2 DEBUG oslo_service.periodic_task [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  9 09:58:54 compute-1 nova_compute[162974]: 2025-10-09 09:58:54.192 2 DEBUG nova.compute.manager [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  9 09:58:54 compute-1 nova_compute[162974]: 2025-10-09 09:58:54.192 2 DEBUG nova.compute.manager [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct  9 09:58:54 compute-1 nova_compute[162974]: 2025-10-09 09:58:54.206 2 DEBUG nova.compute.manager [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Oct  9 09:58:54 compute-1 nova_compute[162974]: 2025-10-09 09:58:54.206 2 DEBUG oslo_service.periodic_task [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  9 09:58:54 compute-1 nova_compute[162974]: 2025-10-09 09:58:54.207 2 DEBUG oslo_service.periodic_task [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  9 09:58:54 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:58:54 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:58:54 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:58:54.878 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:58:55 compute-1 nova_compute[162974]: 2025-10-09 09:58:55.114 2 DEBUG oslo_service.periodic_task [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  9 09:58:55 compute-1 nova_compute[162974]: 2025-10-09 09:58:55.114 2 DEBUG oslo_service.periodic_task [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  9 09:58:55 compute-1 nova_compute[162974]: 2025-10-09 09:58:55.474 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 09:58:55 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:58:56 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:58:56 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 09:58:56 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:58:56.054 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 09:58:56 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:58:56 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:58:56 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:58:56.880 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:58:57 compute-1 podman[168396]: 2025-10-09 09:58:57.52624737 +0000 UTC m=+0.037421204 container health_status 5e1150c93314d5b38815fc8130e03b03cc91771cd442ce724d63cd33a7373f75 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct  9 09:58:57 compute-1 podman[168397]: 2025-10-09 09:58:57.53628571 +0000 UTC m=+0.044827182 container health_status a9976f6a2312783f72012e1ec9b07f14297aa62297711f47c0d257996be3427a (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, org.label-schema.license=GPLv2)
Oct  9 09:58:57 compute-1 nova_compute[162974]: 2025-10-09 09:58:57.716 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 09:58:58 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:58:58 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000009s ======
Oct  9 09:58:58 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:58:58.056 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Oct  9 09:58:58 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:58:58 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 09:58:58 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:58:58.882 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 09:59:00 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:59:00 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:59:00 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:59:00.059 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:59:00 compute-1 nova_compute[162974]: 2025-10-09 09:59:00.446 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760003925.4452076, e811a931-a3de-4684-8b2f-e916788f6ea9 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  9 09:59:00 compute-1 nova_compute[162974]: 2025-10-09 09:59:00.446 2 INFO nova.compute.manager [-] [instance: e811a931-a3de-4684-8b2f-e916788f6ea9] VM Stopped (Lifecycle Event)#033[00m
Oct  9 09:59:00 compute-1 nova_compute[162974]: 2025-10-09 09:59:00.462 2 DEBUG nova.compute.manager [None req-f15cf48f-4f4c-4fd0-8b9a-edb2e6d9b290 - - - - - -] [instance: e811a931-a3de-4684-8b2f-e916788f6ea9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  9 09:59:00 compute-1 nova_compute[162974]: 2025-10-09 09:59:00.476 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 09:59:00 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:59:00 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:59:00 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 09:59:00 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:59:00.885 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 09:59:02 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:59:02 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:59:02 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:59:02.063 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:59:02 compute-1 nova_compute[162974]: 2025-10-09 09:59:02.717 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 09:59:02 compute-1 ceph-mon[9795]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct  9 09:59:02 compute-1 ceph-mon[9795]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/552648784' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  9 09:59:02 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:59:02 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:59:02 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:59:02.887 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:59:04 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:59:04 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:59:04 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:59:04.065 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:59:04 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:59:04 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:59:04 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:59:04.889 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:59:05 compute-1 nova_compute[162974]: 2025-10-09 09:59:05.479 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 09:59:05 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:59:06 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:59:06 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:59:06 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:59:06.068 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:59:06 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:59:06 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 09:59:06 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:59:06.891 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 09:59:07 compute-1 nova_compute[162974]: 2025-10-09 09:59:07.719 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 09:59:08 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:59:08 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:59:08 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:59:08.071 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:59:08 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:59:08 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000009s ======
Oct  9 09:59:08 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:59:08.894 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Oct  9 09:59:09 compute-1 podman[168437]: 2025-10-09 09:59:09.54508863 +0000 UTC m=+0.054754584 container health_status 36bf385830b85e085dc3ff9729118ef141f0f1a0fdc2513f7947ab6ab795421e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Oct  9 09:59:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:59:10.036 71059 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  9 09:59:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:59:10.037 71059 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  9 09:59:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:59:10.037 71059 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  9 09:59:10 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:59:10 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:59:10 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:59:10.074 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:59:10 compute-1 nova_compute[162974]: 2025-10-09 09:59:10.480 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 09:59:10 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:59:10 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:59:10 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 09:59:10 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:59:10.897 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 09:59:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:59:11.124 71059 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=7, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '86:53:6e', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '26:2f:47:35:f4:09'}, ipsec=False) old=SB_Global(nb_cfg=6) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  9 09:59:11 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:59:11.124 71059 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 4 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct  9 09:59:11 compute-1 nova_compute[162974]: 2025-10-09 09:59:11.124 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 09:59:12 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:59:12 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:59:12 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:59:12.076 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:59:12 compute-1 nova_compute[162974]: 2025-10-09 09:59:12.720 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 09:59:12 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:59:12 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:59:12 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:59:12.900 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:59:14 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:59:14 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:59:14 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:59:14.079 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:59:14 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:59:14 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:59:14 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:59:14.903 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:59:15 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:59:15.125 71059 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=1479fb1d-afaa-427a-bdce-40294d3573d2, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '7'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  9 09:59:15 compute-1 nova_compute[162974]: 2025-10-09 09:59:15.483 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 09:59:15 compute-1 podman[168594]: 2025-10-09 09:59:15.601165136 +0000 UTC m=+0.038247692 container exec cafaadfcff4fbda41fcfa94e93876b61b17048007232ab1b066948d6a6dac74a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-crash-compute-1, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  9 09:59:15 compute-1 podman[168611]: 2025-10-09 09:59:15.733753111 +0000 UTC m=+0.046895039 container exec_died cafaadfcff4fbda41fcfa94e93876b61b17048007232ab1b066948d6a6dac74a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-crash-compute-1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, io.buildah.version=1.40.1)
Oct  9 09:59:15 compute-1 podman[168594]: 2025-10-09 09:59:15.737060555 +0000 UTC m=+0.174143110 container exec_died cafaadfcff4fbda41fcfa94e93876b61b17048007232ab1b066948d6a6dac74a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-crash-compute-1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=squid, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Oct  9 09:59:15 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:59:16 compute-1 podman[168691]: 2025-10-09 09:59:16.025901209 +0000 UTC m=+0.034908697 container exec 3deba343924d32000597864cdf8400e1a37662cac2bb278add92b15d21a42d33 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-node-exporter-compute-1, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct  9 09:59:16 compute-1 podman[168691]: 2025-10-09 09:59:16.035850472 +0000 UTC m=+0.044857939 container exec_died 3deba343924d32000597864cdf8400e1a37662cac2bb278add92b15d21a42d33 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-node-exporter-compute-1, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct  9 09:59:16 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:59:16 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 09:59:16 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:59:16.081 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 09:59:16 compute-1 podman[168801]: 2025-10-09 09:59:16.359202092 +0000 UTC m=+0.033990016 container exec 67720561c1a99e6e32e330a552a62a5ac4d38a51a1e32f841b990e11458d61b3 (image=quay.io/ceph/haproxy:2.3, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-haproxy-nfs-cephfs-compute-1-oqhtjo)
Oct  9 09:59:16 compute-1 podman[168801]: 2025-10-09 09:59:16.368905301 +0000 UTC m=+0.043693225 container exec_died 67720561c1a99e6e32e330a552a62a5ac4d38a51a1e32f841b990e11458d61b3 (image=quay.io/ceph/haproxy:2.3, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-haproxy-nfs-cephfs-compute-1-oqhtjo)
Oct  9 09:59:16 compute-1 podman[168854]: 2025-10-09 09:59:16.503759737 +0000 UTC m=+0.033578191 container exec 85d12475a64c9e2afed72ac4ba3c0ca1148840282b925163459f0ea258aea5a3 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-1-zabdum, build-date=2023-02-22T09:23:20, name=keepalived, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, architecture=x86_64, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1793, vendor=Red Hat, Inc., io.k8s.display-name=Keepalived on RHEL 9, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, com.redhat.component=keepalived-container, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, description=keepalived for Ceph, distribution-scope=public, io.buildah.version=1.28.2, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides keepalived on RHEL 9 for Ceph., io.openshift.tags=Ceph keepalived, version=2.2.4)
Oct  9 09:59:16 compute-1 podman[168854]: 2025-10-09 09:59:16.515850206 +0000 UTC m=+0.045668659 container exec_died 85d12475a64c9e2afed72ac4ba3c0ca1148840282b925163459f0ea258aea5a3 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-1-zabdum, io.openshift.tags=Ceph keepalived, vcs-type=git, architecture=x86_64, com.redhat.component=keepalived-container, io.k8s.display-name=Keepalived on RHEL 9, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1793, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, name=keepalived, vendor=Red Hat, Inc., version=2.2.4, description=keepalived for Ceph, io.openshift.expose-services=, io.buildah.version=1.28.2, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Guillaume Abrioux <gabrioux@redhat.com>, distribution-scope=public, summary=Provides keepalived on RHEL 9 for Ceph., build-date=2023-02-22T09:23:20)
Oct  9 09:59:16 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:59:16 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:59:16 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:59:16.906 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:59:17 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:59:17 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:59:17 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:59:17 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:59:17 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  9 09:59:17 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:59:17 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:59:17 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  9 09:59:17 compute-1 nova_compute[162974]: 2025-10-09 09:59:17.722 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 09:59:18 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:59:18 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:59:18 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:59:18.084 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:59:18 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:59:18 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:59:18 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:59:18.909 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:59:19 compute-1 podman[168962]: 2025-10-09 09:59:19.547761005 +0000 UTC m=+0.053926824 container health_status dd7a5da700b8ba72ceba21d69aecf00384a339e317089fce3f8212a1183599aa (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, container_name=iscsid, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true)
Oct  9 09:59:20 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:59:20 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:59:20 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:59:20.087 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:59:20 compute-1 nova_compute[162974]: 2025-10-09 09:59:20.486 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 09:59:20 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:59:20 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:59:20 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:59:20 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:59:20.911 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:59:22 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:59:22 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:59:22 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:59:22.089 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:59:22 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:59:22 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:59:22 compute-1 nova_compute[162974]: 2025-10-09 09:59:22.724 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 09:59:22 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:59:22 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 09:59:22 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:59:22.914 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 09:59:24 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:59:24 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 09:59:24 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:59:24.092 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 09:59:24 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:59:24 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:59:24 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:59:24.917 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:59:25 compute-1 nova_compute[162974]: 2025-10-09 09:59:25.490 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 09:59:25 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:59:26 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:59:26 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:59:26 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:59:26.095 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:59:26 compute-1 ovn_controller[62080]: 2025-10-09T09:59:26Z|00065|memory_trim|INFO|Detected inactivity (last active 30003 ms ago): trimming memory
Oct  9 09:59:26 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:59:26 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:59:26 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:59:26.920 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:59:27 compute-1 nova_compute[162974]: 2025-10-09 09:59:27.725 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 09:59:28 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:59:28 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:59:28 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:59:28.097 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:59:28 compute-1 podman[169008]: 2025-10-09 09:59:28.539737377 +0000 UTC m=+0.046787939 container health_status 5e1150c93314d5b38815fc8130e03b03cc91771cd442ce724d63cd33a7373f75 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct  9 09:59:28 compute-1 podman[169009]: 2025-10-09 09:59:28.544207172 +0000 UTC m=+0.050466572 container health_status a9976f6a2312783f72012e1ec9b07f14297aa62297711f47c0d257996be3427a (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=multipathd, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  9 09:59:28 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:59:28 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000009s ======
Oct  9 09:59:28 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:59:28.923 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Oct  9 09:59:30 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:59:30 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:59:30 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:59:30.100 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:59:30 compute-1 nova_compute[162974]: 2025-10-09 09:59:30.493 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 09:59:30 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:59:30 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:59:30 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:59:30 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:59:30.925 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:59:32 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:59:32 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:59:32 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:59:32.103 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:59:32 compute-1 nova_compute[162974]: 2025-10-09 09:59:32.726 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 09:59:32 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:59:32 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:59:32 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:59:32.928 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:59:34 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:59:34 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:59:34 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:59:34.106 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:59:34 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:59:34 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:59:34 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:59:34.929 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:59:35 compute-1 nova_compute[162974]: 2025-10-09 09:59:35.497 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 09:59:35 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:59:36 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:59:36 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:59:36 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:59:36.109 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:59:36 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:59:36 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000009s ======
Oct  9 09:59:36 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:59:36.931 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Oct  9 09:59:37 compute-1 nova_compute[162974]: 2025-10-09 09:59:37.728 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 09:59:38 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:59:38 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:59:38 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:59:38.112 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:59:38 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:59:38 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:59:38 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:59:38.934 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:59:40 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:59:40 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:59:40 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:59:40.114 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:59:40 compute-1 nova_compute[162974]: 2025-10-09 09:59:40.499 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 09:59:40 compute-1 podman[169072]: 2025-10-09 09:59:40.540500709 +0000 UTC m=+0.052463356 container health_status 36bf385830b85e085dc3ff9729118ef141f0f1a0fdc2513f7947ab6ab795421e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true)
Oct  9 09:59:40 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:59:40 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:59:40 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:59:40 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:59:40.936 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:59:41 compute-1 nova_compute[162974]: 2025-10-09 09:59:41.741 2 DEBUG oslo_concurrency.lockutils [None req-8f3c6407-a615-4702-afbf-2bdb3f177b1a 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Acquiring lock "c7e917a6-1f6f-4739-a31a-bdcfa52bf93b" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  9 09:59:41 compute-1 nova_compute[162974]: 2025-10-09 09:59:41.741 2 DEBUG oslo_concurrency.lockutils [None req-8f3c6407-a615-4702-afbf-2bdb3f177b1a 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Lock "c7e917a6-1f6f-4739-a31a-bdcfa52bf93b" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  9 09:59:41 compute-1 nova_compute[162974]: 2025-10-09 09:59:41.751 2 DEBUG nova.compute.manager [None req-8f3c6407-a615-4702-afbf-2bdb3f177b1a 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: c7e917a6-1f6f-4739-a31a-bdcfa52bf93b] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct  9 09:59:41 compute-1 nova_compute[162974]: 2025-10-09 09:59:41.803 2 DEBUG oslo_concurrency.lockutils [None req-8f3c6407-a615-4702-afbf-2bdb3f177b1a 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  9 09:59:41 compute-1 nova_compute[162974]: 2025-10-09 09:59:41.804 2 DEBUG oslo_concurrency.lockutils [None req-8f3c6407-a615-4702-afbf-2bdb3f177b1a 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  9 09:59:41 compute-1 nova_compute[162974]: 2025-10-09 09:59:41.808 2 DEBUG nova.virt.hardware [None req-8f3c6407-a615-4702-afbf-2bdb3f177b1a 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct  9 09:59:41 compute-1 nova_compute[162974]: 2025-10-09 09:59:41.808 2 INFO nova.compute.claims [None req-8f3c6407-a615-4702-afbf-2bdb3f177b1a 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: c7e917a6-1f6f-4739-a31a-bdcfa52bf93b] Claim successful on node compute-1.ctlplane.example.com#033[00m
Oct  9 09:59:41 compute-1 nova_compute[162974]: 2025-10-09 09:59:41.870 2 DEBUG oslo_concurrency.processutils [None req-8f3c6407-a615-4702-afbf-2bdb3f177b1a 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  9 09:59:42 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:59:42 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:59:42 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:59:42.116 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:59:42 compute-1 ceph-mon[9795]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct  9 09:59:42 compute-1 ceph-mon[9795]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2641533928' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  9 09:59:42 compute-1 nova_compute[162974]: 2025-10-09 09:59:42.205 2 DEBUG oslo_concurrency.processutils [None req-8f3c6407-a615-4702-afbf-2bdb3f177b1a 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.335s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  9 09:59:42 compute-1 nova_compute[162974]: 2025-10-09 09:59:42.208 2 DEBUG nova.compute.provider_tree [None req-8f3c6407-a615-4702-afbf-2bdb3f177b1a 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Inventory has not changed in ProviderTree for provider: 79aa81b0-5a5d-4643-a355-ec5461cb321a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  9 09:59:42 compute-1 nova_compute[162974]: 2025-10-09 09:59:42.219 2 DEBUG nova.scheduler.client.report [None req-8f3c6407-a615-4702-afbf-2bdb3f177b1a 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Inventory has not changed for provider 79aa81b0-5a5d-4643-a355-ec5461cb321a based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 4, 'reserved': 0, 'min_unit': 1, 'max_unit': 4, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  9 09:59:42 compute-1 nova_compute[162974]: 2025-10-09 09:59:42.232 2 DEBUG oslo_concurrency.lockutils [None req-8f3c6407-a615-4702-afbf-2bdb3f177b1a 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.428s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  9 09:59:42 compute-1 nova_compute[162974]: 2025-10-09 09:59:42.232 2 DEBUG nova.compute.manager [None req-8f3c6407-a615-4702-afbf-2bdb3f177b1a 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: c7e917a6-1f6f-4739-a31a-bdcfa52bf93b] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct  9 09:59:42 compute-1 nova_compute[162974]: 2025-10-09 09:59:42.268 2 DEBUG nova.compute.manager [None req-8f3c6407-a615-4702-afbf-2bdb3f177b1a 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: c7e917a6-1f6f-4739-a31a-bdcfa52bf93b] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct  9 09:59:42 compute-1 nova_compute[162974]: 2025-10-09 09:59:42.269 2 DEBUG nova.network.neutron [None req-8f3c6407-a615-4702-afbf-2bdb3f177b1a 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: c7e917a6-1f6f-4739-a31a-bdcfa52bf93b] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct  9 09:59:42 compute-1 nova_compute[162974]: 2025-10-09 09:59:42.287 2 INFO nova.virt.libvirt.driver [None req-8f3c6407-a615-4702-afbf-2bdb3f177b1a 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: c7e917a6-1f6f-4739-a31a-bdcfa52bf93b] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct  9 09:59:42 compute-1 nova_compute[162974]: 2025-10-09 09:59:42.297 2 DEBUG nova.compute.manager [None req-8f3c6407-a615-4702-afbf-2bdb3f177b1a 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: c7e917a6-1f6f-4739-a31a-bdcfa52bf93b] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct  9 09:59:42 compute-1 nova_compute[162974]: 2025-10-09 09:59:42.361 2 DEBUG nova.compute.manager [None req-8f3c6407-a615-4702-afbf-2bdb3f177b1a 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: c7e917a6-1f6f-4739-a31a-bdcfa52bf93b] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct  9 09:59:42 compute-1 nova_compute[162974]: 2025-10-09 09:59:42.362 2 DEBUG nova.virt.libvirt.driver [None req-8f3c6407-a615-4702-afbf-2bdb3f177b1a 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: c7e917a6-1f6f-4739-a31a-bdcfa52bf93b] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct  9 09:59:42 compute-1 nova_compute[162974]: 2025-10-09 09:59:42.362 2 INFO nova.virt.libvirt.driver [None req-8f3c6407-a615-4702-afbf-2bdb3f177b1a 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: c7e917a6-1f6f-4739-a31a-bdcfa52bf93b] Creating image(s)#033[00m
Oct  9 09:59:42 compute-1 nova_compute[162974]: 2025-10-09 09:59:42.379 2 DEBUG nova.storage.rbd_utils [None req-8f3c6407-a615-4702-afbf-2bdb3f177b1a 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] rbd image c7e917a6-1f6f-4739-a31a-bdcfa52bf93b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  9 09:59:42 compute-1 nova_compute[162974]: 2025-10-09 09:59:42.399 2 DEBUG nova.storage.rbd_utils [None req-8f3c6407-a615-4702-afbf-2bdb3f177b1a 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] rbd image c7e917a6-1f6f-4739-a31a-bdcfa52bf93b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  9 09:59:42 compute-1 nova_compute[162974]: 2025-10-09 09:59:42.415 2 DEBUG nova.storage.rbd_utils [None req-8f3c6407-a615-4702-afbf-2bdb3f177b1a 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] rbd image c7e917a6-1f6f-4739-a31a-bdcfa52bf93b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  9 09:59:42 compute-1 nova_compute[162974]: 2025-10-09 09:59:42.418 2 DEBUG oslo_concurrency.processutils [None req-8f3c6407-a615-4702-afbf-2bdb3f177b1a 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/5c8d02c7691a8289e33d8b283b22550ff081dadb --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  9 09:59:42 compute-1 nova_compute[162974]: 2025-10-09 09:59:42.464 2 DEBUG oslo_concurrency.processutils [None req-8f3c6407-a615-4702-afbf-2bdb3f177b1a 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/5c8d02c7691a8289e33d8b283b22550ff081dadb --force-share --output=json" returned: 0 in 0.046s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  9 09:59:42 compute-1 nova_compute[162974]: 2025-10-09 09:59:42.464 2 DEBUG oslo_concurrency.lockutils [None req-8f3c6407-a615-4702-afbf-2bdb3f177b1a 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Acquiring lock "5c8d02c7691a8289e33d8b283b22550ff081dadb" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  9 09:59:42 compute-1 nova_compute[162974]: 2025-10-09 09:59:42.465 2 DEBUG oslo_concurrency.lockutils [None req-8f3c6407-a615-4702-afbf-2bdb3f177b1a 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Lock "5c8d02c7691a8289e33d8b283b22550ff081dadb" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  9 09:59:42 compute-1 nova_compute[162974]: 2025-10-09 09:59:42.465 2 DEBUG oslo_concurrency.lockutils [None req-8f3c6407-a615-4702-afbf-2bdb3f177b1a 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Lock "5c8d02c7691a8289e33d8b283b22550ff081dadb" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  9 09:59:42 compute-1 nova_compute[162974]: 2025-10-09 09:59:42.483 2 DEBUG nova.storage.rbd_utils [None req-8f3c6407-a615-4702-afbf-2bdb3f177b1a 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] rbd image c7e917a6-1f6f-4739-a31a-bdcfa52bf93b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  9 09:59:42 compute-1 nova_compute[162974]: 2025-10-09 09:59:42.485 2 DEBUG oslo_concurrency.processutils [None req-8f3c6407-a615-4702-afbf-2bdb3f177b1a 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/5c8d02c7691a8289e33d8b283b22550ff081dadb c7e917a6-1f6f-4739-a31a-bdcfa52bf93b_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  9 09:59:42 compute-1 nova_compute[162974]: 2025-10-09 09:59:42.637 2 DEBUG oslo_concurrency.processutils [None req-8f3c6407-a615-4702-afbf-2bdb3f177b1a 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/5c8d02c7691a8289e33d8b283b22550ff081dadb c7e917a6-1f6f-4739-a31a-bdcfa52bf93b_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.151s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  9 09:59:42 compute-1 nova_compute[162974]: 2025-10-09 09:59:42.682 2 DEBUG nova.storage.rbd_utils [None req-8f3c6407-a615-4702-afbf-2bdb3f177b1a 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] resizing rbd image c7e917a6-1f6f-4739-a31a-bdcfa52bf93b_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct  9 09:59:42 compute-1 nova_compute[162974]: 2025-10-09 09:59:42.739 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 09:59:42 compute-1 nova_compute[162974]: 2025-10-09 09:59:42.743 2 DEBUG nova.objects.instance [None req-8f3c6407-a615-4702-afbf-2bdb3f177b1a 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Lazy-loading 'migration_context' on Instance uuid c7e917a6-1f6f-4739-a31a-bdcfa52bf93b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  9 09:59:42 compute-1 nova_compute[162974]: 2025-10-09 09:59:42.755 2 DEBUG nova.virt.libvirt.driver [None req-8f3c6407-a615-4702-afbf-2bdb3f177b1a 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: c7e917a6-1f6f-4739-a31a-bdcfa52bf93b] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct  9 09:59:42 compute-1 nova_compute[162974]: 2025-10-09 09:59:42.755 2 DEBUG nova.virt.libvirt.driver [None req-8f3c6407-a615-4702-afbf-2bdb3f177b1a 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: c7e917a6-1f6f-4739-a31a-bdcfa52bf93b] Ensure instance console log exists: /var/lib/nova/instances/c7e917a6-1f6f-4739-a31a-bdcfa52bf93b/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct  9 09:59:42 compute-1 nova_compute[162974]: 2025-10-09 09:59:42.755 2 DEBUG oslo_concurrency.lockutils [None req-8f3c6407-a615-4702-afbf-2bdb3f177b1a 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  9 09:59:42 compute-1 nova_compute[162974]: 2025-10-09 09:59:42.756 2 DEBUG oslo_concurrency.lockutils [None req-8f3c6407-a615-4702-afbf-2bdb3f177b1a 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  9 09:59:42 compute-1 nova_compute[162974]: 2025-10-09 09:59:42.756 2 DEBUG oslo_concurrency.lockutils [None req-8f3c6407-a615-4702-afbf-2bdb3f177b1a 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  9 09:59:42 compute-1 nova_compute[162974]: 2025-10-09 09:59:42.920 2 DEBUG nova.policy [None req-8f3c6407-a615-4702-afbf-2bdb3f177b1a 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '2351e05157514d1995a1ea4151d12fee', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'c69d102fb5504f48809f5fc47f1cb831', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct  9 09:59:42 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:59:42 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:59:42 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:59:42.939 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:59:44 compute-1 nova_compute[162974]: 2025-10-09 09:59:44.115 2 DEBUG nova.network.neutron [None req-8f3c6407-a615-4702-afbf-2bdb3f177b1a 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: c7e917a6-1f6f-4739-a31a-bdcfa52bf93b] Successfully created port: 1687cc87-5c7d-4d91-9386-d985ccc5f55f _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct  9 09:59:44 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:59:44 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:59:44 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:59:44.119 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:59:44 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:59:44 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000009s ======
Oct  9 09:59:44 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:59:44.941 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Oct  9 09:59:45 compute-1 nova_compute[162974]: 2025-10-09 09:59:45.030 2 DEBUG nova.network.neutron [None req-8f3c6407-a615-4702-afbf-2bdb3f177b1a 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: c7e917a6-1f6f-4739-a31a-bdcfa52bf93b] Successfully updated port: 1687cc87-5c7d-4d91-9386-d985ccc5f55f _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct  9 09:59:45 compute-1 nova_compute[162974]: 2025-10-09 09:59:45.041 2 DEBUG oslo_concurrency.lockutils [None req-8f3c6407-a615-4702-afbf-2bdb3f177b1a 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Acquiring lock "refresh_cache-c7e917a6-1f6f-4739-a31a-bdcfa52bf93b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  9 09:59:45 compute-1 nova_compute[162974]: 2025-10-09 09:59:45.041 2 DEBUG oslo_concurrency.lockutils [None req-8f3c6407-a615-4702-afbf-2bdb3f177b1a 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Acquired lock "refresh_cache-c7e917a6-1f6f-4739-a31a-bdcfa52bf93b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  9 09:59:45 compute-1 nova_compute[162974]: 2025-10-09 09:59:45.041 2 DEBUG nova.network.neutron [None req-8f3c6407-a615-4702-afbf-2bdb3f177b1a 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: c7e917a6-1f6f-4739-a31a-bdcfa52bf93b] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  9 09:59:45 compute-1 nova_compute[162974]: 2025-10-09 09:59:45.079 2 DEBUG nova.compute.manager [req-2af37082-dcb8-42f7-af86-344ced720841 req-62f12eed-43ca-4385-9963-87dbc5d6703e b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: c7e917a6-1f6f-4739-a31a-bdcfa52bf93b] Received event network-changed-1687cc87-5c7d-4d91-9386-d985ccc5f55f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  9 09:59:45 compute-1 nova_compute[162974]: 2025-10-09 09:59:45.079 2 DEBUG nova.compute.manager [req-2af37082-dcb8-42f7-af86-344ced720841 req-62f12eed-43ca-4385-9963-87dbc5d6703e b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: c7e917a6-1f6f-4739-a31a-bdcfa52bf93b] Refreshing instance network info cache due to event network-changed-1687cc87-5c7d-4d91-9386-d985ccc5f55f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  9 09:59:45 compute-1 nova_compute[162974]: 2025-10-09 09:59:45.079 2 DEBUG oslo_concurrency.lockutils [req-2af37082-dcb8-42f7-af86-344ced720841 req-62f12eed-43ca-4385-9963-87dbc5d6703e b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] Acquiring lock "refresh_cache-c7e917a6-1f6f-4739-a31a-bdcfa52bf93b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  9 09:59:45 compute-1 nova_compute[162974]: 2025-10-09 09:59:45.132 2 DEBUG nova.network.neutron [None req-8f3c6407-a615-4702-afbf-2bdb3f177b1a 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: c7e917a6-1f6f-4739-a31a-bdcfa52bf93b] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct  9 09:59:45 compute-1 nova_compute[162974]: 2025-10-09 09:59:45.502 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 09:59:45 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:59:46 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:59:46 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:59:46 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:59:46.122 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:59:46 compute-1 nova_compute[162974]: 2025-10-09 09:59:46.182 2 DEBUG nova.network.neutron [None req-8f3c6407-a615-4702-afbf-2bdb3f177b1a 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: c7e917a6-1f6f-4739-a31a-bdcfa52bf93b] Updating instance_info_cache with network_info: [{"id": "1687cc87-5c7d-4d91-9386-d985ccc5f55f", "address": "fa:16:3e:18:30:66", "network": {"id": "4f792301-cf2d-455d-8ad6-8a55cc3146e9", "bridge": "br-int", "label": "tempest-network-smoke--166756604", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.22", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c69d102fb5504f48809f5fc47f1cb831", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1687cc87-5c", "ovs_interfaceid": "1687cc87-5c7d-4d91-9386-d985ccc5f55f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  9 09:59:46 compute-1 nova_compute[162974]: 2025-10-09 09:59:46.195 2 DEBUG oslo_concurrency.lockutils [None req-8f3c6407-a615-4702-afbf-2bdb3f177b1a 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Releasing lock "refresh_cache-c7e917a6-1f6f-4739-a31a-bdcfa52bf93b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  9 09:59:46 compute-1 nova_compute[162974]: 2025-10-09 09:59:46.195 2 DEBUG nova.compute.manager [None req-8f3c6407-a615-4702-afbf-2bdb3f177b1a 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: c7e917a6-1f6f-4739-a31a-bdcfa52bf93b] Instance network_info: |[{"id": "1687cc87-5c7d-4d91-9386-d985ccc5f55f", "address": "fa:16:3e:18:30:66", "network": {"id": "4f792301-cf2d-455d-8ad6-8a55cc3146e9", "bridge": "br-int", "label": "tempest-network-smoke--166756604", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.22", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c69d102fb5504f48809f5fc47f1cb831", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1687cc87-5c", "ovs_interfaceid": "1687cc87-5c7d-4d91-9386-d985ccc5f55f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct  9 09:59:46 compute-1 nova_compute[162974]: 2025-10-09 09:59:46.195 2 DEBUG oslo_concurrency.lockutils [req-2af37082-dcb8-42f7-af86-344ced720841 req-62f12eed-43ca-4385-9963-87dbc5d6703e b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] Acquired lock "refresh_cache-c7e917a6-1f6f-4739-a31a-bdcfa52bf93b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  9 09:59:46 compute-1 nova_compute[162974]: 2025-10-09 09:59:46.195 2 DEBUG nova.network.neutron [req-2af37082-dcb8-42f7-af86-344ced720841 req-62f12eed-43ca-4385-9963-87dbc5d6703e b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: c7e917a6-1f6f-4739-a31a-bdcfa52bf93b] Refreshing network info cache for port 1687cc87-5c7d-4d91-9386-d985ccc5f55f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  9 09:59:46 compute-1 nova_compute[162974]: 2025-10-09 09:59:46.197 2 DEBUG nova.virt.libvirt.driver [None req-8f3c6407-a615-4702-afbf-2bdb3f177b1a 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: c7e917a6-1f6f-4739-a31a-bdcfa52bf93b] Start _get_guest_xml network_info=[{"id": "1687cc87-5c7d-4d91-9386-d985ccc5f55f", "address": "fa:16:3e:18:30:66", "network": {"id": "4f792301-cf2d-455d-8ad6-8a55cc3146e9", "bridge": "br-int", "label": "tempest-network-smoke--166756604", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.22", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c69d102fb5504f48809f5fc47f1cb831", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1687cc87-5c", "ovs_interfaceid": "1687cc87-5c7d-4d91-9386-d985ccc5f55f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-09T09:54:31Z,direct_url=<?>,disk_format='qcow2',id=9546778e-959c-466e-9bef-81ace5bd1cc5,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='a53d5690b6a54109990182326650a2b8',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-09T09:54:34Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_options': None, 'size': 0, 'device_type': 'disk', 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'guest_format': None, 'boot_index': 0, 'device_name': '/dev/vda', 'encrypted': False, 'encryption_format': None, 'image_id': '9546778e-959c-466e-9bef-81ace5bd1cc5'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct  9 09:59:46 compute-1 nova_compute[162974]: 2025-10-09 09:59:46.200 2 WARNING nova.virt.libvirt.driver [None req-8f3c6407-a615-4702-afbf-2bdb3f177b1a 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  9 09:59:46 compute-1 nova_compute[162974]: 2025-10-09 09:59:46.203 2 DEBUG nova.virt.libvirt.host [None req-8f3c6407-a615-4702-afbf-2bdb3f177b1a 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct  9 09:59:46 compute-1 nova_compute[162974]: 2025-10-09 09:59:46.204 2 DEBUG nova.virt.libvirt.host [None req-8f3c6407-a615-4702-afbf-2bdb3f177b1a 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct  9 09:59:46 compute-1 nova_compute[162974]: 2025-10-09 09:59:46.208 2 DEBUG nova.virt.libvirt.host [None req-8f3c6407-a615-4702-afbf-2bdb3f177b1a 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct  9 09:59:46 compute-1 nova_compute[162974]: 2025-10-09 09:59:46.208 2 DEBUG nova.virt.libvirt.host [None req-8f3c6407-a615-4702-afbf-2bdb3f177b1a 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct  9 09:59:46 compute-1 nova_compute[162974]: 2025-10-09 09:59:46.208 2 DEBUG nova.virt.libvirt.driver [None req-8f3c6407-a615-4702-afbf-2bdb3f177b1a 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct  9 09:59:46 compute-1 nova_compute[162974]: 2025-10-09 09:59:46.208 2 DEBUG nova.virt.hardware [None req-8f3c6407-a615-4702-afbf-2bdb3f177b1a 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-09T09:54:30Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='6c4b2ce4-c9d2-467c-bac4-dc6a1184a891',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-09T09:54:31Z,direct_url=<?>,disk_format='qcow2',id=9546778e-959c-466e-9bef-81ace5bd1cc5,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='a53d5690b6a54109990182326650a2b8',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-09T09:54:34Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct  9 09:59:46 compute-1 nova_compute[162974]: 2025-10-09 09:59:46.209 2 DEBUG nova.virt.hardware [None req-8f3c6407-a615-4702-afbf-2bdb3f177b1a 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct  9 09:59:46 compute-1 nova_compute[162974]: 2025-10-09 09:59:46.209 2 DEBUG nova.virt.hardware [None req-8f3c6407-a615-4702-afbf-2bdb3f177b1a 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct  9 09:59:46 compute-1 nova_compute[162974]: 2025-10-09 09:59:46.209 2 DEBUG nova.virt.hardware [None req-8f3c6407-a615-4702-afbf-2bdb3f177b1a 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct  9 09:59:46 compute-1 nova_compute[162974]: 2025-10-09 09:59:46.209 2 DEBUG nova.virt.hardware [None req-8f3c6407-a615-4702-afbf-2bdb3f177b1a 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct  9 09:59:46 compute-1 nova_compute[162974]: 2025-10-09 09:59:46.210 2 DEBUG nova.virt.hardware [None req-8f3c6407-a615-4702-afbf-2bdb3f177b1a 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct  9 09:59:46 compute-1 nova_compute[162974]: 2025-10-09 09:59:46.210 2 DEBUG nova.virt.hardware [None req-8f3c6407-a615-4702-afbf-2bdb3f177b1a 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct  9 09:59:46 compute-1 nova_compute[162974]: 2025-10-09 09:59:46.210 2 DEBUG nova.virt.hardware [None req-8f3c6407-a615-4702-afbf-2bdb3f177b1a 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct  9 09:59:46 compute-1 nova_compute[162974]: 2025-10-09 09:59:46.210 2 DEBUG nova.virt.hardware [None req-8f3c6407-a615-4702-afbf-2bdb3f177b1a 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct  9 09:59:46 compute-1 nova_compute[162974]: 2025-10-09 09:59:46.210 2 DEBUG nova.virt.hardware [None req-8f3c6407-a615-4702-afbf-2bdb3f177b1a 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct  9 09:59:46 compute-1 nova_compute[162974]: 2025-10-09 09:59:46.211 2 DEBUG nova.virt.hardware [None req-8f3c6407-a615-4702-afbf-2bdb3f177b1a 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct  9 09:59:46 compute-1 nova_compute[162974]: 2025-10-09 09:59:46.212 2 DEBUG oslo_concurrency.processutils [None req-8f3c6407-a615-4702-afbf-2bdb3f177b1a 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  9 09:59:46 compute-1 ceph-mon[9795]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #43. Immutable memtables: 0.
Oct  9 09:59:46 compute-1 ceph-mon[9795]: rocksdb: (Original Log Time 2025/10/09-09:59:46.380475) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct  9 09:59:46 compute-1 ceph-mon[9795]: rocksdb: [db/flush_job.cc:856] [default] [JOB 23] Flushing memtable with next log file: 43
Oct  9 09:59:46 compute-1 ceph-mon[9795]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760003986380494, "job": 23, "event": "flush_started", "num_memtables": 1, "num_entries": 1556, "num_deletes": 250, "total_data_size": 3944363, "memory_usage": 3993752, "flush_reason": "Manual Compaction"}
Oct  9 09:59:46 compute-1 ceph-mon[9795]: rocksdb: [db/flush_job.cc:885] [default] [JOB 23] Level-0 flush table #44: started
Oct  9 09:59:46 compute-1 ceph-mon[9795]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760003986386338, "cf_name": "default", "job": 23, "event": "table_file_creation", "file_number": 44, "file_size": 1603308, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 23077, "largest_seqno": 24628, "table_properties": {"data_size": 1598171, "index_size": 2405, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1669, "raw_key_size": 13335, "raw_average_key_size": 20, "raw_value_size": 1586997, "raw_average_value_size": 2456, "num_data_blocks": 104, "num_entries": 646, "num_filter_entries": 646, "num_deletions": 250, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760003856, "oldest_key_time": 1760003856, "file_creation_time": 1760003986, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "94a5d839-0858-4e7b-94a4-0a54b15338db", "db_session_id": "M9CZJU0HKVV71NP1SGV8", "orig_file_number": 44, "seqno_to_time_mapping": "N/A"}}
Oct  9 09:59:46 compute-1 ceph-mon[9795]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 23] Flush lasted 5890 microseconds, and 3645 cpu microseconds.
Oct  9 09:59:46 compute-1 ceph-mon[9795]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  9 09:59:46 compute-1 ceph-mon[9795]: rocksdb: (Original Log Time 2025/10/09-09:59:46.386364) [db/flush_job.cc:967] [default] [JOB 23] Level-0 flush table #44: 1603308 bytes OK
Oct  9 09:59:46 compute-1 ceph-mon[9795]: rocksdb: (Original Log Time 2025/10/09-09:59:46.386374) [db/memtable_list.cc:519] [default] Level-0 commit table #44 started
Oct  9 09:59:46 compute-1 ceph-mon[9795]: rocksdb: (Original Log Time 2025/10/09-09:59:46.386857) [db/memtable_list.cc:722] [default] Level-0 commit table #44: memtable #1 done
Oct  9 09:59:46 compute-1 ceph-mon[9795]: rocksdb: (Original Log Time 2025/10/09-09:59:46.386867) EVENT_LOG_v1 {"time_micros": 1760003986386864, "job": 23, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct  9 09:59:46 compute-1 ceph-mon[9795]: rocksdb: (Original Log Time 2025/10/09-09:59:46.386881) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct  9 09:59:46 compute-1 ceph-mon[9795]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 23] Try to delete WAL files size 3937097, prev total WAL file size 3937097, number of live WAL files 2.
Oct  9 09:59:46 compute-1 ceph-mon[9795]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000040.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  9 09:59:46 compute-1 ceph-mon[9795]: rocksdb: (Original Log Time 2025/10/09-09:59:46.387492) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D67727374617400353030' seq:72057594037927935, type:22 .. '6D67727374617400373531' seq:0, type:0; will stop at (end)
Oct  9 09:59:46 compute-1 ceph-mon[9795]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 24] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct  9 09:59:46 compute-1 ceph-mon[9795]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 23 Base level 0, inputs: [44(1565KB)], [42(13MB)]
Oct  9 09:59:46 compute-1 ceph-mon[9795]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760003986387514, "job": 24, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [44], "files_L6": [42], "score": -1, "input_data_size": 16204471, "oldest_snapshot_seqno": -1}
Oct  9 09:59:46 compute-1 ceph-mon[9795]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 24] Generated table #45: 5590 keys, 13065615 bytes, temperature: kUnknown
Oct  9 09:59:46 compute-1 ceph-mon[9795]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760003986422943, "cf_name": "default", "job": 24, "event": "table_file_creation", "file_number": 45, "file_size": 13065615, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 13028786, "index_size": 21743, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 14021, "raw_key_size": 140488, "raw_average_key_size": 25, "raw_value_size": 12927883, "raw_average_value_size": 2312, "num_data_blocks": 891, "num_entries": 5590, "num_filter_entries": 5590, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760002515, "oldest_key_time": 0, "file_creation_time": 1760003986, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "94a5d839-0858-4e7b-94a4-0a54b15338db", "db_session_id": "M9CZJU0HKVV71NP1SGV8", "orig_file_number": 45, "seqno_to_time_mapping": "N/A"}}
Oct  9 09:59:46 compute-1 ceph-mon[9795]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  9 09:59:46 compute-1 ceph-mon[9795]: rocksdb: (Original Log Time 2025/10/09-09:59:46.423092) [db/compaction/compaction_job.cc:1663] [default] [JOB 24] Compacted 1@0 + 1@6 files to L6 => 13065615 bytes
Oct  9 09:59:46 compute-1 ceph-mon[9795]: rocksdb: (Original Log Time 2025/10/09-09:59:46.423574) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 456.7 rd, 368.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.5, 13.9 +0.0 blob) out(12.5 +0.0 blob), read-write-amplify(18.3) write-amplify(8.1) OK, records in: 6048, records dropped: 458 output_compression: NoCompression
Oct  9 09:59:46 compute-1 ceph-mon[9795]: rocksdb: (Original Log Time 2025/10/09-09:59:46.423590) EVENT_LOG_v1 {"time_micros": 1760003986423583, "job": 24, "event": "compaction_finished", "compaction_time_micros": 35478, "compaction_time_cpu_micros": 19544, "output_level": 6, "num_output_files": 1, "total_output_size": 13065615, "num_input_records": 6048, "num_output_records": 5590, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct  9 09:59:46 compute-1 ceph-mon[9795]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000044.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  9 09:59:46 compute-1 ceph-mon[9795]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760003986423849, "job": 24, "event": "table_file_deletion", "file_number": 44}
Oct  9 09:59:46 compute-1 ceph-mon[9795]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000042.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  9 09:59:46 compute-1 ceph-mon[9795]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760003986425305, "job": 24, "event": "table_file_deletion", "file_number": 42}
Oct  9 09:59:46 compute-1 ceph-mon[9795]: rocksdb: (Original Log Time 2025/10/09-09:59:46.387448) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  9 09:59:46 compute-1 ceph-mon[9795]: rocksdb: (Original Log Time 2025/10/09-09:59:46.425324) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  9 09:59:46 compute-1 ceph-mon[9795]: rocksdb: (Original Log Time 2025/10/09-09:59:46.425327) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  9 09:59:46 compute-1 ceph-mon[9795]: rocksdb: (Original Log Time 2025/10/09-09:59:46.425328) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  9 09:59:46 compute-1 ceph-mon[9795]: rocksdb: (Original Log Time 2025/10/09-09:59:46.425329) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  9 09:59:46 compute-1 ceph-mon[9795]: rocksdb: (Original Log Time 2025/10/09-09:59:46.425330) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  9 09:59:46 compute-1 ceph-mon[9795]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Oct  9 09:59:46 compute-1 ceph-mon[9795]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3422652302' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  9 09:59:46 compute-1 nova_compute[162974]: 2025-10-09 09:59:46.572 2 DEBUG oslo_concurrency.processutils [None req-8f3c6407-a615-4702-afbf-2bdb3f177b1a 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.360s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  9 09:59:46 compute-1 nova_compute[162974]: 2025-10-09 09:59:46.591 2 DEBUG nova.storage.rbd_utils [None req-8f3c6407-a615-4702-afbf-2bdb3f177b1a 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] rbd image c7e917a6-1f6f-4739-a31a-bdcfa52bf93b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  9 09:59:46 compute-1 nova_compute[162974]: 2025-10-09 09:59:46.593 2 DEBUG oslo_concurrency.processutils [None req-8f3c6407-a615-4702-afbf-2bdb3f177b1a 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  9 09:59:46 compute-1 ceph-mon[9795]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Oct  9 09:59:46 compute-1 ceph-mon[9795]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2796860595' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  9 09:59:46 compute-1 nova_compute[162974]: 2025-10-09 09:59:46.932 2 DEBUG oslo_concurrency.processutils [None req-8f3c6407-a615-4702-afbf-2bdb3f177b1a 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.339s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  9 09:59:46 compute-1 nova_compute[162974]: 2025-10-09 09:59:46.934 2 DEBUG nova.virt.libvirt.vif [None req-8f3c6407-a615-4702-afbf-2bdb3f177b1a 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-09T09:59:41Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1252166476',display_name='tempest-TestNetworkBasicOps-server-1252166476',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1252166476',id=7,image_ref='9546778e-959c-466e-9bef-81ace5bd1cc5',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAbgn6SPIFM6AGarUubqFoimfuOdsNeRWX5sq4kHFgr7hG7is5Q/Q8Ek3R1Q0esxFqFL7X0+gBaYCim0P8OY9cMbX9okJGNQoFkk0zy9ycrfeQthKDNu+tA50E3TW/m2Ww==',key_name='tempest-TestNetworkBasicOps-1380098384',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='c69d102fb5504f48809f5fc47f1cb831',ramdisk_id='',reservation_id='r-a0fec6m8',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='9546778e-959c-466e-9bef-81ace5bd1cc5',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-74406332',owner_user_name='tempest-TestNetworkBasicOps-74406332-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-09T09:59:42Z,user_data=None,user_id='2351e05157514d1995a1ea4151d12fee',uuid=c7e917a6-1f6f-4739-a31a-bdcfa52bf93b,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "1687cc87-5c7d-4d91-9386-d985ccc5f55f", "address": "fa:16:3e:18:30:66", "network": {"id": "4f792301-cf2d-455d-8ad6-8a55cc3146e9", "bridge": "br-int", "label": "tempest-network-smoke--166756604", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.22", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": 
false, "tenant_id": "c69d102fb5504f48809f5fc47f1cb831", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1687cc87-5c", "ovs_interfaceid": "1687cc87-5c7d-4d91-9386-d985ccc5f55f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  9 09:59:46 compute-1 nova_compute[162974]: 2025-10-09 09:59:46.934 2 DEBUG nova.network.os_vif_util [None req-8f3c6407-a615-4702-afbf-2bdb3f177b1a 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Converting VIF {"id": "1687cc87-5c7d-4d91-9386-d985ccc5f55f", "address": "fa:16:3e:18:30:66", "network": {"id": "4f792301-cf2d-455d-8ad6-8a55cc3146e9", "bridge": "br-int", "label": "tempest-network-smoke--166756604", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.22", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c69d102fb5504f48809f5fc47f1cb831", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1687cc87-5c", "ovs_interfaceid": "1687cc87-5c7d-4d91-9386-d985ccc5f55f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  9 09:59:46 compute-1 nova_compute[162974]: 2025-10-09 09:59:46.935 2 DEBUG nova.network.os_vif_util [None req-8f3c6407-a615-4702-afbf-2bdb3f177b1a 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:18:30:66,bridge_name='br-int',has_traffic_filtering=True,id=1687cc87-5c7d-4d91-9386-d985ccc5f55f,network=Network(4f792301-cf2d-455d-8ad6-8a55cc3146e9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1687cc87-5c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  9 09:59:46 compute-1 nova_compute[162974]: 2025-10-09 09:59:46.936 2 DEBUG nova.objects.instance [None req-8f3c6407-a615-4702-afbf-2bdb3f177b1a 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Lazy-loading 'pci_devices' on Instance uuid c7e917a6-1f6f-4739-a31a-bdcfa52bf93b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  9 09:59:46 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:59:46 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 09:59:46 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:59:46.943 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 09:59:46 compute-1 nova_compute[162974]: 2025-10-09 09:59:46.947 2 DEBUG nova.virt.libvirt.driver [None req-8f3c6407-a615-4702-afbf-2bdb3f177b1a 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: c7e917a6-1f6f-4739-a31a-bdcfa52bf93b] End _get_guest_xml xml=<domain type="kvm">
Oct  9 09:59:46 compute-1 nova_compute[162974]:  <uuid>c7e917a6-1f6f-4739-a31a-bdcfa52bf93b</uuid>
Oct  9 09:59:46 compute-1 nova_compute[162974]:  <name>instance-00000007</name>
Oct  9 09:59:46 compute-1 nova_compute[162974]:  <memory>131072</memory>
Oct  9 09:59:46 compute-1 nova_compute[162974]:  <vcpu>1</vcpu>
Oct  9 09:59:46 compute-1 nova_compute[162974]:  <metadata>
Oct  9 09:59:46 compute-1 nova_compute[162974]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  9 09:59:46 compute-1 nova_compute[162974]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  9 09:59:46 compute-1 nova_compute[162974]:      <nova:name>tempest-TestNetworkBasicOps-server-1252166476</nova:name>
Oct  9 09:59:46 compute-1 nova_compute[162974]:      <nova:creationTime>2025-10-09 09:59:46</nova:creationTime>
Oct  9 09:59:46 compute-1 nova_compute[162974]:      <nova:flavor name="m1.nano">
Oct  9 09:59:46 compute-1 nova_compute[162974]:        <nova:memory>128</nova:memory>
Oct  9 09:59:46 compute-1 nova_compute[162974]:        <nova:disk>1</nova:disk>
Oct  9 09:59:46 compute-1 nova_compute[162974]:        <nova:swap>0</nova:swap>
Oct  9 09:59:46 compute-1 nova_compute[162974]:        <nova:ephemeral>0</nova:ephemeral>
Oct  9 09:59:46 compute-1 nova_compute[162974]:        <nova:vcpus>1</nova:vcpus>
Oct  9 09:59:46 compute-1 nova_compute[162974]:      </nova:flavor>
Oct  9 09:59:46 compute-1 nova_compute[162974]:      <nova:owner>
Oct  9 09:59:46 compute-1 nova_compute[162974]:        <nova:user uuid="2351e05157514d1995a1ea4151d12fee">tempest-TestNetworkBasicOps-74406332-project-member</nova:user>
Oct  9 09:59:46 compute-1 nova_compute[162974]:        <nova:project uuid="c69d102fb5504f48809f5fc47f1cb831">tempest-TestNetworkBasicOps-74406332</nova:project>
Oct  9 09:59:46 compute-1 nova_compute[162974]:      </nova:owner>
Oct  9 09:59:46 compute-1 nova_compute[162974]:      <nova:root type="image" uuid="9546778e-959c-466e-9bef-81ace5bd1cc5"/>
Oct  9 09:59:46 compute-1 nova_compute[162974]:      <nova:ports>
Oct  9 09:59:46 compute-1 nova_compute[162974]:        <nova:port uuid="1687cc87-5c7d-4d91-9386-d985ccc5f55f">
Oct  9 09:59:46 compute-1 nova_compute[162974]:          <nova:ip type="fixed" address="10.100.0.22" ipVersion="4"/>
Oct  9 09:59:46 compute-1 nova_compute[162974]:        </nova:port>
Oct  9 09:59:46 compute-1 nova_compute[162974]:      </nova:ports>
Oct  9 09:59:46 compute-1 nova_compute[162974]:    </nova:instance>
Oct  9 09:59:46 compute-1 nova_compute[162974]:  </metadata>
Oct  9 09:59:46 compute-1 nova_compute[162974]:  <sysinfo type="smbios">
Oct  9 09:59:46 compute-1 nova_compute[162974]:    <system>
Oct  9 09:59:46 compute-1 nova_compute[162974]:      <entry name="manufacturer">RDO</entry>
Oct  9 09:59:46 compute-1 nova_compute[162974]:      <entry name="product">OpenStack Compute</entry>
Oct  9 09:59:46 compute-1 nova_compute[162974]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  9 09:59:46 compute-1 nova_compute[162974]:      <entry name="serial">c7e917a6-1f6f-4739-a31a-bdcfa52bf93b</entry>
Oct  9 09:59:46 compute-1 nova_compute[162974]:      <entry name="uuid">c7e917a6-1f6f-4739-a31a-bdcfa52bf93b</entry>
Oct  9 09:59:46 compute-1 nova_compute[162974]:      <entry name="family">Virtual Machine</entry>
Oct  9 09:59:46 compute-1 nova_compute[162974]:    </system>
Oct  9 09:59:46 compute-1 nova_compute[162974]:  </sysinfo>
Oct  9 09:59:46 compute-1 nova_compute[162974]:  <os>
Oct  9 09:59:46 compute-1 nova_compute[162974]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  9 09:59:46 compute-1 nova_compute[162974]:    <boot dev="hd"/>
Oct  9 09:59:46 compute-1 nova_compute[162974]:    <smbios mode="sysinfo"/>
Oct  9 09:59:46 compute-1 nova_compute[162974]:  </os>
Oct  9 09:59:46 compute-1 nova_compute[162974]:  <features>
Oct  9 09:59:46 compute-1 nova_compute[162974]:    <acpi/>
Oct  9 09:59:46 compute-1 nova_compute[162974]:    <apic/>
Oct  9 09:59:46 compute-1 nova_compute[162974]:    <vmcoreinfo/>
Oct  9 09:59:46 compute-1 nova_compute[162974]:  </features>
Oct  9 09:59:46 compute-1 nova_compute[162974]:  <clock offset="utc">
Oct  9 09:59:46 compute-1 nova_compute[162974]:    <timer name="pit" tickpolicy="delay"/>
Oct  9 09:59:46 compute-1 nova_compute[162974]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  9 09:59:46 compute-1 nova_compute[162974]:    <timer name="hpet" present="no"/>
Oct  9 09:59:46 compute-1 nova_compute[162974]:  </clock>
Oct  9 09:59:46 compute-1 nova_compute[162974]:  <cpu mode="host-model" match="exact">
Oct  9 09:59:46 compute-1 nova_compute[162974]:    <topology sockets="1" cores="1" threads="1"/>
Oct  9 09:59:46 compute-1 nova_compute[162974]:  </cpu>
Oct  9 09:59:46 compute-1 nova_compute[162974]:  <devices>
Oct  9 09:59:46 compute-1 nova_compute[162974]:    <disk type="network" device="disk">
Oct  9 09:59:46 compute-1 nova_compute[162974]:      <driver type="raw" cache="none"/>
Oct  9 09:59:46 compute-1 nova_compute[162974]:      <source protocol="rbd" name="vms/c7e917a6-1f6f-4739-a31a-bdcfa52bf93b_disk">
Oct  9 09:59:46 compute-1 nova_compute[162974]:        <host name="192.168.122.100" port="6789"/>
Oct  9 09:59:46 compute-1 nova_compute[162974]:        <host name="192.168.122.102" port="6789"/>
Oct  9 09:59:46 compute-1 nova_compute[162974]:        <host name="192.168.122.101" port="6789"/>
Oct  9 09:59:46 compute-1 nova_compute[162974]:      </source>
Oct  9 09:59:46 compute-1 nova_compute[162974]:      <auth username="openstack">
Oct  9 09:59:46 compute-1 nova_compute[162974]:        <secret type="ceph" uuid="286f8bf0-da72-5823-9a4e-ac4457d9e609"/>
Oct  9 09:59:46 compute-1 nova_compute[162974]:      </auth>
Oct  9 09:59:46 compute-1 nova_compute[162974]:      <target dev="vda" bus="virtio"/>
Oct  9 09:59:46 compute-1 nova_compute[162974]:    </disk>
Oct  9 09:59:46 compute-1 nova_compute[162974]:    <disk type="network" device="cdrom">
Oct  9 09:59:46 compute-1 nova_compute[162974]:      <driver type="raw" cache="none"/>
Oct  9 09:59:46 compute-1 nova_compute[162974]:      <source protocol="rbd" name="vms/c7e917a6-1f6f-4739-a31a-bdcfa52bf93b_disk.config">
Oct  9 09:59:46 compute-1 nova_compute[162974]:        <host name="192.168.122.100" port="6789"/>
Oct  9 09:59:46 compute-1 nova_compute[162974]:        <host name="192.168.122.102" port="6789"/>
Oct  9 09:59:46 compute-1 nova_compute[162974]:        <host name="192.168.122.101" port="6789"/>
Oct  9 09:59:46 compute-1 nova_compute[162974]:      </source>
Oct  9 09:59:46 compute-1 nova_compute[162974]:      <auth username="openstack">
Oct  9 09:59:46 compute-1 nova_compute[162974]:        <secret type="ceph" uuid="286f8bf0-da72-5823-9a4e-ac4457d9e609"/>
Oct  9 09:59:46 compute-1 nova_compute[162974]:      </auth>
Oct  9 09:59:46 compute-1 nova_compute[162974]:      <target dev="sda" bus="sata"/>
Oct  9 09:59:46 compute-1 nova_compute[162974]:    </disk>
Oct  9 09:59:46 compute-1 nova_compute[162974]:    <interface type="ethernet">
Oct  9 09:59:46 compute-1 nova_compute[162974]:      <mac address="fa:16:3e:18:30:66"/>
Oct  9 09:59:46 compute-1 nova_compute[162974]:      <model type="virtio"/>
Oct  9 09:59:46 compute-1 nova_compute[162974]:      <driver name="vhost" rx_queue_size="512"/>
Oct  9 09:59:46 compute-1 nova_compute[162974]:      <mtu size="1442"/>
Oct  9 09:59:46 compute-1 nova_compute[162974]:      <target dev="tap1687cc87-5c"/>
Oct  9 09:59:46 compute-1 nova_compute[162974]:    </interface>
Oct  9 09:59:46 compute-1 nova_compute[162974]:    <serial type="pty">
Oct  9 09:59:46 compute-1 nova_compute[162974]:      <log file="/var/lib/nova/instances/c7e917a6-1f6f-4739-a31a-bdcfa52bf93b/console.log" append="off"/>
Oct  9 09:59:46 compute-1 nova_compute[162974]:    </serial>
Oct  9 09:59:46 compute-1 nova_compute[162974]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  9 09:59:46 compute-1 nova_compute[162974]:    <video>
Oct  9 09:59:46 compute-1 nova_compute[162974]:      <model type="virtio"/>
Oct  9 09:59:46 compute-1 nova_compute[162974]:    </video>
Oct  9 09:59:46 compute-1 nova_compute[162974]:    <input type="tablet" bus="usb"/>
Oct  9 09:59:46 compute-1 nova_compute[162974]:    <rng model="virtio">
Oct  9 09:59:46 compute-1 nova_compute[162974]:      <backend model="random">/dev/urandom</backend>
Oct  9 09:59:46 compute-1 nova_compute[162974]:    </rng>
Oct  9 09:59:46 compute-1 nova_compute[162974]:    <controller type="pci" model="pcie-root"/>
Oct  9 09:59:46 compute-1 nova_compute[162974]:    <controller type="pci" model="pcie-root-port"/>
Oct  9 09:59:46 compute-1 nova_compute[162974]:    <controller type="pci" model="pcie-root-port"/>
Oct  9 09:59:46 compute-1 nova_compute[162974]:    <controller type="pci" model="pcie-root-port"/>
Oct  9 09:59:46 compute-1 nova_compute[162974]:    <controller type="pci" model="pcie-root-port"/>
Oct  9 09:59:46 compute-1 nova_compute[162974]:    <controller type="pci" model="pcie-root-port"/>
Oct  9 09:59:46 compute-1 nova_compute[162974]:    <controller type="pci" model="pcie-root-port"/>
Oct  9 09:59:46 compute-1 nova_compute[162974]:    <controller type="pci" model="pcie-root-port"/>
Oct  9 09:59:46 compute-1 nova_compute[162974]:    <controller type="pci" model="pcie-root-port"/>
Oct  9 09:59:46 compute-1 nova_compute[162974]:    <controller type="pci" model="pcie-root-port"/>
Oct  9 09:59:46 compute-1 nova_compute[162974]:    <controller type="pci" model="pcie-root-port"/>
Oct  9 09:59:46 compute-1 nova_compute[162974]:    <controller type="pci" model="pcie-root-port"/>
Oct  9 09:59:46 compute-1 nova_compute[162974]:    <controller type="pci" model="pcie-root-port"/>
Oct  9 09:59:46 compute-1 nova_compute[162974]:    <controller type="pci" model="pcie-root-port"/>
Oct  9 09:59:46 compute-1 nova_compute[162974]:    <controller type="pci" model="pcie-root-port"/>
Oct  9 09:59:46 compute-1 nova_compute[162974]:    <controller type="pci" model="pcie-root-port"/>
Oct  9 09:59:46 compute-1 nova_compute[162974]:    <controller type="pci" model="pcie-root-port"/>
Oct  9 09:59:46 compute-1 nova_compute[162974]:    <controller type="pci" model="pcie-root-port"/>
Oct  9 09:59:46 compute-1 nova_compute[162974]:    <controller type="pci" model="pcie-root-port"/>
Oct  9 09:59:46 compute-1 nova_compute[162974]:    <controller type="pci" model="pcie-root-port"/>
Oct  9 09:59:46 compute-1 nova_compute[162974]:    <controller type="pci" model="pcie-root-port"/>
Oct  9 09:59:46 compute-1 nova_compute[162974]:    <controller type="pci" model="pcie-root-port"/>
Oct  9 09:59:46 compute-1 nova_compute[162974]:    <controller type="pci" model="pcie-root-port"/>
Oct  9 09:59:46 compute-1 nova_compute[162974]:    <controller type="pci" model="pcie-root-port"/>
Oct  9 09:59:46 compute-1 nova_compute[162974]:    <controller type="pci" model="pcie-root-port"/>
Oct  9 09:59:46 compute-1 nova_compute[162974]:    <controller type="usb" index="0"/>
Oct  9 09:59:46 compute-1 nova_compute[162974]:    <memballoon model="virtio">
Oct  9 09:59:46 compute-1 nova_compute[162974]:      <stats period="10"/>
Oct  9 09:59:46 compute-1 nova_compute[162974]:    </memballoon>
Oct  9 09:59:46 compute-1 nova_compute[162974]:  </devices>
Oct  9 09:59:46 compute-1 nova_compute[162974]: </domain>
Oct  9 09:59:46 compute-1 nova_compute[162974]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  9 09:59:46 compute-1 nova_compute[162974]: 2025-10-09 09:59:46.949 2 DEBUG nova.compute.manager [None req-8f3c6407-a615-4702-afbf-2bdb3f177b1a 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: c7e917a6-1f6f-4739-a31a-bdcfa52bf93b] Preparing to wait for external event network-vif-plugged-1687cc87-5c7d-4d91-9386-d985ccc5f55f prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct  9 09:59:46 compute-1 nova_compute[162974]: 2025-10-09 09:59:46.949 2 DEBUG oslo_concurrency.lockutils [None req-8f3c6407-a615-4702-afbf-2bdb3f177b1a 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Acquiring lock "c7e917a6-1f6f-4739-a31a-bdcfa52bf93b-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  9 09:59:46 compute-1 nova_compute[162974]: 2025-10-09 09:59:46.950 2 DEBUG oslo_concurrency.lockutils [None req-8f3c6407-a615-4702-afbf-2bdb3f177b1a 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Lock "c7e917a6-1f6f-4739-a31a-bdcfa52bf93b-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  9 09:59:46 compute-1 nova_compute[162974]: 2025-10-09 09:59:46.950 2 DEBUG oslo_concurrency.lockutils [None req-8f3c6407-a615-4702-afbf-2bdb3f177b1a 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Lock "c7e917a6-1f6f-4739-a31a-bdcfa52bf93b-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  9 09:59:46 compute-1 nova_compute[162974]: 2025-10-09 09:59:46.950 2 DEBUG nova.virt.libvirt.vif [None req-8f3c6407-a615-4702-afbf-2bdb3f177b1a 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-09T09:59:41Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1252166476',display_name='tempest-TestNetworkBasicOps-server-1252166476',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1252166476',id=7,image_ref='9546778e-959c-466e-9bef-81ace5bd1cc5',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAbgn6SPIFM6AGarUubqFoimfuOdsNeRWX5sq4kHFgr7hG7is5Q/Q8Ek3R1Q0esxFqFL7X0+gBaYCim0P8OY9cMbX9okJGNQoFkk0zy9ycrfeQthKDNu+tA50E3TW/m2Ww==',key_name='tempest-TestNetworkBasicOps-1380098384',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='c69d102fb5504f48809f5fc47f1cb831',ramdisk_id='',reservation_id='r-a0fec6m8',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='9546778e-959c-466e-9bef-81ace5bd1cc5',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-74406332',owner_user_name='tempest-TestNetworkBasicOps-74406332-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-09T09:59:42Z,user_data=None,user_id='2351e05157514d1995a1ea4151d12fee',uuid=c7e917a6-1f6f-4739-a31a-bdcfa52bf93b,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "1687cc87-5c7d-4d91-9386-d985ccc5f55f", "address": "fa:16:3e:18:30:66", "network": {"id": "4f792301-cf2d-455d-8ad6-8a55cc3146e9", "bridge": "br-int", "label": "tempest-network-smoke--166756604", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.22", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "c69d102fb5504f48809f5fc47f1cb831", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1687cc87-5c", "ovs_interfaceid": "1687cc87-5c7d-4d91-9386-d985ccc5f55f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  9 09:59:46 compute-1 nova_compute[162974]: 2025-10-09 09:59:46.951 2 DEBUG nova.network.os_vif_util [None req-8f3c6407-a615-4702-afbf-2bdb3f177b1a 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Converting VIF {"id": "1687cc87-5c7d-4d91-9386-d985ccc5f55f", "address": "fa:16:3e:18:30:66", "network": {"id": "4f792301-cf2d-455d-8ad6-8a55cc3146e9", "bridge": "br-int", "label": "tempest-network-smoke--166756604", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.22", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c69d102fb5504f48809f5fc47f1cb831", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1687cc87-5c", "ovs_interfaceid": "1687cc87-5c7d-4d91-9386-d985ccc5f55f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  9 09:59:46 compute-1 nova_compute[162974]: 2025-10-09 09:59:46.951 2 DEBUG nova.network.os_vif_util [None req-8f3c6407-a615-4702-afbf-2bdb3f177b1a 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:18:30:66,bridge_name='br-int',has_traffic_filtering=True,id=1687cc87-5c7d-4d91-9386-d985ccc5f55f,network=Network(4f792301-cf2d-455d-8ad6-8a55cc3146e9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1687cc87-5c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  9 09:59:46 compute-1 nova_compute[162974]: 2025-10-09 09:59:46.951 2 DEBUG os_vif [None req-8f3c6407-a615-4702-afbf-2bdb3f177b1a 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:18:30:66,bridge_name='br-int',has_traffic_filtering=True,id=1687cc87-5c7d-4d91-9386-d985ccc5f55f,network=Network(4f792301-cf2d-455d-8ad6-8a55cc3146e9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1687cc87-5c') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  9 09:59:46 compute-1 nova_compute[162974]: 2025-10-09 09:59:46.952 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 09:59:46 compute-1 nova_compute[162974]: 2025-10-09 09:59:46.952 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  9 09:59:46 compute-1 nova_compute[162974]: 2025-10-09 09:59:46.953 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  9 09:59:46 compute-1 nova_compute[162974]: 2025-10-09 09:59:46.954 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 09:59:46 compute-1 nova_compute[162974]: 2025-10-09 09:59:46.955 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap1687cc87-5c, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  9 09:59:46 compute-1 nova_compute[162974]: 2025-10-09 09:59:46.955 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap1687cc87-5c, col_values=(('external_ids', {'iface-id': '1687cc87-5c7d-4d91-9386-d985ccc5f55f', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:18:30:66', 'vm-uuid': 'c7e917a6-1f6f-4739-a31a-bdcfa52bf93b'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  9 09:59:46 compute-1 nova_compute[162974]: 2025-10-09 09:59:46.956 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 09:59:46 compute-1 NetworkManager[982]: <info>  [1760003986.9573] manager: (tap1687cc87-5c): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/49)
Oct  9 09:59:46 compute-1 nova_compute[162974]: 2025-10-09 09:59:46.959 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  9 09:59:46 compute-1 nova_compute[162974]: 2025-10-09 09:59:46.961 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 09:59:46 compute-1 nova_compute[162974]: 2025-10-09 09:59:46.962 2 INFO os_vif [None req-8f3c6407-a615-4702-afbf-2bdb3f177b1a 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:18:30:66,bridge_name='br-int',has_traffic_filtering=True,id=1687cc87-5c7d-4d91-9386-d985ccc5f55f,network=Network(4f792301-cf2d-455d-8ad6-8a55cc3146e9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1687cc87-5c')#033[00m
Oct  9 09:59:46 compute-1 nova_compute[162974]: 2025-10-09 09:59:46.989 2 DEBUG nova.virt.libvirt.driver [None req-8f3c6407-a615-4702-afbf-2bdb3f177b1a 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  9 09:59:46 compute-1 nova_compute[162974]: 2025-10-09 09:59:46.989 2 DEBUG nova.virt.libvirt.driver [None req-8f3c6407-a615-4702-afbf-2bdb3f177b1a 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  9 09:59:46 compute-1 nova_compute[162974]: 2025-10-09 09:59:46.989 2 DEBUG nova.virt.libvirt.driver [None req-8f3c6407-a615-4702-afbf-2bdb3f177b1a 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] No VIF found with MAC fa:16:3e:18:30:66, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  9 09:59:46 compute-1 nova_compute[162974]: 2025-10-09 09:59:46.990 2 INFO nova.virt.libvirt.driver [None req-8f3c6407-a615-4702-afbf-2bdb3f177b1a 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: c7e917a6-1f6f-4739-a31a-bdcfa52bf93b] Using config drive#033[00m
Oct  9 09:59:47 compute-1 nova_compute[162974]: 2025-10-09 09:59:47.007 2 DEBUG nova.storage.rbd_utils [None req-8f3c6407-a615-4702-afbf-2bdb3f177b1a 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] rbd image c7e917a6-1f6f-4739-a31a-bdcfa52bf93b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  9 09:59:47 compute-1 nova_compute[162974]: 2025-10-09 09:59:47.225 2 INFO nova.virt.libvirt.driver [None req-8f3c6407-a615-4702-afbf-2bdb3f177b1a 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: c7e917a6-1f6f-4739-a31a-bdcfa52bf93b] Creating config drive at /var/lib/nova/instances/c7e917a6-1f6f-4739-a31a-bdcfa52bf93b/disk.config#033[00m
Oct  9 09:59:47 compute-1 nova_compute[162974]: 2025-10-09 09:59:47.229 2 DEBUG oslo_concurrency.processutils [None req-8f3c6407-a615-4702-afbf-2bdb3f177b1a 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/c7e917a6-1f6f-4739-a31a-bdcfa52bf93b/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpieyfwj9p execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  9 09:59:47 compute-1 nova_compute[162974]: 2025-10-09 09:59:47.348 2 DEBUG oslo_concurrency.processutils [None req-8f3c6407-a615-4702-afbf-2bdb3f177b1a 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/c7e917a6-1f6f-4739-a31a-bdcfa52bf93b/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpieyfwj9p" returned: 0 in 0.119s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  9 09:59:47 compute-1 nova_compute[162974]: 2025-10-09 09:59:47.368 2 DEBUG nova.storage.rbd_utils [None req-8f3c6407-a615-4702-afbf-2bdb3f177b1a 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] rbd image c7e917a6-1f6f-4739-a31a-bdcfa52bf93b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  9 09:59:47 compute-1 nova_compute[162974]: 2025-10-09 09:59:47.370 2 DEBUG oslo_concurrency.processutils [None req-8f3c6407-a615-4702-afbf-2bdb3f177b1a 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/c7e917a6-1f6f-4739-a31a-bdcfa52bf93b/disk.config c7e917a6-1f6f-4739-a31a-bdcfa52bf93b_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  9 09:59:47 compute-1 nova_compute[162974]: 2025-10-09 09:59:47.450 2 DEBUG oslo_concurrency.processutils [None req-8f3c6407-a615-4702-afbf-2bdb3f177b1a 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/c7e917a6-1f6f-4739-a31a-bdcfa52bf93b/disk.config c7e917a6-1f6f-4739-a31a-bdcfa52bf93b_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.080s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  9 09:59:47 compute-1 nova_compute[162974]: 2025-10-09 09:59:47.451 2 INFO nova.virt.libvirt.driver [None req-8f3c6407-a615-4702-afbf-2bdb3f177b1a 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: c7e917a6-1f6f-4739-a31a-bdcfa52bf93b] Deleting local config drive /var/lib/nova/instances/c7e917a6-1f6f-4739-a31a-bdcfa52bf93b/disk.config because it was imported into RBD.#033[00m
Oct  9 09:59:47 compute-1 nova_compute[162974]: 2025-10-09 09:59:47.480 2 DEBUG nova.network.neutron [req-2af37082-dcb8-42f7-af86-344ced720841 req-62f12eed-43ca-4385-9963-87dbc5d6703e b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: c7e917a6-1f6f-4739-a31a-bdcfa52bf93b] Updated VIF entry in instance network info cache for port 1687cc87-5c7d-4d91-9386-d985ccc5f55f. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  9 09:59:47 compute-1 nova_compute[162974]: 2025-10-09 09:59:47.480 2 DEBUG nova.network.neutron [req-2af37082-dcb8-42f7-af86-344ced720841 req-62f12eed-43ca-4385-9963-87dbc5d6703e b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: c7e917a6-1f6f-4739-a31a-bdcfa52bf93b] Updating instance_info_cache with network_info: [{"id": "1687cc87-5c7d-4d91-9386-d985ccc5f55f", "address": "fa:16:3e:18:30:66", "network": {"id": "4f792301-cf2d-455d-8ad6-8a55cc3146e9", "bridge": "br-int", "label": "tempest-network-smoke--166756604", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.22", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c69d102fb5504f48809f5fc47f1cb831", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1687cc87-5c", "ovs_interfaceid": "1687cc87-5c7d-4d91-9386-d985ccc5f55f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  9 09:59:47 compute-1 kernel: tap1687cc87-5c: entered promiscuous mode
Oct  9 09:59:47 compute-1 NetworkManager[982]: <info>  [1760003987.4855] manager: (tap1687cc87-5c): new Tun device (/org/freedesktop/NetworkManager/Devices/50)
Oct  9 09:59:47 compute-1 nova_compute[162974]: 2025-10-09 09:59:47.486 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 09:59:47 compute-1 ovn_controller[62080]: 2025-10-09T09:59:47Z|00066|binding|INFO|Claiming lport 1687cc87-5c7d-4d91-9386-d985ccc5f55f for this chassis.
Oct  9 09:59:47 compute-1 ovn_controller[62080]: 2025-10-09T09:59:47Z|00067|binding|INFO|1687cc87-5c7d-4d91-9386-d985ccc5f55f: Claiming fa:16:3e:18:30:66 10.100.0.22
Oct  9 09:59:47 compute-1 nova_compute[162974]: 2025-10-09 09:59:47.490 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 09:59:47 compute-1 nova_compute[162974]: 2025-10-09 09:59:47.492 2 DEBUG oslo_concurrency.lockutils [req-2af37082-dcb8-42f7-af86-344ced720841 req-62f12eed-43ca-4385-9963-87dbc5d6703e b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] Releasing lock "refresh_cache-c7e917a6-1f6f-4739-a31a-bdcfa52bf93b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  9 09:59:47 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:59:47.495 71059 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:18:30:66 10.100.0.22'], port_security=['fa:16:3e:18:30:66 10.100.0.22'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.22/28', 'neutron:device_id': 'c7e917a6-1f6f-4739-a31a-bdcfa52bf93b', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4f792301-cf2d-455d-8ad6-8a55cc3146e9', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c69d102fb5504f48809f5fc47f1cb831', 'neutron:revision_number': '2', 'neutron:security_group_ids': '405a5985-622d-4a01-bebe-dd3a8833c5f3', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=8a09146a-9f3c-432d-a7ac-1e34c91ed6bf, chassis=[<ovs.db.idl.Row object at 0x7fcc797b4850>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcc797b4850>], logical_port=1687cc87-5c7d-4d91-9386-d985ccc5f55f) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  9 09:59:47 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:59:47.496 71059 INFO neutron.agent.ovn.metadata.agent [-] Port 1687cc87-5c7d-4d91-9386-d985ccc5f55f in datapath 4f792301-cf2d-455d-8ad6-8a55cc3146e9 bound to our chassis#033[00m
Oct  9 09:59:47 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:59:47.497 71059 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 4f792301-cf2d-455d-8ad6-8a55cc3146e9#033[00m
Oct  9 09:59:47 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:59:47.505 165637 DEBUG oslo.privsep.daemon [-] privsep: reply[ff46ab3c-1667-46fb-9594-45291a3f7aec]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  9 09:59:47 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:59:47.506 71059 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap4f792301-c1 in ovnmeta-4f792301-cf2d-455d-8ad6-8a55cc3146e9 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct  9 09:59:47 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:59:47.507 165637 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap4f792301-c0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct  9 09:59:47 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:59:47.507 165637 DEBUG oslo.privsep.daemon [-] privsep: reply[ee856aa3-12df-444c-b209-473bc7426bd9]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  9 09:59:47 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:59:47.508 165637 DEBUG oslo.privsep.daemon [-] privsep: reply[d251430a-8962-4f32-80df-ed4ef5e7bfa6]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  9 09:59:47 compute-1 systemd-udevd[169423]: Network interface NamePolicy= disabled on kernel command line.
Oct  9 09:59:47 compute-1 systemd-machined[120683]: New machine qemu-4-instance-00000007.
Oct  9 09:59:47 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:59:47.517 71273 DEBUG oslo.privsep.daemon [-] privsep: reply[03497ea6-4399-4957-af71-05f8e9ab2733]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  9 09:59:47 compute-1 NetworkManager[982]: <info>  [1760003987.5243] device (tap1687cc87-5c): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  9 09:59:47 compute-1 NetworkManager[982]: <info>  [1760003987.5249] device (tap1687cc87-5c): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  9 09:59:47 compute-1 systemd[1]: Started Virtual Machine qemu-4-instance-00000007.
Oct  9 09:59:47 compute-1 nova_compute[162974]: 2025-10-09 09:59:47.532 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 09:59:47 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:59:47.533 165637 DEBUG oslo.privsep.daemon [-] privsep: reply[5d6d281c-ad7f-44df-8211-c9a1b5f900b1]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  9 09:59:47 compute-1 ovn_controller[62080]: 2025-10-09T09:59:47Z|00068|binding|INFO|Setting lport 1687cc87-5c7d-4d91-9386-d985ccc5f55f ovn-installed in OVS
Oct  9 09:59:47 compute-1 ovn_controller[62080]: 2025-10-09T09:59:47Z|00069|binding|INFO|Setting lport 1687cc87-5c7d-4d91-9386-d985ccc5f55f up in Southbound
Oct  9 09:59:47 compute-1 nova_compute[162974]: 2025-10-09 09:59:47.537 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 09:59:47 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:59:47.556 165694 DEBUG oslo.privsep.daemon [-] privsep: reply[fc497323-cb1e-483a-b72e-8720c5acb76d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  9 09:59:47 compute-1 systemd-udevd[169426]: Network interface NamePolicy= disabled on kernel command line.
Oct  9 09:59:47 compute-1 NetworkManager[982]: <info>  [1760003987.5603] manager: (tap4f792301-c0): new Veth device (/org/freedesktop/NetworkManager/Devices/51)
Oct  9 09:59:47 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:59:47.561 165637 DEBUG oslo.privsep.daemon [-] privsep: reply[22fd40c4-eb3c-4e44-a627-53b755554b1f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  9 09:59:47 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:59:47.583 165694 DEBUG oslo.privsep.daemon [-] privsep: reply[9dcd78e6-ab1c-4915-9b3c-75bd1e4b737a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  9 09:59:47 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:59:47.585 165694 DEBUG oslo.privsep.daemon [-] privsep: reply[491f4249-f741-4872-8ed3-ce887b90a60e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  9 09:59:47 compute-1 NetworkManager[982]: <info>  [1760003987.6007] device (tap4f792301-c0): carrier: link connected
Oct  9 09:59:47 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:59:47.604 165694 DEBUG oslo.privsep.daemon [-] privsep: reply[cab0e93a-342a-4b35-9a74-6e4e45166a53]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  9 09:59:47 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:59:47.616 165637 DEBUG oslo.privsep.daemon [-] privsep: reply[a79aa631-31d6-486e-ad7b-527ecd5cba74]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap4f792301-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 4], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 4], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:27:7e:66'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 31], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 167116, 'reachable_time': 27157, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 169446, 'error': None, 'target': 'ovnmeta-4f792301-cf2d-455d-8ad6-8a55cc3146e9', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  9 09:59:47 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:59:47.627 165637 DEBUG oslo.privsep.daemon [-] privsep: reply[35907a25-39ac-4f05-b0e7-dbdf41f819e6]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe27:7e66'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 167116, 'tstamp': 167116}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 169447, 'error': None, 'target': 'ovnmeta-4f792301-cf2d-455d-8ad6-8a55cc3146e9', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  9 09:59:47 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:59:47.639 165637 DEBUG oslo.privsep.daemon [-] privsep: reply[84b4e740-0408-4a58-a4fb-ecb7a29bf6b8]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap4f792301-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 4], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 4], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:27:7e:66'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 31], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 167116, 'reachable_time': 27157, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 169448, 'error': None, 'target': 'ovnmeta-4f792301-cf2d-455d-8ad6-8a55cc3146e9', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  9 09:59:47 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:59:47.660 165637 DEBUG oslo.privsep.daemon [-] privsep: reply[66bc3cf1-0e5d-4b22-8f08-dd3b0e9e7bee]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  9 09:59:47 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:59:47.699 165637 DEBUG oslo.privsep.daemon [-] privsep: reply[38334e7f-2c8e-4623-8504-8b9da05e19cf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  9 09:59:47 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:59:47.700 71059 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4f792301-c0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  9 09:59:47 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:59:47.700 71059 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  9 09:59:47 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:59:47.700 71059 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap4f792301-c0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  9 09:59:47 compute-1 nova_compute[162974]: 2025-10-09 09:59:47.702 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 09:59:47 compute-1 NetworkManager[982]: <info>  [1760003987.7029] manager: (tap4f792301-c0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/52)
Oct  9 09:59:47 compute-1 kernel: tap4f792301-c0: entered promiscuous mode
Oct  9 09:59:47 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:59:47.706 71059 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap4f792301-c0, col_values=(('external_ids', {'iface-id': '704a96af-9e0f-4b61-9b53-029cbdc713e8'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  9 09:59:47 compute-1 ovn_controller[62080]: 2025-10-09T09:59:47Z|00070|binding|INFO|Releasing lport 704a96af-9e0f-4b61-9b53-029cbdc713e8 from this chassis (sb_readonly=0)
Oct  9 09:59:47 compute-1 nova_compute[162974]: 2025-10-09 09:59:47.708 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 09:59:47 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:59:47.709 71059 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/4f792301-cf2d-455d-8ad6-8a55cc3146e9.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/4f792301-cf2d-455d-8ad6-8a55cc3146e9.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct  9 09:59:47 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:59:47.710 165637 DEBUG oslo.privsep.daemon [-] privsep: reply[167bb61f-16b1-4721-95e1-4da06ccfbb5c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  9 09:59:47 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:59:47.710 71059 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct  9 09:59:47 compute-1 ovn_metadata_agent[71054]: global
Oct  9 09:59:47 compute-1 ovn_metadata_agent[71054]:    log         /dev/log local0 debug
Oct  9 09:59:47 compute-1 ovn_metadata_agent[71054]:    log-tag     haproxy-metadata-proxy-4f792301-cf2d-455d-8ad6-8a55cc3146e9
Oct  9 09:59:47 compute-1 ovn_metadata_agent[71054]:    user        root
Oct  9 09:59:47 compute-1 ovn_metadata_agent[71054]:    group       root
Oct  9 09:59:47 compute-1 ovn_metadata_agent[71054]:    maxconn     1024
Oct  9 09:59:47 compute-1 ovn_metadata_agent[71054]:    pidfile     /var/lib/neutron/external/pids/4f792301-cf2d-455d-8ad6-8a55cc3146e9.pid.haproxy
Oct  9 09:59:47 compute-1 ovn_metadata_agent[71054]:    daemon
Oct  9 09:59:47 compute-1 ovn_metadata_agent[71054]: 
Oct  9 09:59:47 compute-1 ovn_metadata_agent[71054]: defaults
Oct  9 09:59:47 compute-1 ovn_metadata_agent[71054]:    log global
Oct  9 09:59:47 compute-1 ovn_metadata_agent[71054]:    mode http
Oct  9 09:59:47 compute-1 ovn_metadata_agent[71054]:    option httplog
Oct  9 09:59:47 compute-1 ovn_metadata_agent[71054]:    option dontlognull
Oct  9 09:59:47 compute-1 ovn_metadata_agent[71054]:    option http-server-close
Oct  9 09:59:47 compute-1 ovn_metadata_agent[71054]:    option forwardfor
Oct  9 09:59:47 compute-1 ovn_metadata_agent[71054]:    retries                 3
Oct  9 09:59:47 compute-1 ovn_metadata_agent[71054]:    timeout http-request    30s
Oct  9 09:59:47 compute-1 ovn_metadata_agent[71054]:    timeout connect         30s
Oct  9 09:59:47 compute-1 ovn_metadata_agent[71054]:    timeout client          32s
Oct  9 09:59:47 compute-1 ovn_metadata_agent[71054]:    timeout server          32s
Oct  9 09:59:47 compute-1 ovn_metadata_agent[71054]:    timeout http-keep-alive 30s
Oct  9 09:59:47 compute-1 ovn_metadata_agent[71054]: 
Oct  9 09:59:47 compute-1 ovn_metadata_agent[71054]: 
Oct  9 09:59:47 compute-1 ovn_metadata_agent[71054]: listen listener
Oct  9 09:59:47 compute-1 ovn_metadata_agent[71054]:    bind 169.254.169.254:80
Oct  9 09:59:47 compute-1 ovn_metadata_agent[71054]:    server metadata /var/lib/neutron/metadata_proxy
Oct  9 09:59:47 compute-1 ovn_metadata_agent[71054]:    http-request add-header X-OVN-Network-ID 4f792301-cf2d-455d-8ad6-8a55cc3146e9
Oct  9 09:59:47 compute-1 ovn_metadata_agent[71054]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct  9 09:59:47 compute-1 ovn_metadata_agent[71054]: 2025-10-09 09:59:47.710 71059 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-4f792301-cf2d-455d-8ad6-8a55cc3146e9', 'env', 'PROCESS_TAG=haproxy-4f792301-cf2d-455d-8ad6-8a55cc3146e9', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/4f792301-cf2d-455d-8ad6-8a55cc3146e9.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct  9 09:59:47 compute-1 nova_compute[162974]: 2025-10-09 09:59:47.720 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 09:59:47 compute-1 nova_compute[162974]: 2025-10-09 09:59:47.731 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 09:59:47 compute-1 nova_compute[162974]: 2025-10-09 09:59:47.894 2 DEBUG nova.compute.manager [req-e828d4ab-0fc7-457f-8202-31398ad5fb38 req-23c6e4ef-b1b2-4d30-a32d-294db4adb2f0 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: c7e917a6-1f6f-4739-a31a-bdcfa52bf93b] Received event network-vif-plugged-1687cc87-5c7d-4d91-9386-d985ccc5f55f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  9 09:59:47 compute-1 nova_compute[162974]: 2025-10-09 09:59:47.895 2 DEBUG oslo_concurrency.lockutils [req-e828d4ab-0fc7-457f-8202-31398ad5fb38 req-23c6e4ef-b1b2-4d30-a32d-294db4adb2f0 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] Acquiring lock "c7e917a6-1f6f-4739-a31a-bdcfa52bf93b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  9 09:59:47 compute-1 nova_compute[162974]: 2025-10-09 09:59:47.895 2 DEBUG oslo_concurrency.lockutils [req-e828d4ab-0fc7-457f-8202-31398ad5fb38 req-23c6e4ef-b1b2-4d30-a32d-294db4adb2f0 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] Lock "c7e917a6-1f6f-4739-a31a-bdcfa52bf93b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  9 09:59:47 compute-1 nova_compute[162974]: 2025-10-09 09:59:47.895 2 DEBUG oslo_concurrency.lockutils [req-e828d4ab-0fc7-457f-8202-31398ad5fb38 req-23c6e4ef-b1b2-4d30-a32d-294db4adb2f0 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] Lock "c7e917a6-1f6f-4739-a31a-bdcfa52bf93b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  9 09:59:47 compute-1 nova_compute[162974]: 2025-10-09 09:59:47.896 2 DEBUG nova.compute.manager [req-e828d4ab-0fc7-457f-8202-31398ad5fb38 req-23c6e4ef-b1b2-4d30-a32d-294db4adb2f0 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: c7e917a6-1f6f-4739-a31a-bdcfa52bf93b] Processing event network-vif-plugged-1687cc87-5c7d-4d91-9386-d985ccc5f55f _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct  9 09:59:48 compute-1 podman[169518]: 2025-10-09 09:59:48.070421041 +0000 UTC m=+0.046297464 container create 1ad859c1f1bc0f2cc7bf7204a846bc9471f98934f862167209006dcb6bde23f5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-4f792301-cf2d-455d-8ad6-8a55cc3146e9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.license=GPLv2)
Oct  9 09:59:48 compute-1 systemd[1]: Started libpod-conmon-1ad859c1f1bc0f2cc7bf7204a846bc9471f98934f862167209006dcb6bde23f5.scope.
Oct  9 09:59:48 compute-1 systemd[1]: Started libcrun container.
Oct  9 09:59:48 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/002f1b4380ec721ede1d5a9d03dece1813f418232c10bc6615e335bd489013e6/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct  9 09:59:48 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:59:48 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:59:48 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:59:48.124 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:59:48 compute-1 podman[169518]: 2025-10-09 09:59:48.134385323 +0000 UTC m=+0.110261765 container init 1ad859c1f1bc0f2cc7bf7204a846bc9471f98934f862167209006dcb6bde23f5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-4f792301-cf2d-455d-8ad6-8a55cc3146e9, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.schema-version=1.0)
Oct  9 09:59:48 compute-1 podman[169518]: 2025-10-09 09:59:48.139778458 +0000 UTC m=+0.115654880 container start 1ad859c1f1bc0f2cc7bf7204a846bc9471f98934f862167209006dcb6bde23f5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-4f792301-cf2d-455d-8ad6-8a55cc3146e9, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true)
Oct  9 09:59:48 compute-1 podman[169518]: 2025-10-09 09:59:48.052639727 +0000 UTC m=+0.028516159 image pull 26280da617d52ac64ac1fa9a18a315d65ac237c1373028f8064008a821dbfd8d quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7
Oct  9 09:59:48 compute-1 neutron-haproxy-ovnmeta-4f792301-cf2d-455d-8ad6-8a55cc3146e9[169530]: [NOTICE]   (169534) : New worker (169536) forked
Oct  9 09:59:48 compute-1 neutron-haproxy-ovnmeta-4f792301-cf2d-455d-8ad6-8a55cc3146e9[169530]: [NOTICE]   (169534) : Loading success.
Oct  9 09:59:48 compute-1 nova_compute[162974]: 2025-10-09 09:59:48.220 2 DEBUG nova.compute.manager [None req-8f3c6407-a615-4702-afbf-2bdb3f177b1a 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: c7e917a6-1f6f-4739-a31a-bdcfa52bf93b] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct  9 09:59:48 compute-1 nova_compute[162974]: 2025-10-09 09:59:48.223 2 DEBUG nova.virt.driver [None req-433ad811-a058-4508-a429-c5f40f506b5b - - - - - -] Emitting event <LifecycleEvent: 1760003988.2228212, c7e917a6-1f6f-4739-a31a-bdcfa52bf93b => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  9 09:59:48 compute-1 nova_compute[162974]: 2025-10-09 09:59:48.223 2 INFO nova.compute.manager [None req-433ad811-a058-4508-a429-c5f40f506b5b - - - - - -] [instance: c7e917a6-1f6f-4739-a31a-bdcfa52bf93b] VM Started (Lifecycle Event)#033[00m
Oct  9 09:59:48 compute-1 nova_compute[162974]: 2025-10-09 09:59:48.225 2 DEBUG nova.virt.libvirt.driver [None req-8f3c6407-a615-4702-afbf-2bdb3f177b1a 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: c7e917a6-1f6f-4739-a31a-bdcfa52bf93b] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct  9 09:59:48 compute-1 nova_compute[162974]: 2025-10-09 09:59:48.228 2 INFO nova.virt.libvirt.driver [-] [instance: c7e917a6-1f6f-4739-a31a-bdcfa52bf93b] Instance spawned successfully.#033[00m
Oct  9 09:59:48 compute-1 nova_compute[162974]: 2025-10-09 09:59:48.228 2 DEBUG nova.virt.libvirt.driver [None req-8f3c6407-a615-4702-afbf-2bdb3f177b1a 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: c7e917a6-1f6f-4739-a31a-bdcfa52bf93b] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct  9 09:59:48 compute-1 nova_compute[162974]: 2025-10-09 09:59:48.242 2 DEBUG nova.compute.manager [None req-433ad811-a058-4508-a429-c5f40f506b5b - - - - - -] [instance: c7e917a6-1f6f-4739-a31a-bdcfa52bf93b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  9 09:59:48 compute-1 nova_compute[162974]: 2025-10-09 09:59:48.245 2 DEBUG nova.compute.manager [None req-433ad811-a058-4508-a429-c5f40f506b5b - - - - - -] [instance: c7e917a6-1f6f-4739-a31a-bdcfa52bf93b] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  9 09:59:48 compute-1 nova_compute[162974]: 2025-10-09 09:59:48.249 2 DEBUG nova.virt.libvirt.driver [None req-8f3c6407-a615-4702-afbf-2bdb3f177b1a 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: c7e917a6-1f6f-4739-a31a-bdcfa52bf93b] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  9 09:59:48 compute-1 nova_compute[162974]: 2025-10-09 09:59:48.249 2 DEBUG nova.virt.libvirt.driver [None req-8f3c6407-a615-4702-afbf-2bdb3f177b1a 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: c7e917a6-1f6f-4739-a31a-bdcfa52bf93b] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  9 09:59:48 compute-1 nova_compute[162974]: 2025-10-09 09:59:48.249 2 DEBUG nova.virt.libvirt.driver [None req-8f3c6407-a615-4702-afbf-2bdb3f177b1a 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: c7e917a6-1f6f-4739-a31a-bdcfa52bf93b] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  9 09:59:48 compute-1 nova_compute[162974]: 2025-10-09 09:59:48.250 2 DEBUG nova.virt.libvirt.driver [None req-8f3c6407-a615-4702-afbf-2bdb3f177b1a 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: c7e917a6-1f6f-4739-a31a-bdcfa52bf93b] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  9 09:59:48 compute-1 nova_compute[162974]: 2025-10-09 09:59:48.250 2 DEBUG nova.virt.libvirt.driver [None req-8f3c6407-a615-4702-afbf-2bdb3f177b1a 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: c7e917a6-1f6f-4739-a31a-bdcfa52bf93b] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  9 09:59:48 compute-1 nova_compute[162974]: 2025-10-09 09:59:48.251 2 DEBUG nova.virt.libvirt.driver [None req-8f3c6407-a615-4702-afbf-2bdb3f177b1a 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: c7e917a6-1f6f-4739-a31a-bdcfa52bf93b] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  9 09:59:48 compute-1 nova_compute[162974]: 2025-10-09 09:59:48.268 2 INFO nova.compute.manager [None req-433ad811-a058-4508-a429-c5f40f506b5b - - - - - -] [instance: c7e917a6-1f6f-4739-a31a-bdcfa52bf93b] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  9 09:59:48 compute-1 nova_compute[162974]: 2025-10-09 09:59:48.268 2 DEBUG nova.virt.driver [None req-433ad811-a058-4508-a429-c5f40f506b5b - - - - - -] Emitting event <LifecycleEvent: 1760003988.2232885, c7e917a6-1f6f-4739-a31a-bdcfa52bf93b => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  9 09:59:48 compute-1 nova_compute[162974]: 2025-10-09 09:59:48.268 2 INFO nova.compute.manager [None req-433ad811-a058-4508-a429-c5f40f506b5b - - - - - -] [instance: c7e917a6-1f6f-4739-a31a-bdcfa52bf93b] VM Paused (Lifecycle Event)#033[00m
Oct  9 09:59:48 compute-1 nova_compute[162974]: 2025-10-09 09:59:48.286 2 DEBUG nova.compute.manager [None req-433ad811-a058-4508-a429-c5f40f506b5b - - - - - -] [instance: c7e917a6-1f6f-4739-a31a-bdcfa52bf93b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  9 09:59:48 compute-1 nova_compute[162974]: 2025-10-09 09:59:48.289 2 DEBUG nova.virt.driver [None req-433ad811-a058-4508-a429-c5f40f506b5b - - - - - -] Emitting event <LifecycleEvent: 1760003988.2233796, c7e917a6-1f6f-4739-a31a-bdcfa52bf93b => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  9 09:59:48 compute-1 nova_compute[162974]: 2025-10-09 09:59:48.289 2 INFO nova.compute.manager [None req-433ad811-a058-4508-a429-c5f40f506b5b - - - - - -] [instance: c7e917a6-1f6f-4739-a31a-bdcfa52bf93b] VM Resumed (Lifecycle Event)#033[00m
Oct  9 09:59:48 compute-1 nova_compute[162974]: 2025-10-09 09:59:48.295 2 INFO nova.compute.manager [None req-8f3c6407-a615-4702-afbf-2bdb3f177b1a 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: c7e917a6-1f6f-4739-a31a-bdcfa52bf93b] Took 5.93 seconds to spawn the instance on the hypervisor.#033[00m
Oct  9 09:59:48 compute-1 nova_compute[162974]: 2025-10-09 09:59:48.296 2 DEBUG nova.compute.manager [None req-8f3c6407-a615-4702-afbf-2bdb3f177b1a 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: c7e917a6-1f6f-4739-a31a-bdcfa52bf93b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  9 09:59:48 compute-1 nova_compute[162974]: 2025-10-09 09:59:48.303 2 DEBUG nova.compute.manager [None req-433ad811-a058-4508-a429-c5f40f506b5b - - - - - -] [instance: c7e917a6-1f6f-4739-a31a-bdcfa52bf93b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  9 09:59:48 compute-1 nova_compute[162974]: 2025-10-09 09:59:48.305 2 DEBUG nova.compute.manager [None req-433ad811-a058-4508-a429-c5f40f506b5b - - - - - -] [instance: c7e917a6-1f6f-4739-a31a-bdcfa52bf93b] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  9 09:59:48 compute-1 nova_compute[162974]: 2025-10-09 09:59:48.323 2 INFO nova.compute.manager [None req-433ad811-a058-4508-a429-c5f40f506b5b - - - - - -] [instance: c7e917a6-1f6f-4739-a31a-bdcfa52bf93b] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  9 09:59:48 compute-1 nova_compute[162974]: 2025-10-09 09:59:48.340 2 INFO nova.compute.manager [None req-8f3c6407-a615-4702-afbf-2bdb3f177b1a 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: c7e917a6-1f6f-4739-a31a-bdcfa52bf93b] Took 6.56 seconds to build instance.#033[00m
Oct  9 09:59:48 compute-1 nova_compute[162974]: 2025-10-09 09:59:48.349 2 DEBUG oslo_concurrency.lockutils [None req-8f3c6407-a615-4702-afbf-2bdb3f177b1a 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Lock "c7e917a6-1f6f-4739-a31a-bdcfa52bf93b" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 6.608s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  9 09:59:48 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:59:48 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:59:48 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:59:48.946 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:59:49 compute-1 nova_compute[162974]: 2025-10-09 09:59:49.952 2 DEBUG nova.compute.manager [req-aa8a951d-4d2a-4dc7-a6e3-1547ada4b715 req-e207e064-62d4-494a-92b5-99e687d5e4a2 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: c7e917a6-1f6f-4739-a31a-bdcfa52bf93b] Received event network-vif-plugged-1687cc87-5c7d-4d91-9386-d985ccc5f55f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  9 09:59:49 compute-1 nova_compute[162974]: 2025-10-09 09:59:49.952 2 DEBUG oslo_concurrency.lockutils [req-aa8a951d-4d2a-4dc7-a6e3-1547ada4b715 req-e207e064-62d4-494a-92b5-99e687d5e4a2 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] Acquiring lock "c7e917a6-1f6f-4739-a31a-bdcfa52bf93b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  9 09:59:49 compute-1 nova_compute[162974]: 2025-10-09 09:59:49.952 2 DEBUG oslo_concurrency.lockutils [req-aa8a951d-4d2a-4dc7-a6e3-1547ada4b715 req-e207e064-62d4-494a-92b5-99e687d5e4a2 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] Lock "c7e917a6-1f6f-4739-a31a-bdcfa52bf93b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  9 09:59:49 compute-1 nova_compute[162974]: 2025-10-09 09:59:49.952 2 DEBUG oslo_concurrency.lockutils [req-aa8a951d-4d2a-4dc7-a6e3-1547ada4b715 req-e207e064-62d4-494a-92b5-99e687d5e4a2 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] Lock "c7e917a6-1f6f-4739-a31a-bdcfa52bf93b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  9 09:59:49 compute-1 nova_compute[162974]: 2025-10-09 09:59:49.953 2 DEBUG nova.compute.manager [req-aa8a951d-4d2a-4dc7-a6e3-1547ada4b715 req-e207e064-62d4-494a-92b5-99e687d5e4a2 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: c7e917a6-1f6f-4739-a31a-bdcfa52bf93b] No waiting events found dispatching network-vif-plugged-1687cc87-5c7d-4d91-9386-d985ccc5f55f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  9 09:59:49 compute-1 nova_compute[162974]: 2025-10-09 09:59:49.953 2 WARNING nova.compute.manager [req-aa8a951d-4d2a-4dc7-a6e3-1547ada4b715 req-e207e064-62d4-494a-92b5-99e687d5e4a2 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: c7e917a6-1f6f-4739-a31a-bdcfa52bf93b] Received unexpected event network-vif-plugged-1687cc87-5c7d-4d91-9386-d985ccc5f55f for instance with vm_state active and task_state None.#033[00m
Oct  9 09:59:50 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:59:50 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:59:50 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:59:50.127 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:59:50 compute-1 podman[169542]: 2025-10-09 09:59:50.553010745 +0000 UTC m=+0.061697650 container health_status dd7a5da700b8ba72ceba21d69aecf00384a339e317089fce3f8212a1183599aa (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=iscsid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3)
Oct  9 09:59:50 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:59:50 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:59:50 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 09:59:50 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:59:50.948 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 09:59:51 compute-1 nova_compute[162974]: 2025-10-09 09:59:51.110 2 DEBUG oslo_service.periodic_task [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  9 09:59:51 compute-1 nova_compute[162974]: 2025-10-09 09:59:51.957 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 09:59:52 compute-1 nova_compute[162974]: 2025-10-09 09:59:52.113 2 DEBUG oslo_service.periodic_task [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  9 09:59:52 compute-1 nova_compute[162974]: 2025-10-09 09:59:52.114 2 DEBUG oslo_service.periodic_task [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  9 09:59:52 compute-1 nova_compute[162974]: 2025-10-09 09:59:52.114 2 DEBUG nova.compute.manager [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  9 09:59:52 compute-1 nova_compute[162974]: 2025-10-09 09:59:52.114 2 DEBUG oslo_service.periodic_task [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  9 09:59:52 compute-1 nova_compute[162974]: 2025-10-09 09:59:52.130 2 DEBUG oslo_concurrency.lockutils [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  9 09:59:52 compute-1 nova_compute[162974]: 2025-10-09 09:59:52.130 2 DEBUG oslo_concurrency.lockutils [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  9 09:59:52 compute-1 nova_compute[162974]: 2025-10-09 09:59:52.130 2 DEBUG oslo_concurrency.lockutils [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  9 09:59:52 compute-1 nova_compute[162974]: 2025-10-09 09:59:52.130 2 DEBUG nova.compute.resource_tracker [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  9 09:59:52 compute-1 nova_compute[162974]: 2025-10-09 09:59:52.130 2 DEBUG oslo_concurrency.processutils [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  9 09:59:52 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:59:52 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:59:52 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:59:52.130 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:59:52 compute-1 ceph-mon[9795]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct  9 09:59:52 compute-1 ceph-mon[9795]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1780233837' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  9 09:59:52 compute-1 nova_compute[162974]: 2025-10-09 09:59:52.498 2 DEBUG oslo_concurrency.processutils [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.367s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  9 09:59:52 compute-1 nova_compute[162974]: 2025-10-09 09:59:52.548 2 DEBUG nova.virt.libvirt.driver [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] skipping disk for instance-00000007 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  9 09:59:52 compute-1 nova_compute[162974]: 2025-10-09 09:59:52.548 2 DEBUG nova.virt.libvirt.driver [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] skipping disk for instance-00000007 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  9 09:59:52 compute-1 nova_compute[162974]: 2025-10-09 09:59:52.733 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 09:59:52 compute-1 nova_compute[162974]: 2025-10-09 09:59:52.770 2 WARNING nova.virt.libvirt.driver [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  9 09:59:52 compute-1 nova_compute[162974]: 2025-10-09 09:59:52.770 2 DEBUG nova.compute.resource_tracker [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4873MB free_disk=59.92177200317383GB free_vcpus=3 pci_devices=[{"dev_id": "pci_0000_00_03_3", "address": "0000:00:03.3", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_0", "address": "0000:00:1f.0", "product_id": "2918", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2918", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_4", "address": "0000:00:03.4", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_7", "address": "0000:00:03.7", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_3", "address": "0000:00:02.3", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_7", "address": "0000:00:02.7", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_05_00_0", "address": "0000:05:00.0", "product_id": "1045", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1045", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_2", "address": "0000:00:03.2", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_6", "address": "0000:00:02.6", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_3", "address": "0000:00:1f.3", "product_id": "2930", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2930", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_2", "address": "0000:00:1f.2", "product_id": "2922", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2922", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_03_00_0", "address": "0000:03:00.0", "product_id": "1041", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1041", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_2", "address": "0000:00:02.2", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_5", "address": "0000:00:02.5", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_04_00_0", "address": "0000:04:00.0", "product_id": "1042", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1042", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_5", "address": "0000:00:03.5", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_07_00_0", "address": "0000:07:00.0", "product_id": "1041", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1041", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_1", "address": "0000:00:03.1", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "29c0", "vendor_id": "8086", "numa_node": null, "label": "label_8086_29c0", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_06_00_0", "address": "0000:06:00.0", "product_id": "1044", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1044", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_02_01_0", "address": "0000:02:01.0", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_6", "address": "0000:00:03.6", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_1", "address": "0000:00:02.1", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_4", "address": "0000:00:02.4", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_01_00_0", "address": "0000:01:00.0", "product_id": "000e", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000e", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  9 09:59:52 compute-1 nova_compute[162974]: 2025-10-09 09:59:52.771 2 DEBUG oslo_concurrency.lockutils [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  9 09:59:52 compute-1 nova_compute[162974]: 2025-10-09 09:59:52.771 2 DEBUG oslo_concurrency.lockutils [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  9 09:59:52 compute-1 nova_compute[162974]: 2025-10-09 09:59:52.828 2 DEBUG nova.compute.resource_tracker [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Instance c7e917a6-1f6f-4739-a31a-bdcfa52bf93b actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct  9 09:59:52 compute-1 nova_compute[162974]: 2025-10-09 09:59:52.828 2 DEBUG nova.compute.resource_tracker [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Total usable vcpus: 4, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  9 09:59:52 compute-1 nova_compute[162974]: 2025-10-09 09:59:52.829 2 DEBUG nova.compute.resource_tracker [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7680MB used_ram=640MB phys_disk=59GB used_disk=1GB total_vcpus=4 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  9 09:59:52 compute-1 nova_compute[162974]: 2025-10-09 09:59:52.852 2 DEBUG oslo_concurrency.processutils [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  9 09:59:52 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:59:52 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000009s ======
Oct  9 09:59:52 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:59:52.950 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Oct  9 09:59:53 compute-1 ceph-mon[9795]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct  9 09:59:53 compute-1 ceph-mon[9795]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3096639622' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  9 09:59:53 compute-1 nova_compute[162974]: 2025-10-09 09:59:53.228 2 DEBUG oslo_concurrency.processutils [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.376s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  9 09:59:53 compute-1 nova_compute[162974]: 2025-10-09 09:59:53.232 2 DEBUG nova.compute.provider_tree [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Inventory has not changed in ProviderTree for provider: 79aa81b0-5a5d-4643-a355-ec5461cb321a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  9 09:59:53 compute-1 nova_compute[162974]: 2025-10-09 09:59:53.243 2 DEBUG nova.scheduler.client.report [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Inventory has not changed for provider 79aa81b0-5a5d-4643-a355-ec5461cb321a based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 4, 'reserved': 0, 'min_unit': 1, 'max_unit': 4, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  9 09:59:53 compute-1 nova_compute[162974]: 2025-10-09 09:59:53.264 2 DEBUG nova.compute.resource_tracker [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  9 09:59:53 compute-1 nova_compute[162974]: 2025-10-09 09:59:53.264 2 DEBUG oslo_concurrency.lockutils [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.493s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  9 09:59:54 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:59:54 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:59:54 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:59:54.133 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:59:54 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:59:54 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000009s ======
Oct  9 09:59:54 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:59:54.953 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Oct  9 09:59:55 compute-1 nova_compute[162974]: 2025-10-09 09:59:55.265 2 DEBUG oslo_service.periodic_task [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  9 09:59:55 compute-1 nova_compute[162974]: 2025-10-09 09:59:55.265 2 DEBUG nova.compute.manager [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  9 09:59:55 compute-1 nova_compute[162974]: 2025-10-09 09:59:55.265 2 DEBUG nova.compute.manager [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct  9 09:59:55 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:59:55 compute-1 nova_compute[162974]: 2025-10-09 09:59:55.758 2 DEBUG oslo_concurrency.lockutils [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Acquiring lock "refresh_cache-c7e917a6-1f6f-4739-a31a-bdcfa52bf93b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  9 09:59:55 compute-1 nova_compute[162974]: 2025-10-09 09:59:55.758 2 DEBUG oslo_concurrency.lockutils [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Acquired lock "refresh_cache-c7e917a6-1f6f-4739-a31a-bdcfa52bf93b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  9 09:59:55 compute-1 nova_compute[162974]: 2025-10-09 09:59:55.758 2 DEBUG nova.network.neutron [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] [instance: c7e917a6-1f6f-4739-a31a-bdcfa52bf93b] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Oct  9 09:59:55 compute-1 nova_compute[162974]: 2025-10-09 09:59:55.758 2 DEBUG nova.objects.instance [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Lazy-loading 'info_cache' on Instance uuid c7e917a6-1f6f-4739-a31a-bdcfa52bf93b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  9 09:59:56 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:59:56 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:59:56 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:59:56.136 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:59:56 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:59:56 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:59:56 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:59:56.955 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:59:56 compute-1 nova_compute[162974]: 2025-10-09 09:59:56.960 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 09:59:57 compute-1 nova_compute[162974]: 2025-10-09 09:59:57.096 2 DEBUG nova.network.neutron [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] [instance: c7e917a6-1f6f-4739-a31a-bdcfa52bf93b] Updating instance_info_cache with network_info: [{"id": "1687cc87-5c7d-4d91-9386-d985ccc5f55f", "address": "fa:16:3e:18:30:66", "network": {"id": "4f792301-cf2d-455d-8ad6-8a55cc3146e9", "bridge": "br-int", "label": "tempest-network-smoke--166756604", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.22", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c69d102fb5504f48809f5fc47f1cb831", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1687cc87-5c", "ovs_interfaceid": "1687cc87-5c7d-4d91-9386-d985ccc5f55f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  9 09:59:57 compute-1 nova_compute[162974]: 2025-10-09 09:59:57.110 2 DEBUG oslo_concurrency.lockutils [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Releasing lock "refresh_cache-c7e917a6-1f6f-4739-a31a-bdcfa52bf93b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  9 09:59:57 compute-1 nova_compute[162974]: 2025-10-09 09:59:57.110 2 DEBUG nova.compute.manager [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] [instance: c7e917a6-1f6f-4739-a31a-bdcfa52bf93b] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Oct  9 09:59:57 compute-1 nova_compute[162974]: 2025-10-09 09:59:57.110 2 DEBUG oslo_service.periodic_task [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  9 09:59:57 compute-1 nova_compute[162974]: 2025-10-09 09:59:57.111 2 DEBUG oslo_service.periodic_task [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  9 09:59:57 compute-1 nova_compute[162974]: 2025-10-09 09:59:57.111 2 DEBUG oslo_service.periodic_task [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  9 09:59:57 compute-1 nova_compute[162974]: 2025-10-09 09:59:57.111 2 DEBUG oslo_service.periodic_task [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  9 09:59:57 compute-1 nova_compute[162974]: 2025-10-09 09:59:57.735 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 09:59:58 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:59:58 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:59:58 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:09:59:58.139 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:59:58 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 09:59:58 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:59:58 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:09:59:58.958 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:59:59 compute-1 podman[169635]: 2025-10-09 09:59:59.551255341 +0000 UTC m=+0.053905053 container health_status a9976f6a2312783f72012e1ec9b07f14297aa62297711f47c0d257996be3427a (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, managed_by=edpm_ansible)
Oct  9 09:59:59 compute-1 podman[169634]: 2025-10-09 09:59:59.573302065 +0000 UTC m=+0.075911642 container health_status 5e1150c93314d5b38815fc8130e03b03cc91771cd442ce724d63cd33a7373f75 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, 
maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true)
Oct  9 10:00:00 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:00:00 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:00:00 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:00:00.142 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:00:00 compute-1 ovn_controller[62080]: 2025-10-09T10:00:00Z|00012|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:18:30:66 10.100.0.22
Oct  9 10:00:00 compute-1 ovn_controller[62080]: 2025-10-09T10:00:00Z|00013|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:18:30:66 10.100.0.22
Oct  9 10:00:00 compute-1 ceph-mon[9795]: overall HEALTH_WARN 1 failed cephadm daemon(s)
Oct  9 10:00:00 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 10:00:00 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:00:00 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 10:00:00 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:00:00.960 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 10:00:01 compute-1 nova_compute[162974]: 2025-10-09 10:00:01.962 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:00:02 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:00:02 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:00:02 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:00:02.145 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:00:02 compute-1 nova_compute[162974]: 2025-10-09 10:00:02.738 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:00:02 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:00:02 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:00:02 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:00:02.963 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:00:04 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:00:04 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:00:04 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:00:04.147 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:00:04 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:00:04 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:00:04 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:00:04.965 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:00:05 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 10:00:06 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:00:06 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:00:06 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:00:06.150 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:00:06 compute-1 nova_compute[162974]: 2025-10-09 10:00:06.964 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:00:06 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:00:06 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:00:06 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:00:06.967 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:00:07 compute-1 nova_compute[162974]: 2025-10-09 10:00:07.739 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:00:08 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:00:08 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:00:08 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:00:08.153 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:00:08 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:00:08 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:00:08 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:00:08.969 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:00:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 10:00:10.038 71059 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  9 10:00:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 10:00:10.039 71059 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  9 10:00:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 10:00:10.039 71059 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  9 10:00:10 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:00:10 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000009s ======
Oct  9 10:00:10 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:00:10.157 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Oct  9 10:00:10 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 10:00:10 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:00:10 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:00:10 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:00:10.973 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:00:11 compute-1 systemd[1]: Starting system activity accounting tool...
Oct  9 10:00:11 compute-1 systemd[1]: sysstat-collect.service: Deactivated successfully.
Oct  9 10:00:11 compute-1 systemd[1]: Finished system activity accounting tool.
Oct  9 10:00:11 compute-1 podman[169675]: 2025-10-09 10:00:11.578511691 +0000 UTC m=+0.070135163 container health_status 36bf385830b85e085dc3ff9729118ef141f0f1a0fdc2513f7947ab6ab795421e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, container_name=ovn_controller, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Oct  9 10:00:11 compute-1 nova_compute[162974]: 2025-10-09 10:00:11.966 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:00:12 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:00:12 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:00:12 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:00:12.161 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:00:12 compute-1 nova_compute[162974]: 2025-10-09 10:00:12.741 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:00:12 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:00:12 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:00:12 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:00:12.976 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:00:14 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:00:14 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:00:14 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:00:14.164 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:00:14 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:00:14 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:00:14 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:00:14.978 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:00:15 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 10:00:16 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:00:16 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:00:16 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:00:16.167 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:00:16 compute-1 nova_compute[162974]: 2025-10-09 10:00:16.969 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:00:16 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:00:16 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:00:16 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:00:16.981 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:00:17 compute-1 nova_compute[162974]: 2025-10-09 10:00:17.742 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:00:18 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:00:18 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000009s ======
Oct  9 10:00:18 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:00:18.169 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Oct  9 10:00:18 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:00:18 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:00:18 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:00:18.984 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:00:20 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:00:20 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:00:20 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:00:20.171 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:00:20 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 10:00:20 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:00:20 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:00:20 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:00:20.986 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:00:21 compute-1 ceph-mon[9795]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #46. Immutable memtables: 0.
Oct  9 10:00:21 compute-1 ceph-mon[9795]: rocksdb: (Original Log Time 2025/10/09-10:00:21.394548) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct  9 10:00:21 compute-1 ceph-mon[9795]: rocksdb: [db/flush_job.cc:856] [default] [JOB 25] Flushing memtable with next log file: 46
Oct  9 10:00:21 compute-1 ceph-mon[9795]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760004021394609, "job": 25, "event": "flush_started", "num_memtables": 1, "num_entries": 589, "num_deletes": 257, "total_data_size": 926082, "memory_usage": 939096, "flush_reason": "Manual Compaction"}
Oct  9 10:00:21 compute-1 ceph-mon[9795]: rocksdb: [db/flush_job.cc:885] [default] [JOB 25] Level-0 flush table #47: started
Oct  9 10:00:21 compute-1 ceph-mon[9795]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760004021398488, "cf_name": "default", "job": 25, "event": "table_file_creation", "file_number": 47, "file_size": 609209, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 24633, "largest_seqno": 25217, "table_properties": {"data_size": 606288, "index_size": 893, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 965, "raw_key_size": 6684, "raw_average_key_size": 17, "raw_value_size": 600364, "raw_average_value_size": 1592, "num_data_blocks": 41, "num_entries": 377, "num_filter_entries": 377, "num_deletions": 257, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760003987, "oldest_key_time": 1760003987, "file_creation_time": 1760004021, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "94a5d839-0858-4e7b-94a4-0a54b15338db", "db_session_id": "M9CZJU0HKVV71NP1SGV8", "orig_file_number": 47, "seqno_to_time_mapping": "N/A"}}
Oct  9 10:00:21 compute-1 ceph-mon[9795]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 25] Flush lasted 3955 microseconds, and 3238 cpu microseconds.
Oct  9 10:00:21 compute-1 ceph-mon[9795]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  9 10:00:21 compute-1 ceph-mon[9795]: rocksdb: (Original Log Time 2025/10/09-10:00:21.398516) [db/flush_job.cc:967] [default] [JOB 25] Level-0 flush table #47: 609209 bytes OK
Oct  9 10:00:21 compute-1 ceph-mon[9795]: rocksdb: (Original Log Time 2025/10/09-10:00:21.398529) [db/memtable_list.cc:519] [default] Level-0 commit table #47 started
Oct  9 10:00:21 compute-1 ceph-mon[9795]: rocksdb: (Original Log Time 2025/10/09-10:00:21.398920) [db/memtable_list.cc:722] [default] Level-0 commit table #47: memtable #1 done
Oct  9 10:00:21 compute-1 ceph-mon[9795]: rocksdb: (Original Log Time 2025/10/09-10:00:21.398934) EVENT_LOG_v1 {"time_micros": 1760004021398930, "job": 25, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct  9 10:00:21 compute-1 ceph-mon[9795]: rocksdb: (Original Log Time 2025/10/09-10:00:21.398956) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct  9 10:00:21 compute-1 ceph-mon[9795]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 25] Try to delete WAL files size 922698, prev total WAL file size 922698, number of live WAL files 2.
Oct  9 10:00:21 compute-1 ceph-mon[9795]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000043.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  9 10:00:21 compute-1 ceph-mon[9795]: rocksdb: (Original Log Time 2025/10/09-10:00:21.399271) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D00323532' seq:72057594037927935, type:22 .. '6C6F676D00353035' seq:0, type:0; will stop at (end)
Oct  9 10:00:21 compute-1 ceph-mon[9795]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 26] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct  9 10:00:21 compute-1 ceph-mon[9795]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 25 Base level 0, inputs: [47(594KB)], [45(12MB)]
Oct  9 10:00:21 compute-1 ceph-mon[9795]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760004021399301, "job": 26, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [47], "files_L6": [45], "score": -1, "input_data_size": 13674824, "oldest_snapshot_seqno": -1}
Oct  9 10:00:21 compute-1 ceph-mon[9795]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 26] Generated table #48: 5445 keys, 13531452 bytes, temperature: kUnknown
Oct  9 10:00:21 compute-1 ceph-mon[9795]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760004021436576, "cf_name": "default", "job": 26, "event": "table_file_creation", "file_number": 48, "file_size": 13531452, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 13494739, "index_size": 22011, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 13637, "raw_key_size": 138681, "raw_average_key_size": 25, "raw_value_size": 13395532, "raw_average_value_size": 2460, "num_data_blocks": 899, "num_entries": 5445, "num_filter_entries": 5445, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760002515, "oldest_key_time": 0, "file_creation_time": 1760004021, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "94a5d839-0858-4e7b-94a4-0a54b15338db", "db_session_id": "M9CZJU0HKVV71NP1SGV8", "orig_file_number": 48, "seqno_to_time_mapping": "N/A"}}
Oct  9 10:00:21 compute-1 ceph-mon[9795]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  9 10:00:21 compute-1 ceph-mon[9795]: rocksdb: (Original Log Time 2025/10/09-10:00:21.437023) [db/compaction/compaction_job.cc:1663] [default] [JOB 26] Compacted 1@0 + 1@6 files to L6 => 13531452 bytes
Oct  9 10:00:21 compute-1 ceph-mon[9795]: rocksdb: (Original Log Time 2025/10/09-10:00:21.437558) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 364.2 rd, 360.4 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.6, 12.5 +0.0 blob) out(12.9 +0.0 blob), read-write-amplify(44.7) write-amplify(22.2) OK, records in: 5967, records dropped: 522 output_compression: NoCompression
Oct  9 10:00:21 compute-1 ceph-mon[9795]: rocksdb: (Original Log Time 2025/10/09-10:00:21.437585) EVENT_LOG_v1 {"time_micros": 1760004021437577, "job": 26, "event": "compaction_finished", "compaction_time_micros": 37550, "compaction_time_cpu_micros": 23481, "output_level": 6, "num_output_files": 1, "total_output_size": 13531452, "num_input_records": 5967, "num_output_records": 5445, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct  9 10:00:21 compute-1 ceph-mon[9795]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000047.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  9 10:00:21 compute-1 ceph-mon[9795]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760004021438149, "job": 26, "event": "table_file_deletion", "file_number": 47}
Oct  9 10:00:21 compute-1 ceph-mon[9795]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000045.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  9 10:00:21 compute-1 ceph-mon[9795]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760004021440502, "job": 26, "event": "table_file_deletion", "file_number": 45}
Oct  9 10:00:21 compute-1 ceph-mon[9795]: rocksdb: (Original Log Time 2025/10/09-10:00:21.399233) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  9 10:00:21 compute-1 ceph-mon[9795]: rocksdb: (Original Log Time 2025/10/09-10:00:21.440731) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  9 10:00:21 compute-1 ceph-mon[9795]: rocksdb: (Original Log Time 2025/10/09-10:00:21.440736) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  9 10:00:21 compute-1 ceph-mon[9795]: rocksdb: (Original Log Time 2025/10/09-10:00:21.440737) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  9 10:00:21 compute-1 ceph-mon[9795]: rocksdb: (Original Log Time 2025/10/09-10:00:21.440739) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  9 10:00:21 compute-1 ceph-mon[9795]: rocksdb: (Original Log Time 2025/10/09-10:00:21.440740) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  9 10:00:21 compute-1 podman[169753]: 2025-10-09 10:00:21.527202854 +0000 UTC m=+0.050733695 container health_status dd7a5da700b8ba72ceba21d69aecf00384a339e317089fce3f8212a1183599aa (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=iscsid, container_name=iscsid, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Oct  9 10:00:21 compute-1 nova_compute[162974]: 2025-10-09 10:00:21.972 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:00:22 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:00:22 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:00:22 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:00:22.173 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:00:22 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  9 10:00:22 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 10:00:22 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 10:00:22 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  9 10:00:22 compute-1 nova_compute[162974]: 2025-10-09 10:00:22.745 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:00:22 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:00:22 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 10:00:22 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:00:22.990 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 10:00:24 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:00:24 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:00:24 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:00:24.175 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:00:24 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:00:24 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 10:00:24 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:00:24.993 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 10:00:25 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 10:00:26 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:00:26 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.002000018s ======
Oct  9 10:00:26 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:00:26.177 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000018s
Oct  9 10:00:26 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 10:00:26 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 10:00:26 compute-1 nova_compute[162974]: 2025-10-09 10:00:26.976 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:00:26 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:00:26 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000009s ======
Oct  9 10:00:26 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:00:26.996 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Oct  9 10:00:27 compute-1 nova_compute[162974]: 2025-10-09 10:00:27.747 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:00:28 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:00:28 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 10:00:28 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:00:28.181 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 10:00:29 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:00:29 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:00:29 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:00:28.999 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:00:30 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:00:30 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:00:30 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:00:30.183 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:00:30 compute-1 podman[169854]: 2025-10-09 10:00:30.540438252 +0000 UTC m=+0.042933747 container health_status 5e1150c93314d5b38815fc8130e03b03cc91771cd442ce724d63cd33a7373f75 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  9 10:00:30 compute-1 podman[169855]: 2025-10-09 10:00:30.548070455 +0000 UTC m=+0.048955996 container health_status a9976f6a2312783f72012e1ec9b07f14297aa62297711f47c0d257996be3427a (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_managed=true, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  9 10:00:30 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 10:00:31 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:00:31 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:00:31 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:00:31.001 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:00:31 compute-1 nova_compute[162974]: 2025-10-09 10:00:31.979 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:00:32 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:00:32 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000009s ======
Oct  9 10:00:32 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:00:32.185 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Oct  9 10:00:32 compute-1 nova_compute[162974]: 2025-10-09 10:00:32.748 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:00:33 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:00:33 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 10:00:33 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:00:33.003 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 10:00:34 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:00:34 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 10:00:34 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:00:34.187 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 10:00:35 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:00:35 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:00:35 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:00:35.005 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:00:35 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 10:00:36 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:00:36 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 10:00:36 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:00:36.190 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 10:00:36 compute-1 nova_compute[162974]: 2025-10-09 10:00:36.983 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:00:37 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:00:37 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:00:37 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:00:37.008 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:00:37 compute-1 nova_compute[162974]: 2025-10-09 10:00:37.750 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:00:38 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:00:38 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 10:00:38 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:00:38.192 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 10:00:38 compute-1 ovn_controller[62080]: 2025-10-09T10:00:38Z|00071|memory_trim|INFO|Detected inactivity (last active 30011 ms ago): trimming memory
Oct  9 10:00:39 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:00:39 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:00:39 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:00:39.010 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:00:40 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:00:40 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:00:40 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:00:40.194 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:00:40 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 10:00:41 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:00:41 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 10:00:41 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:00:41.012 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 10:00:41 compute-1 nova_compute[162974]: 2025-10-09 10:00:41.985 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:00:42 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:00:42 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:00:42 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:00:42.196 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:00:42 compute-1 podman[169918]: 2025-10-09 10:00:42.570881256 +0000 UTC m=+0.070888647 container health_status 36bf385830b85e085dc3ff9729118ef141f0f1a0fdc2513f7947ab6ab795421e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, container_name=ovn_controller)
Oct  9 10:00:42 compute-1 nova_compute[162974]: 2025-10-09 10:00:42.754 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:00:43 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:00:43 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000009s ======
Oct  9 10:00:43 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:00:43.015 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Oct  9 10:00:44 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:00:44 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:00:44 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:00:44.198 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:00:45 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:00:45 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:00:45 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:00:45.018 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:00:45 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 10:00:46 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:00:46 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000009s ======
Oct  9 10:00:46 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:00:46.200 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Oct  9 10:00:46 compute-1 nova_compute[162974]: 2025-10-09 10:00:46.628 2 DEBUG oslo_concurrency.lockutils [None req-d39975fa-5983-468a-a04d-ec9b8a2a7811 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Acquiring lock "c7e917a6-1f6f-4739-a31a-bdcfa52bf93b" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  9 10:00:46 compute-1 nova_compute[162974]: 2025-10-09 10:00:46.628 2 DEBUG oslo_concurrency.lockutils [None req-d39975fa-5983-468a-a04d-ec9b8a2a7811 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Lock "c7e917a6-1f6f-4739-a31a-bdcfa52bf93b" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  9 10:00:46 compute-1 nova_compute[162974]: 2025-10-09 10:00:46.628 2 DEBUG oslo_concurrency.lockutils [None req-d39975fa-5983-468a-a04d-ec9b8a2a7811 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Acquiring lock "c7e917a6-1f6f-4739-a31a-bdcfa52bf93b-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  9 10:00:46 compute-1 nova_compute[162974]: 2025-10-09 10:00:46.628 2 DEBUG oslo_concurrency.lockutils [None req-d39975fa-5983-468a-a04d-ec9b8a2a7811 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Lock "c7e917a6-1f6f-4739-a31a-bdcfa52bf93b-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  9 10:00:46 compute-1 nova_compute[162974]: 2025-10-09 10:00:46.629 2 DEBUG oslo_concurrency.lockutils [None req-d39975fa-5983-468a-a04d-ec9b8a2a7811 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Lock "c7e917a6-1f6f-4739-a31a-bdcfa52bf93b-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  9 10:00:46 compute-1 nova_compute[162974]: 2025-10-09 10:00:46.630 2 INFO nova.compute.manager [None req-d39975fa-5983-468a-a04d-ec9b8a2a7811 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: c7e917a6-1f6f-4739-a31a-bdcfa52bf93b] Terminating instance#033[00m
Oct  9 10:00:46 compute-1 nova_compute[162974]: 2025-10-09 10:00:46.631 2 DEBUG nova.compute.manager [None req-d39975fa-5983-468a-a04d-ec9b8a2a7811 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: c7e917a6-1f6f-4739-a31a-bdcfa52bf93b] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct  9 10:00:46 compute-1 kernel: tap1687cc87-5c (unregistering): left promiscuous mode
Oct  9 10:00:46 compute-1 NetworkManager[982]: <info>  [1760004046.6733] device (tap1687cc87-5c): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  9 10:00:46 compute-1 ovn_controller[62080]: 2025-10-09T10:00:46Z|00072|binding|INFO|Releasing lport 1687cc87-5c7d-4d91-9386-d985ccc5f55f from this chassis (sb_readonly=0)
Oct  9 10:00:46 compute-1 ovn_controller[62080]: 2025-10-09T10:00:46Z|00073|binding|INFO|Setting lport 1687cc87-5c7d-4d91-9386-d985ccc5f55f down in Southbound
Oct  9 10:00:46 compute-1 ovn_controller[62080]: 2025-10-09T10:00:46Z|00074|binding|INFO|Removing iface tap1687cc87-5c ovn-installed in OVS
Oct  9 10:00:46 compute-1 nova_compute[162974]: 2025-10-09 10:00:46.683 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:00:46 compute-1 nova_compute[162974]: 2025-10-09 10:00:46.685 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:00:46 compute-1 ovn_metadata_agent[71054]: 2025-10-09 10:00:46.691 71059 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:18:30:66 10.100.0.22'], port_security=['fa:16:3e:18:30:66 10.100.0.22'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.22/28', 'neutron:device_id': 'c7e917a6-1f6f-4739-a31a-bdcfa52bf93b', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4f792301-cf2d-455d-8ad6-8a55cc3146e9', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c69d102fb5504f48809f5fc47f1cb831', 'neutron:revision_number': '4', 'neutron:security_group_ids': '405a5985-622d-4a01-bebe-dd3a8833c5f3', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=8a09146a-9f3c-432d-a7ac-1e34c91ed6bf, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcc797b4850>], logical_port=1687cc87-5c7d-4d91-9386-d985ccc5f55f) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fcc797b4850>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  9 10:00:46 compute-1 ovn_metadata_agent[71054]: 2025-10-09 10:00:46.692 71059 INFO neutron.agent.ovn.metadata.agent [-] Port 1687cc87-5c7d-4d91-9386-d985ccc5f55f in datapath 4f792301-cf2d-455d-8ad6-8a55cc3146e9 unbound from our chassis#033[00m
Oct  9 10:00:46 compute-1 ovn_metadata_agent[71054]: 2025-10-09 10:00:46.693 71059 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 4f792301-cf2d-455d-8ad6-8a55cc3146e9, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct  9 10:00:46 compute-1 ovn_metadata_agent[71054]: 2025-10-09 10:00:46.693 165637 DEBUG oslo.privsep.daemon [-] privsep: reply[83093879-7618-486a-820d-c52da400738e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  9 10:00:46 compute-1 ovn_metadata_agent[71054]: 2025-10-09 10:00:46.694 71059 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-4f792301-cf2d-455d-8ad6-8a55cc3146e9 namespace which is not needed anymore#033[00m
Oct  9 10:00:46 compute-1 nova_compute[162974]: 2025-10-09 10:00:46.703 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:00:46 compute-1 systemd[1]: machine-qemu\x2d4\x2dinstance\x2d00000007.scope: Deactivated successfully.
Oct  9 10:00:46 compute-1 systemd[1]: machine-qemu\x2d4\x2dinstance\x2d00000007.scope: Consumed 12.306s CPU time.
Oct  9 10:00:46 compute-1 systemd-machined[120683]: Machine qemu-4-instance-00000007 terminated.
Oct  9 10:00:46 compute-1 neutron-haproxy-ovnmeta-4f792301-cf2d-455d-8ad6-8a55cc3146e9[169530]: [NOTICE]   (169534) : haproxy version is 2.8.14-c23fe91
Oct  9 10:00:46 compute-1 neutron-haproxy-ovnmeta-4f792301-cf2d-455d-8ad6-8a55cc3146e9[169530]: [NOTICE]   (169534) : path to executable is /usr/sbin/haproxy
Oct  9 10:00:46 compute-1 neutron-haproxy-ovnmeta-4f792301-cf2d-455d-8ad6-8a55cc3146e9[169530]: [ALERT]    (169534) : Current worker (169536) exited with code 143 (Terminated)
Oct  9 10:00:46 compute-1 neutron-haproxy-ovnmeta-4f792301-cf2d-455d-8ad6-8a55cc3146e9[169530]: [WARNING]  (169534) : All workers exited. Exiting... (0)
Oct  9 10:00:46 compute-1 systemd[1]: libpod-1ad859c1f1bc0f2cc7bf7204a846bc9471f98934f862167209006dcb6bde23f5.scope: Deactivated successfully.
Oct  9 10:00:46 compute-1 podman[169964]: 2025-10-09 10:00:46.805963231 +0000 UTC m=+0.039075898 container died 1ad859c1f1bc0f2cc7bf7204a846bc9471f98934f862167209006dcb6bde23f5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-4f792301-cf2d-455d-8ad6-8a55cc3146e9, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Oct  9 10:00:46 compute-1 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-1ad859c1f1bc0f2cc7bf7204a846bc9471f98934f862167209006dcb6bde23f5-userdata-shm.mount: Deactivated successfully.
Oct  9 10:00:46 compute-1 systemd[1]: var-lib-containers-storage-overlay-002f1b4380ec721ede1d5a9d03dece1813f418232c10bc6615e335bd489013e6-merged.mount: Deactivated successfully.
Oct  9 10:00:46 compute-1 podman[169964]: 2025-10-09 10:00:46.833512396 +0000 UTC m=+0.066625062 container cleanup 1ad859c1f1bc0f2cc7bf7204a846bc9471f98934f862167209006dcb6bde23f5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-4f792301-cf2d-455d-8ad6-8a55cc3146e9, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true)
Oct  9 10:00:46 compute-1 kernel: tap1687cc87-5c: entered promiscuous mode
Oct  9 10:00:46 compute-1 systemd-udevd[169949]: Network interface NamePolicy= disabled on kernel command line.
Oct  9 10:00:46 compute-1 NetworkManager[982]: <info>  [1760004046.8501] manager: (tap1687cc87-5c): new Tun device (/org/freedesktop/NetworkManager/Devices/53)
Oct  9 10:00:46 compute-1 kernel: tap1687cc87-5c (unregistering): left promiscuous mode
Oct  9 10:00:46 compute-1 systemd[1]: libpod-conmon-1ad859c1f1bc0f2cc7bf7204a846bc9471f98934f862167209006dcb6bde23f5.scope: Deactivated successfully.
Oct  9 10:00:46 compute-1 nova_compute[162974]: 2025-10-09 10:00:46.864 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:00:46 compute-1 nova_compute[162974]: 2025-10-09 10:00:46.866 2 INFO nova.virt.libvirt.driver [-] [instance: c7e917a6-1f6f-4739-a31a-bdcfa52bf93b] Instance destroyed successfully.#033[00m
Oct  9 10:00:46 compute-1 nova_compute[162974]: 2025-10-09 10:00:46.866 2 DEBUG nova.objects.instance [None req-d39975fa-5983-468a-a04d-ec9b8a2a7811 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Lazy-loading 'resources' on Instance uuid c7e917a6-1f6f-4739-a31a-bdcfa52bf93b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  9 10:00:46 compute-1 nova_compute[162974]: 2025-10-09 10:00:46.880 2 DEBUG nova.virt.libvirt.vif [None req-d39975fa-5983-468a-a04d-ec9b8a2a7811 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-09T09:59:41Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1252166476',display_name='tempest-TestNetworkBasicOps-server-1252166476',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1252166476',id=7,image_ref='9546778e-959c-466e-9bef-81ace5bd1cc5',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAbgn6SPIFM6AGarUubqFoimfuOdsNeRWX5sq4kHFgr7hG7is5Q/Q8Ek3R1Q0esxFqFL7X0+gBaYCim0P8OY9cMbX9okJGNQoFkk0zy9ycrfeQthKDNu+tA50E3TW/m2Ww==',key_name='tempest-TestNetworkBasicOps-1380098384',keypairs=<?>,launch_index=0,launched_at=2025-10-09T09:59:48Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='c69d102fb5504f48809f5fc47f1cb831',ramdisk_id='',reservation_id='r-a0fec6m8',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='9546778e-959c-466e-9bef-81ace5bd1cc5',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-74406332',owner_user_name='tempest-TestNetworkBasicOps-74406332-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-09T09:59:48Z,user_data=None,user_id='2351e05157514d1995a1ea4151d12fee',uuid=c7e917a6-1f6f-4739-a31a-bdcfa52bf93b,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "1687cc87-5c7d-4d91-9386-d985ccc5f55f", "address": "fa:16:3e:18:30:66", "network": {"id": "4f792301-cf2d-455d-8ad6-8a55cc3146e9", "bridge": "br-int", "label": "tempest-network-smoke--166756604", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.22", "type": "fixed", "version": 4, 
"meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c69d102fb5504f48809f5fc47f1cb831", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1687cc87-5c", "ovs_interfaceid": "1687cc87-5c7d-4d91-9386-d985ccc5f55f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  9 10:00:46 compute-1 nova_compute[162974]: 2025-10-09 10:00:46.884 2 DEBUG nova.network.os_vif_util [None req-d39975fa-5983-468a-a04d-ec9b8a2a7811 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Converting VIF {"id": "1687cc87-5c7d-4d91-9386-d985ccc5f55f", "address": "fa:16:3e:18:30:66", "network": {"id": "4f792301-cf2d-455d-8ad6-8a55cc3146e9", "bridge": "br-int", "label": "tempest-network-smoke--166756604", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.22", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c69d102fb5504f48809f5fc47f1cb831", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1687cc87-5c", "ovs_interfaceid": "1687cc87-5c7d-4d91-9386-d985ccc5f55f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  9 10:00:46 compute-1 nova_compute[162974]: 2025-10-09 10:00:46.885 2 DEBUG nova.network.os_vif_util [None req-d39975fa-5983-468a-a04d-ec9b8a2a7811 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:18:30:66,bridge_name='br-int',has_traffic_filtering=True,id=1687cc87-5c7d-4d91-9386-d985ccc5f55f,network=Network(4f792301-cf2d-455d-8ad6-8a55cc3146e9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1687cc87-5c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  9 10:00:46 compute-1 nova_compute[162974]: 2025-10-09 10:00:46.887 2 DEBUG os_vif [None req-d39975fa-5983-468a-a04d-ec9b8a2a7811 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:18:30:66,bridge_name='br-int',has_traffic_filtering=True,id=1687cc87-5c7d-4d91-9386-d985ccc5f55f,network=Network(4f792301-cf2d-455d-8ad6-8a55cc3146e9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1687cc87-5c') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  9 10:00:46 compute-1 nova_compute[162974]: 2025-10-09 10:00:46.890 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:00:46 compute-1 nova_compute[162974]: 2025-10-09 10:00:46.891 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap1687cc87-5c, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  9 10:00:46 compute-1 nova_compute[162974]: 2025-10-09 10:00:46.892 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:00:46 compute-1 podman[169988]: 2025-10-09 10:00:46.893579309 +0000 UTC m=+0.036669002 container remove 1ad859c1f1bc0f2cc7bf7204a846bc9471f98934f862167209006dcb6bde23f5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-4f792301-cf2d-455d-8ad6-8a55cc3146e9, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Oct  9 10:00:46 compute-1 nova_compute[162974]: 2025-10-09 10:00:46.894 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  9 10:00:46 compute-1 nova_compute[162974]: 2025-10-09 10:00:46.896 2 INFO os_vif [None req-d39975fa-5983-468a-a04d-ec9b8a2a7811 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:18:30:66,bridge_name='br-int',has_traffic_filtering=True,id=1687cc87-5c7d-4d91-9386-d985ccc5f55f,network=Network(4f792301-cf2d-455d-8ad6-8a55cc3146e9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1687cc87-5c')#033[00m
Oct  9 10:00:46 compute-1 ovn_metadata_agent[71054]: 2025-10-09 10:00:46.899 165637 DEBUG oslo.privsep.daemon [-] privsep: reply[5efc2e3e-b34c-407b-84c4-164b8d86ce27]: (4, ('Thu Oct  9 10:00:46 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-4f792301-cf2d-455d-8ad6-8a55cc3146e9 (1ad859c1f1bc0f2cc7bf7204a846bc9471f98934f862167209006dcb6bde23f5)\n1ad859c1f1bc0f2cc7bf7204a846bc9471f98934f862167209006dcb6bde23f5\nThu Oct  9 10:00:46 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-4f792301-cf2d-455d-8ad6-8a55cc3146e9 (1ad859c1f1bc0f2cc7bf7204a846bc9471f98934f862167209006dcb6bde23f5)\n1ad859c1f1bc0f2cc7bf7204a846bc9471f98934f862167209006dcb6bde23f5\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  9 10:00:46 compute-1 ovn_metadata_agent[71054]: 2025-10-09 10:00:46.901 165637 DEBUG oslo.privsep.daemon [-] privsep: reply[de2c074e-d004-4748-adbc-e5e1cb7e9a30]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  9 10:00:46 compute-1 ovn_metadata_agent[71054]: 2025-10-09 10:00:46.902 71059 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4f792301-c0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  9 10:00:46 compute-1 kernel: tap4f792301-c0: left promiscuous mode
Oct  9 10:00:46 compute-1 nova_compute[162974]: 2025-10-09 10:00:46.915 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:00:46 compute-1 nova_compute[162974]: 2025-10-09 10:00:46.918 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:00:46 compute-1 ovn_metadata_agent[71054]: 2025-10-09 10:00:46.919 165637 DEBUG oslo.privsep.daemon [-] privsep: reply[9491962c-9b0b-4b90-922f-48961c557f0c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  9 10:00:46 compute-1 ovn_metadata_agent[71054]: 2025-10-09 10:00:46.941 165637 DEBUG oslo.privsep.daemon [-] privsep: reply[985a402c-92f6-46b9-a92a-2ac36c0cbf92]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  9 10:00:46 compute-1 ovn_metadata_agent[71054]: 2025-10-09 10:00:46.941 165637 DEBUG oslo.privsep.daemon [-] privsep: reply[24419224-438b-46c5-b656-caad86b2461b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  9 10:00:46 compute-1 ovn_metadata_agent[71054]: 2025-10-09 10:00:46.956 165637 DEBUG oslo.privsep.daemon [-] privsep: reply[a3ca7a31-20cf-4dd4-97a2-0ba373318752]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 167112, 'reachable_time': 33951, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 170023, 'error': None, 'target': 'ovnmeta-4f792301-cf2d-455d-8ad6-8a55cc3146e9', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  9 10:00:46 compute-1 ovn_metadata_agent[71054]: 2025-10-09 10:00:46.959 71273 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-4f792301-cf2d-455d-8ad6-8a55cc3146e9 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct  9 10:00:46 compute-1 ovn_metadata_agent[71054]: 2025-10-09 10:00:46.959 71273 DEBUG oslo.privsep.daemon [-] privsep: reply[9d36cadf-3c65-43c2-8f04-11dd0d5a36d3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  9 10:00:46 compute-1 systemd[1]: run-netns-ovnmeta\x2d4f792301\x2dcf2d\x2d455d\x2d8ad6\x2d8a55cc3146e9.mount: Deactivated successfully.
Oct  9 10:00:47 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:00:47 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:00:47 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:00:47.020 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:00:47 compute-1 nova_compute[162974]: 2025-10-09 10:00:47.064 2 INFO nova.virt.libvirt.driver [None req-d39975fa-5983-468a-a04d-ec9b8a2a7811 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: c7e917a6-1f6f-4739-a31a-bdcfa52bf93b] Deleting instance files /var/lib/nova/instances/c7e917a6-1f6f-4739-a31a-bdcfa52bf93b_del#033[00m
Oct  9 10:00:47 compute-1 nova_compute[162974]: 2025-10-09 10:00:47.064 2 INFO nova.virt.libvirt.driver [None req-d39975fa-5983-468a-a04d-ec9b8a2a7811 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: c7e917a6-1f6f-4739-a31a-bdcfa52bf93b] Deletion of /var/lib/nova/instances/c7e917a6-1f6f-4739-a31a-bdcfa52bf93b_del complete#033[00m
Oct  9 10:00:47 compute-1 nova_compute[162974]: 2025-10-09 10:00:47.101 2 INFO nova.compute.manager [None req-d39975fa-5983-468a-a04d-ec9b8a2a7811 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: c7e917a6-1f6f-4739-a31a-bdcfa52bf93b] Took 0.47 seconds to destroy the instance on the hypervisor.#033[00m
Oct  9 10:00:47 compute-1 nova_compute[162974]: 2025-10-09 10:00:47.104 2 DEBUG oslo.service.loopingcall [None req-d39975fa-5983-468a-a04d-ec9b8a2a7811 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct  9 10:00:47 compute-1 nova_compute[162974]: 2025-10-09 10:00:47.104 2 DEBUG nova.compute.manager [-] [instance: c7e917a6-1f6f-4739-a31a-bdcfa52bf93b] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct  9 10:00:47 compute-1 nova_compute[162974]: 2025-10-09 10:00:47.104 2 DEBUG nova.network.neutron [-] [instance: c7e917a6-1f6f-4739-a31a-bdcfa52bf93b] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct  9 10:00:47 compute-1 nova_compute[162974]: 2025-10-09 10:00:47.283 2 DEBUG nova.compute.manager [req-9360a9d0-d976-4a12-b053-7df5b7dc5c17 req-f285397e-de21-4eae-bbf5-fbf81d36c756 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: c7e917a6-1f6f-4739-a31a-bdcfa52bf93b] Received event network-vif-unplugged-1687cc87-5c7d-4d91-9386-d985ccc5f55f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  9 10:00:47 compute-1 nova_compute[162974]: 2025-10-09 10:00:47.283 2 DEBUG oslo_concurrency.lockutils [req-9360a9d0-d976-4a12-b053-7df5b7dc5c17 req-f285397e-de21-4eae-bbf5-fbf81d36c756 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] Acquiring lock "c7e917a6-1f6f-4739-a31a-bdcfa52bf93b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  9 10:00:47 compute-1 nova_compute[162974]: 2025-10-09 10:00:47.284 2 DEBUG oslo_concurrency.lockutils [req-9360a9d0-d976-4a12-b053-7df5b7dc5c17 req-f285397e-de21-4eae-bbf5-fbf81d36c756 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] Lock "c7e917a6-1f6f-4739-a31a-bdcfa52bf93b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  9 10:00:47 compute-1 nova_compute[162974]: 2025-10-09 10:00:47.284 2 DEBUG oslo_concurrency.lockutils [req-9360a9d0-d976-4a12-b053-7df5b7dc5c17 req-f285397e-de21-4eae-bbf5-fbf81d36c756 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] Lock "c7e917a6-1f6f-4739-a31a-bdcfa52bf93b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  9 10:00:47 compute-1 nova_compute[162974]: 2025-10-09 10:00:47.284 2 DEBUG nova.compute.manager [req-9360a9d0-d976-4a12-b053-7df5b7dc5c17 req-f285397e-de21-4eae-bbf5-fbf81d36c756 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: c7e917a6-1f6f-4739-a31a-bdcfa52bf93b] No waiting events found dispatching network-vif-unplugged-1687cc87-5c7d-4d91-9386-d985ccc5f55f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  9 10:00:47 compute-1 nova_compute[162974]: 2025-10-09 10:00:47.284 2 DEBUG nova.compute.manager [req-9360a9d0-d976-4a12-b053-7df5b7dc5c17 req-f285397e-de21-4eae-bbf5-fbf81d36c756 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: c7e917a6-1f6f-4739-a31a-bdcfa52bf93b] Received event network-vif-unplugged-1687cc87-5c7d-4d91-9386-d985ccc5f55f for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct  9 10:00:47 compute-1 nova_compute[162974]: 2025-10-09 10:00:47.449 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:00:47 compute-1 ovn_metadata_agent[71054]: 2025-10-09 10:00:47.451 71059 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=8, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '86:53:6e', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '26:2f:47:35:f4:09'}, ipsec=False) old=SB_Global(nb_cfg=7) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  9 10:00:47 compute-1 ovn_metadata_agent[71054]: 2025-10-09 10:00:47.452 71059 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 4 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct  9 10:00:47 compute-1 nova_compute[162974]: 2025-10-09 10:00:47.556 2 DEBUG nova.network.neutron [-] [instance: c7e917a6-1f6f-4739-a31a-bdcfa52bf93b] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  9 10:00:47 compute-1 nova_compute[162974]: 2025-10-09 10:00:47.565 2 INFO nova.compute.manager [-] [instance: c7e917a6-1f6f-4739-a31a-bdcfa52bf93b] Took 0.46 seconds to deallocate network for instance.#033[00m
Oct  9 10:00:47 compute-1 nova_compute[162974]: 2025-10-09 10:00:47.592 2 DEBUG oslo_concurrency.lockutils [None req-d39975fa-5983-468a-a04d-ec9b8a2a7811 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  9 10:00:47 compute-1 nova_compute[162974]: 2025-10-09 10:00:47.593 2 DEBUG oslo_concurrency.lockutils [None req-d39975fa-5983-468a-a04d-ec9b8a2a7811 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  9 10:00:47 compute-1 nova_compute[162974]: 2025-10-09 10:00:47.627 2 DEBUG oslo_concurrency.processutils [None req-d39975fa-5983-468a-a04d-ec9b8a2a7811 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  9 10:00:47 compute-1 nova_compute[162974]: 2025-10-09 10:00:47.755 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:00:47 compute-1 ceph-mon[9795]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct  9 10:00:47 compute-1 ceph-mon[9795]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3430617062' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  9 10:00:47 compute-1 nova_compute[162974]: 2025-10-09 10:00:47.998 2 DEBUG oslo_concurrency.processutils [None req-d39975fa-5983-468a-a04d-ec9b8a2a7811 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.371s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  9 10:00:48 compute-1 nova_compute[162974]: 2025-10-09 10:00:48.002 2 DEBUG nova.compute.provider_tree [None req-d39975fa-5983-468a-a04d-ec9b8a2a7811 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Inventory has not changed in ProviderTree for provider: 79aa81b0-5a5d-4643-a355-ec5461cb321a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  9 10:00:48 compute-1 nova_compute[162974]: 2025-10-09 10:00:48.014 2 DEBUG nova.scheduler.client.report [None req-d39975fa-5983-468a-a04d-ec9b8a2a7811 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Inventory has not changed for provider 79aa81b0-5a5d-4643-a355-ec5461cb321a based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 4, 'reserved': 0, 'min_unit': 1, 'max_unit': 4, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  9 10:00:48 compute-1 nova_compute[162974]: 2025-10-09 10:00:48.029 2 DEBUG oslo_concurrency.lockutils [None req-d39975fa-5983-468a-a04d-ec9b8a2a7811 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.436s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  9 10:00:48 compute-1 nova_compute[162974]: 2025-10-09 10:00:48.046 2 INFO nova.scheduler.client.report [None req-d39975fa-5983-468a-a04d-ec9b8a2a7811 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Deleted allocations for instance c7e917a6-1f6f-4739-a31a-bdcfa52bf93b#033[00m
Oct  9 10:00:48 compute-1 nova_compute[162974]: 2025-10-09 10:00:48.088 2 DEBUG oslo_concurrency.lockutils [None req-d39975fa-5983-468a-a04d-ec9b8a2a7811 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Lock "c7e917a6-1f6f-4739-a31a-bdcfa52bf93b" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 1.460s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  9 10:00:48 compute-1 nova_compute[162974]: 2025-10-09 10:00:48.114 2 DEBUG oslo_service.periodic_task [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  9 10:00:48 compute-1 nova_compute[162974]: 2025-10-09 10:00:48.114 2 DEBUG nova.compute.manager [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Oct  9 10:00:48 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:00:48 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:00:48 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:00:48.202 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:00:49 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:00:49 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:00:49 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:00:49.023 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:00:49 compute-1 nova_compute[162974]: 2025-10-09 10:00:49.362 2 DEBUG nova.compute.manager [req-6f4ec08a-a57e-4550-8549-9826d2643ad2 req-624b7f1d-d86f-4166-8d5f-fe1af0272d64 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: c7e917a6-1f6f-4739-a31a-bdcfa52bf93b] Received event network-vif-plugged-1687cc87-5c7d-4d91-9386-d985ccc5f55f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  9 10:00:49 compute-1 nova_compute[162974]: 2025-10-09 10:00:49.362 2 DEBUG oslo_concurrency.lockutils [req-6f4ec08a-a57e-4550-8549-9826d2643ad2 req-624b7f1d-d86f-4166-8d5f-fe1af0272d64 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] Acquiring lock "c7e917a6-1f6f-4739-a31a-bdcfa52bf93b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  9 10:00:49 compute-1 nova_compute[162974]: 2025-10-09 10:00:49.362 2 DEBUG oslo_concurrency.lockutils [req-6f4ec08a-a57e-4550-8549-9826d2643ad2 req-624b7f1d-d86f-4166-8d5f-fe1af0272d64 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] Lock "c7e917a6-1f6f-4739-a31a-bdcfa52bf93b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  9 10:00:49 compute-1 nova_compute[162974]: 2025-10-09 10:00:49.363 2 DEBUG oslo_concurrency.lockutils [req-6f4ec08a-a57e-4550-8549-9826d2643ad2 req-624b7f1d-d86f-4166-8d5f-fe1af0272d64 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] Lock "c7e917a6-1f6f-4739-a31a-bdcfa52bf93b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  9 10:00:49 compute-1 nova_compute[162974]: 2025-10-09 10:00:49.363 2 DEBUG nova.compute.manager [req-6f4ec08a-a57e-4550-8549-9826d2643ad2 req-624b7f1d-d86f-4166-8d5f-fe1af0272d64 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: c7e917a6-1f6f-4739-a31a-bdcfa52bf93b] No waiting events found dispatching network-vif-plugged-1687cc87-5c7d-4d91-9386-d985ccc5f55f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  9 10:00:49 compute-1 nova_compute[162974]: 2025-10-09 10:00:49.363 2 WARNING nova.compute.manager [req-6f4ec08a-a57e-4550-8549-9826d2643ad2 req-624b7f1d-d86f-4166-8d5f-fe1af0272d64 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: c7e917a6-1f6f-4739-a31a-bdcfa52bf93b] Received unexpected event network-vif-plugged-1687cc87-5c7d-4d91-9386-d985ccc5f55f for instance with vm_state deleted and task_state None.#033[00m
Oct  9 10:00:49 compute-1 nova_compute[162974]: 2025-10-09 10:00:49.363 2 DEBUG nova.compute.manager [req-6f4ec08a-a57e-4550-8549-9826d2643ad2 req-624b7f1d-d86f-4166-8d5f-fe1af0272d64 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: c7e917a6-1f6f-4739-a31a-bdcfa52bf93b] Received event network-vif-deleted-1687cc87-5c7d-4d91-9386-d985ccc5f55f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  9 10:00:50 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:00:50 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:00:50 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:00:50.204 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:00:50 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 10:00:51 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:00:51 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 10:00:51 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:00:51.024 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 10:00:51 compute-1 nova_compute[162974]: 2025-10-09 10:00:51.117 2 DEBUG oslo_service.periodic_task [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  9 10:00:51 compute-1 nova_compute[162974]: 2025-10-09 10:00:51.130 2 DEBUG oslo_service.periodic_task [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  9 10:00:51 compute-1 nova_compute[162974]: 2025-10-09 10:00:51.130 2 DEBUG nova.compute.manager [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Oct  9 10:00:51 compute-1 nova_compute[162974]: 2025-10-09 10:00:51.142 2 DEBUG nova.compute.manager [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Oct  9 10:00:51 compute-1 nova_compute[162974]: 2025-10-09 10:00:51.370 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:00:51 compute-1 ovn_metadata_agent[71054]: 2025-10-09 10:00:51.454 71059 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=1479fb1d-afaa-427a-bdce-40294d3573d2, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '8'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  9 10:00:51 compute-1 nova_compute[162974]: 2025-10-09 10:00:51.892 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:00:52 compute-1 nova_compute[162974]: 2025-10-09 10:00:52.114 2 DEBUG oslo_service.periodic_task [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  9 10:00:52 compute-1 nova_compute[162974]: 2025-10-09 10:00:52.114 2 DEBUG oslo_service.periodic_task [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  9 10:00:52 compute-1 nova_compute[162974]: 2025-10-09 10:00:52.143 2 DEBUG oslo_concurrency.lockutils [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  9 10:00:52 compute-1 nova_compute[162974]: 2025-10-09 10:00:52.143 2 DEBUG oslo_concurrency.lockutils [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  9 10:00:52 compute-1 nova_compute[162974]: 2025-10-09 10:00:52.143 2 DEBUG oslo_concurrency.lockutils [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  9 10:00:52 compute-1 nova_compute[162974]: 2025-10-09 10:00:52.143 2 DEBUG nova.compute.resource_tracker [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  9 10:00:52 compute-1 nova_compute[162974]: 2025-10-09 10:00:52.143 2 DEBUG oslo_concurrency.processutils [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  9 10:00:52 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:00:52 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 10:00:52 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:00:52.206 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 10:00:52 compute-1 ceph-mon[9795]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct  9 10:00:52 compute-1 ceph-mon[9795]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/743125290' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  9 10:00:52 compute-1 nova_compute[162974]: 2025-10-09 10:00:52.492 2 DEBUG oslo_concurrency.processutils [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.348s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  9 10:00:52 compute-1 podman[170093]: 2025-10-09 10:00:52.565306416 +0000 UTC m=+0.075658134 container health_status dd7a5da700b8ba72ceba21d69aecf00384a339e317089fce3f8212a1183599aa (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid)
Oct  9 10:00:52 compute-1 nova_compute[162974]: 2025-10-09 10:00:52.705 2 WARNING nova.virt.libvirt.driver [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  9 10:00:52 compute-1 nova_compute[162974]: 2025-10-09 10:00:52.706 2 DEBUG nova.compute.resource_tracker [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=5058MB free_disk=59.942501068115234GB free_vcpus=4 pci_devices=[{"dev_id": "pci_0000_00_03_3", "address": "0000:00:03.3", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_0", "address": "0000:00:1f.0", "product_id": "2918", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2918", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_4", "address": "0000:00:03.4", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_7", "address": "0000:00:03.7", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_3", "address": "0000:00:02.3", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_7", "address": "0000:00:02.7", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_05_00_0", "address": "0000:05:00.0", "product_id": "1045", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1045", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_2", "address": "0000:00:03.2", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_6", "address": "0000:00:02.6", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_3", "address": "0000:00:1f.3", "product_id": "2930", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2930", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_2", "address": "0000:00:1f.2", "product_id": "2922", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2922", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_03_00_0", "address": "0000:03:00.0", "product_id": "1041", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1041", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_2", "address": "0000:00:02.2", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_5", "address": "0000:00:02.5", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_04_00_0", "address": "0000:04:00.0", "product_id": "1042", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1042", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_5", "address": "0000:00:03.5", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_07_00_0", "address": "0000:07:00.0", "product_id": "1041", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1041", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_1", "address": "0000:00:03.1", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "29c0", "vendor_id": "8086", "numa_node": null, "label": "label_8086_29c0", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_06_00_0", "address": "0000:06:00.0", "product_id": "1044", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1044", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_02_01_0", "address": "0000:02:01.0", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_6", "address": "0000:00:03.6", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_1", "address": "0000:00:02.1", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_4", "address": "0000:00:02.4", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_01_00_0", "address": "0000:01:00.0", "product_id": "000e", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000e", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  9 10:00:52 compute-1 nova_compute[162974]: 2025-10-09 10:00:52.706 2 DEBUG oslo_concurrency.lockutils [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  9 10:00:52 compute-1 nova_compute[162974]: 2025-10-09 10:00:52.706 2 DEBUG oslo_concurrency.lockutils [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  9 10:00:52 compute-1 nova_compute[162974]: 2025-10-09 10:00:52.753 2 DEBUG nova.compute.resource_tracker [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Total usable vcpus: 4, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  9 10:00:52 compute-1 nova_compute[162974]: 2025-10-09 10:00:52.753 2 DEBUG nova.compute.resource_tracker [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=4 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  9 10:00:52 compute-1 nova_compute[162974]: 2025-10-09 10:00:52.757 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:00:52 compute-1 nova_compute[162974]: 2025-10-09 10:00:52.844 2 DEBUG nova.scheduler.client.report [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Refreshing inventories for resource provider 79aa81b0-5a5d-4643-a355-ec5461cb321a _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Oct  9 10:00:52 compute-1 nova_compute[162974]: 2025-10-09 10:00:52.901 2 DEBUG nova.scheduler.client.report [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Updating ProviderTree inventory for provider 79aa81b0-5a5d-4643-a355-ec5461cb321a from _refresh_and_get_inventory using data: {'VCPU': {'total': 4, 'reserved': 0, 'min_unit': 1, 'max_unit': 4, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Oct  9 10:00:52 compute-1 nova_compute[162974]: 2025-10-09 10:00:52.901 2 DEBUG nova.compute.provider_tree [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Updating inventory in ProviderTree for provider 79aa81b0-5a5d-4643-a355-ec5461cb321a with inventory: {'VCPU': {'total': 4, 'reserved': 0, 'min_unit': 1, 'max_unit': 4, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Oct  9 10:00:52 compute-1 nova_compute[162974]: 2025-10-09 10:00:52.917 2 DEBUG nova.scheduler.client.report [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Refreshing aggregate associations for resource provider 79aa81b0-5a5d-4643-a355-ec5461cb321a, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Oct  9 10:00:52 compute-1 nova_compute[162974]: 2025-10-09 10:00:52.935 2 DEBUG nova.scheduler.client.report [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Refreshing trait associations for resource provider 79aa81b0-5a5d-4643-a355-ec5461cb321a, traits: HW_CPU_X86_AESNI,HW_CPU_X86_MMX,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_GRAPHICS_MODEL_NONE,HW_CPU_X86_BMI,HW_CPU_X86_AMD_SVM,COMPUTE_GRAPHICS_MODEL_CIRRUS,HW_CPU_X86_SSE4A,HW_CPU_X86_ABM,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_STORAGE_BUS_SATA,COMPUTE_STORAGE_BUS_FDC,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_DEVICE_TAGGING,HW_CPU_X86_SHA,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_IDE,COMPUTE_VOLUME_ATTACH_WITH_TAG,HW_CPU_X86_SSE,COMPUTE_NET_VIF_MODEL_RTL8139,HW_CPU_X86_BMI2,COMPUTE_VIOMMU_MODEL_INTEL,HW_CPU_X86_SSE2,COMPUTE_STORAGE_BUS_USB,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_F16C,COMPUTE_SECURITY_TPM_1_2,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_TRUSTED_CERTS,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_PCNET,HW_CPU_X86_SSSE3,COMPUTE_NODE,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_GRAPHICS_MODEL_VGA,HW_CPU_X86_SVM,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_SECURITY_TPM_2_0,HW_CPU_X86_AVX,COMPUTE_VOLUME_EXTEND,HW_CPU_X86_SSE42,HW_CPU_X86_AVX2,HW_CPU_X86_FMA3,COMPUTE_RESCUE_BFV,HW_CPU_X86_SSE41,COMPUTE_NET_ATTACH_INTERFACE,HW_CPU_X86_AVX512VPCLMULQDQ,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,HW_CPU_X86_CLMUL,HW_CPU_X86_AVX512VAES,COMPUTE_ACCELERATORS,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_NET_VIF_MODEL_E1000E _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Oct  9 10:00:52 compute-1 nova_compute[162974]: 2025-10-09 10:00:52.952 2 DEBUG oslo_concurrency.processutils [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  9 10:00:53 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:00:53 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:00:53 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:00:53.027 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:00:53 compute-1 ceph-mon[9795]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct  9 10:00:53 compute-1 ceph-mon[9795]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3841926323' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  9 10:00:53 compute-1 nova_compute[162974]: 2025-10-09 10:00:53.301 2 DEBUG oslo_concurrency.processutils [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.348s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  9 10:00:53 compute-1 nova_compute[162974]: 2025-10-09 10:00:53.305 2 DEBUG nova.compute.provider_tree [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Inventory has not changed in ProviderTree for provider: 79aa81b0-5a5d-4643-a355-ec5461cb321a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  9 10:00:53 compute-1 nova_compute[162974]: 2025-10-09 10:00:53.316 2 DEBUG nova.scheduler.client.report [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Inventory has not changed for provider 79aa81b0-5a5d-4643-a355-ec5461cb321a based on inventory data: {'VCPU': {'total': 4, 'reserved': 0, 'min_unit': 1, 'max_unit': 4, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  9 10:00:53 compute-1 nova_compute[162974]: 2025-10-09 10:00:53.330 2 DEBUG nova.compute.resource_tracker [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  9 10:00:53 compute-1 nova_compute[162974]: 2025-10-09 10:00:53.330 2 DEBUG oslo_concurrency.lockutils [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.623s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  9 10:00:53 compute-1 nova_compute[162974]: 2025-10-09 10:00:53.330 2 DEBUG oslo_service.periodic_task [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  9 10:00:54 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:00:54 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:00:54 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:00:54.208 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:00:54 compute-1 nova_compute[162974]: 2025-10-09 10:00:54.338 2 DEBUG oslo_service.periodic_task [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  9 10:00:54 compute-1 nova_compute[162974]: 2025-10-09 10:00:54.339 2 DEBUG oslo_service.periodic_task [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  9 10:00:54 compute-1 nova_compute[162974]: 2025-10-09 10:00:54.339 2 DEBUG oslo_service.periodic_task [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  9 10:00:54 compute-1 nova_compute[162974]: 2025-10-09 10:00:54.340 2 DEBUG nova.compute.manager [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  9 10:00:55 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:00:55 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:00:55 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:00:55.029 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:00:55 compute-1 nova_compute[162974]: 2025-10-09 10:00:55.115 2 DEBUG oslo_service.periodic_task [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  9 10:00:55 compute-1 nova_compute[162974]: 2025-10-09 10:00:55.115 2 DEBUG nova.compute.manager [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  9 10:00:55 compute-1 nova_compute[162974]: 2025-10-09 10:00:55.115 2 DEBUG nova.compute.manager [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct  9 10:00:55 compute-1 nova_compute[162974]: 2025-10-09 10:00:55.132 2 DEBUG nova.compute.manager [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Oct  9 10:00:55 compute-1 nova_compute[162974]: 2025-10-09 10:00:55.132 2 DEBUG oslo_service.periodic_task [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  9 10:00:55 compute-1 nova_compute[162974]: 2025-10-09 10:00:55.132 2 DEBUG oslo_service.periodic_task [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  9 10:00:55 compute-1 nova_compute[162974]: 2025-10-09 10:00:55.132 2 DEBUG oslo_service.periodic_task [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  9 10:00:55 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 10:00:56 compute-1 ovn_metadata_agent[71054]: 2025-10-09 10:00:56.180 71059 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:77:89:5b'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '', 'neutron:device_id': 'ovnmeta-ab21f371-26e2-4c4f-bba0-3c44fb308723', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ab21f371-26e2-4c4f-bba0-3c44fb308723', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c69d102fb5504f48809f5fc47f1cb831', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ed655dd9-bb73-453e-8a8b-a0dd965263b3, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=188102c6-f5ba-4733-92be-2659db7ae55a) old=Port_Binding(mac=['fa:16:3e:77:89:5b 10.100.0.2'], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'ovnmeta-ab21f371-26e2-4c4f-bba0-3c44fb308723', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ab21f371-26e2-4c4f-bba0-3c44fb308723', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c69d102fb5504f48809f5fc47f1cb831', 'neutron:revision_number': '2', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  9 10:00:56 compute-1 ovn_metadata_agent[71054]: 2025-10-09 10:00:56.181 71059 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port 188102c6-f5ba-4733-92be-2659db7ae55a in datapath ab21f371-26e2-4c4f-bba0-3c44fb308723 updated#033[00m
Oct  9 10:00:56 compute-1 ovn_metadata_agent[71054]: 2025-10-09 10:00:56.182 71059 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network ab21f371-26e2-4c4f-bba0-3c44fb308723 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m
Oct  9 10:00:56 compute-1 ovn_metadata_agent[71054]: 2025-10-09 10:00:56.183 165637 DEBUG oslo.privsep.daemon [-] privsep: reply[6621b993-f94c-4cec-842b-6a089c2773a6]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  9 10:00:56 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:00:56 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:00:56 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:00:56.210 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:00:56 compute-1 nova_compute[162974]: 2025-10-09 10:00:56.895 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:00:57 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:00:57 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:00:57 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:00:57.030 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:00:57 compute-1 nova_compute[162974]: 2025-10-09 10:00:57.758 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:00:58 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:00:58 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:00:58 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:00:58.213 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:00:59 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:00:59 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 10:00:59 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:00:59.033 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 10:01:00 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:01:00 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:01:00 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:01:00.216 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:01:00 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 10:01:01 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:01:01 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:01:01 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:01:01.036 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:01:01 compute-1 podman[170153]: 2025-10-09 10:01:01.53733866 +0000 UTC m=+0.044814541 container health_status 5e1150c93314d5b38815fc8130e03b03cc91771cd442ce724d63cd33a7373f75 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2)
Oct  9 10:01:01 compute-1 podman[170154]: 2025-10-09 10:01:01.542345175 +0000 UTC m=+0.048962528 container health_status a9976f6a2312783f72012e1ec9b07f14297aa62297711f47c0d257996be3427a (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, managed_by=edpm_ansible, container_name=multipathd, io.buildah.version=1.41.3)
Oct  9 10:01:01 compute-1 nova_compute[162974]: 2025-10-09 10:01:01.863 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760004046.8596463, c7e917a6-1f6f-4739-a31a-bdcfa52bf93b => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  9 10:01:01 compute-1 nova_compute[162974]: 2025-10-09 10:01:01.864 2 INFO nova.compute.manager [-] [instance: c7e917a6-1f6f-4739-a31a-bdcfa52bf93b] VM Stopped (Lifecycle Event)#033[00m
Oct  9 10:01:01 compute-1 nova_compute[162974]: 2025-10-09 10:01:01.879 2 DEBUG nova.compute.manager [None req-17acc6b4-2175-4760-943c-f00de7baeb42 - - - - - -] [instance: c7e917a6-1f6f-4739-a31a-bdcfa52bf93b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  9 10:01:01 compute-1 nova_compute[162974]: 2025-10-09 10:01:01.897 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:01:02 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:01:02 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:01:02 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:01:02.219 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:01:02 compute-1 nova_compute[162974]: 2025-10-09 10:01:02.759 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:01:03 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:01:03 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:01:03 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:01:03.039 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:01:04 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:01:04 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:01:04 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:01:04.221 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:01:05 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:01:05 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:01:05 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:01:05.041 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:01:05 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 10:01:06 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:01:06 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:01:06 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:01:06.223 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:01:06 compute-1 nova_compute[162974]: 2025-10-09 10:01:06.898 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:01:07 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:01:07 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:01:07 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:01:07.044 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:01:07 compute-1 nova_compute[162974]: 2025-10-09 10:01:07.761 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:01:08 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:01:08 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000009s ======
Oct  9 10:01:08 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:01:08.225 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Oct  9 10:01:08 compute-1 ceph-mon[9795]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #49. Immutable memtables: 0.
Oct  9 10:01:08 compute-1 ceph-mon[9795]: rocksdb: (Original Log Time 2025/10/09-10:01:08.309043) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct  9 10:01:08 compute-1 ceph-mon[9795]: rocksdb: [db/flush_job.cc:856] [default] [JOB 27] Flushing memtable with next log file: 49
Oct  9 10:01:08 compute-1 ceph-mon[9795]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760004068309075, "job": 27, "event": "flush_started", "num_memtables": 1, "num_entries": 721, "num_deletes": 251, "total_data_size": 1416992, "memory_usage": 1444256, "flush_reason": "Manual Compaction"}
Oct  9 10:01:08 compute-1 ceph-mon[9795]: rocksdb: [db/flush_job.cc:885] [default] [JOB 27] Level-0 flush table #50: started
Oct  9 10:01:08 compute-1 ceph-mon[9795]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760004068312470, "cf_name": "default", "job": 27, "event": "table_file_creation", "file_number": 50, "file_size": 932885, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 25222, "largest_seqno": 25938, "table_properties": {"data_size": 929346, "index_size": 1383, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1093, "raw_key_size": 8231, "raw_average_key_size": 19, "raw_value_size": 922210, "raw_average_value_size": 2190, "num_data_blocks": 61, "num_entries": 421, "num_filter_entries": 421, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760004022, "oldest_key_time": 1760004022, "file_creation_time": 1760004068, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "94a5d839-0858-4e7b-94a4-0a54b15338db", "db_session_id": "M9CZJU0HKVV71NP1SGV8", "orig_file_number": 50, "seqno_to_time_mapping": "N/A"}}
Oct  9 10:01:08 compute-1 ceph-mon[9795]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 27] Flush lasted 3446 microseconds, and 2330 cpu microseconds.
Oct  9 10:01:08 compute-1 ceph-mon[9795]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  9 10:01:08 compute-1 ceph-mon[9795]: rocksdb: (Original Log Time 2025/10/09-10:01:08.312494) [db/flush_job.cc:967] [default] [JOB 27] Level-0 flush table #50: 932885 bytes OK
Oct  9 10:01:08 compute-1 ceph-mon[9795]: rocksdb: (Original Log Time 2025/10/09-10:01:08.312505) [db/memtable_list.cc:519] [default] Level-0 commit table #50 started
Oct  9 10:01:08 compute-1 ceph-mon[9795]: rocksdb: (Original Log Time 2025/10/09-10:01:08.312838) [db/memtable_list.cc:722] [default] Level-0 commit table #50: memtable #1 done
Oct  9 10:01:08 compute-1 ceph-mon[9795]: rocksdb: (Original Log Time 2025/10/09-10:01:08.312849) EVENT_LOG_v1 {"time_micros": 1760004068312846, "job": 27, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct  9 10:01:08 compute-1 ceph-mon[9795]: rocksdb: (Original Log Time 2025/10/09-10:01:08.312857) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct  9 10:01:08 compute-1 ceph-mon[9795]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 27] Try to delete WAL files size 1413098, prev total WAL file size 1413098, number of live WAL files 2.
Oct  9 10:01:08 compute-1 ceph-mon[9795]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000046.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  9 10:01:08 compute-1 ceph-mon[9795]: rocksdb: (Original Log Time 2025/10/09-10:01:08.313251) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730031373537' seq:72057594037927935, type:22 .. '7061786F730032303039' seq:0, type:0; will stop at (end)
Oct  9 10:01:08 compute-1 ceph-mon[9795]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 28] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct  9 10:01:08 compute-1 ceph-mon[9795]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 27 Base level 0, inputs: [50(911KB)], [48(12MB)]
Oct  9 10:01:08 compute-1 ceph-mon[9795]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760004068313290, "job": 28, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [50], "files_L6": [48], "score": -1, "input_data_size": 14464337, "oldest_snapshot_seqno": -1}
Oct  9 10:01:08 compute-1 ceph-mon[9795]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 28] Generated table #51: 5350 keys, 12340049 bytes, temperature: kUnknown
Oct  9 10:01:08 compute-1 ceph-mon[9795]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760004068346148, "cf_name": "default", "job": 28, "event": "table_file_creation", "file_number": 51, "file_size": 12340049, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 12304964, "index_size": 20639, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 13381, "raw_key_size": 137400, "raw_average_key_size": 25, "raw_value_size": 12208343, "raw_average_value_size": 2281, "num_data_blocks": 837, "num_entries": 5350, "num_filter_entries": 5350, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760002515, "oldest_key_time": 0, "file_creation_time": 1760004068, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "94a5d839-0858-4e7b-94a4-0a54b15338db", "db_session_id": "M9CZJU0HKVV71NP1SGV8", "orig_file_number": 51, "seqno_to_time_mapping": "N/A"}}
Oct  9 10:01:08 compute-1 ceph-mon[9795]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  9 10:01:08 compute-1 ceph-mon[9795]: rocksdb: (Original Log Time 2025/10/09-10:01:08.346451) [db/compaction/compaction_job.cc:1663] [default] [JOB 28] Compacted 1@0 + 1@6 files to L6 => 12340049 bytes
Oct  9 10:01:08 compute-1 ceph-mon[9795]: rocksdb: (Original Log Time 2025/10/09-10:01:08.347657) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 437.6 rd, 373.4 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.9, 12.9 +0.0 blob) out(11.8 +0.0 blob), read-write-amplify(28.7) write-amplify(13.2) OK, records in: 5866, records dropped: 516 output_compression: NoCompression
Oct  9 10:01:08 compute-1 ceph-mon[9795]: rocksdb: (Original Log Time 2025/10/09-10:01:08.347672) EVENT_LOG_v1 {"time_micros": 1760004068347665, "job": 28, "event": "compaction_finished", "compaction_time_micros": 33052, "compaction_time_cpu_micros": 19656, "output_level": 6, "num_output_files": 1, "total_output_size": 12340049, "num_input_records": 5866, "num_output_records": 5350, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct  9 10:01:08 compute-1 ceph-mon[9795]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000050.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  9 10:01:08 compute-1 ceph-mon[9795]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760004068348281, "job": 28, "event": "table_file_deletion", "file_number": 50}
Oct  9 10:01:08 compute-1 ceph-mon[9795]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000048.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  9 10:01:08 compute-1 ceph-mon[9795]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760004068350230, "job": 28, "event": "table_file_deletion", "file_number": 48}
Oct  9 10:01:08 compute-1 ceph-mon[9795]: rocksdb: (Original Log Time 2025/10/09-10:01:08.313172) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  9 10:01:08 compute-1 ceph-mon[9795]: rocksdb: (Original Log Time 2025/10/09-10:01:08.350365) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  9 10:01:08 compute-1 ceph-mon[9795]: rocksdb: (Original Log Time 2025/10/09-10:01:08.350368) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  9 10:01:08 compute-1 ceph-mon[9795]: rocksdb: (Original Log Time 2025/10/09-10:01:08.350370) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  9 10:01:08 compute-1 ceph-mon[9795]: rocksdb: (Original Log Time 2025/10/09-10:01:08.350372) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  9 10:01:08 compute-1 ceph-mon[9795]: rocksdb: (Original Log Time 2025/10/09-10:01:08.350373) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  9 10:01:09 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:01:09 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:01:09 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:01:09.046 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:01:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 10:01:10.039 71059 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  9 10:01:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 10:01:10.040 71059 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  9 10:01:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 10:01:10.040 71059 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  9 10:01:10 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:01:10 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:01:10 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:01:10.227 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:01:10 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 10:01:11 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:01:11 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:01:11 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:01:11.048 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:01:11 compute-1 ceph-mon[9795]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Oct  9 10:01:11 compute-1 ceph-mon[9795]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2910271570' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  9 10:01:11 compute-1 ceph-mon[9795]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Oct  9 10:01:11 compute-1 ceph-mon[9795]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2910271570' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  9 10:01:11 compute-1 nova_compute[162974]: 2025-10-09 10:01:11.901 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:01:12 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:01:12 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:01:12 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:01:12.229 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:01:12 compute-1 nova_compute[162974]: 2025-10-09 10:01:12.761 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:01:13 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:01:13 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:01:13 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:01:13.050 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:01:13 compute-1 podman[170217]: 2025-10-09 10:01:13.555030617 +0000 UTC m=+0.060470324 container health_status 36bf385830b85e085dc3ff9729118ef141f0f1a0fdc2513f7947ab6ab795421e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_controller, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Oct  9 10:01:14 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:01:14 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:01:14 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:01:14.232 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:01:15 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:01:15 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000009s ======
Oct  9 10:01:15 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:01:15.051 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Oct  9 10:01:15 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 10:01:16 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:01:16 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:01:16 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:01:16.234 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:01:16 compute-1 nova_compute[162974]: 2025-10-09 10:01:16.903 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:01:17 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:01:17 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000009s ======
Oct  9 10:01:17 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:01:17.053 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Oct  9 10:01:17 compute-1 nova_compute[162974]: 2025-10-09 10:01:17.763 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:01:18 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:01:18 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:01:18 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:01:18.236 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:01:19 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:01:19 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:01:19 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:01:19.056 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:01:20 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:01:20 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:01:20 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:01:20.238 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:01:20 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 10:01:21 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:01:21 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000009s ======
Oct  9 10:01:21 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:01:21.057 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Oct  9 10:01:21 compute-1 nova_compute[162974]: 2025-10-09 10:01:21.906 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:01:22 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:01:22 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:01:22 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:01:22.240 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:01:22 compute-1 nova_compute[162974]: 2025-10-09 10:01:22.765 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:01:23 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:01:23 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:01:23 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:01:23.059 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:01:23 compute-1 podman[170245]: 2025-10-09 10:01:23.525208497 +0000 UTC m=+0.037272377 container health_status dd7a5da700b8ba72ceba21d69aecf00384a339e317089fce3f8212a1183599aa (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=iscsid, org.label-schema.build-date=20251001, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  9 10:01:24 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:01:24 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:01:24 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:01:24.242 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:01:25 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:01:25 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:01:25 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:01:25.061 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:01:25 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 10:01:26 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:01:26 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:01:26 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:01:26.243 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:01:26 compute-1 nova_compute[162974]: 2025-10-09 10:01:26.908 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:01:27 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:01:27 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:01:27 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:01:27.064 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:01:27 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  9 10:01:27 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 10:01:27 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 10:01:27 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  9 10:01:27 compute-1 nova_compute[162974]: 2025-10-09 10:01:27.767 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:01:27 compute-1 nova_compute[162974]: 2025-10-09 10:01:27.822 2 DEBUG oslo_concurrency.lockutils [None req-b7ccf91e-3edd-4bfb-8579-db3ccb0dc5a3 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Acquiring lock "0fde2924-0ac7-4ea2-b42d-290df3f52929" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  9 10:01:27 compute-1 nova_compute[162974]: 2025-10-09 10:01:27.823 2 DEBUG oslo_concurrency.lockutils [None req-b7ccf91e-3edd-4bfb-8579-db3ccb0dc5a3 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Lock "0fde2924-0ac7-4ea2-b42d-290df3f52929" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  9 10:01:27 compute-1 nova_compute[162974]: 2025-10-09 10:01:27.833 2 DEBUG nova.compute.manager [None req-b7ccf91e-3edd-4bfb-8579-db3ccb0dc5a3 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: 0fde2924-0ac7-4ea2-b42d-290df3f52929] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct  9 10:01:27 compute-1 nova_compute[162974]: 2025-10-09 10:01:27.880 2 DEBUG oslo_concurrency.lockutils [None req-b7ccf91e-3edd-4bfb-8579-db3ccb0dc5a3 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  9 10:01:27 compute-1 nova_compute[162974]: 2025-10-09 10:01:27.880 2 DEBUG oslo_concurrency.lockutils [None req-b7ccf91e-3edd-4bfb-8579-db3ccb0dc5a3 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  9 10:01:27 compute-1 nova_compute[162974]: 2025-10-09 10:01:27.885 2 DEBUG nova.virt.hardware [None req-b7ccf91e-3edd-4bfb-8579-db3ccb0dc5a3 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct  9 10:01:27 compute-1 nova_compute[162974]: 2025-10-09 10:01:27.885 2 INFO nova.compute.claims [None req-b7ccf91e-3edd-4bfb-8579-db3ccb0dc5a3 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: 0fde2924-0ac7-4ea2-b42d-290df3f52929] Claim successful on node compute-1.ctlplane.example.com#033[00m
Oct  9 10:01:27 compute-1 nova_compute[162974]: 2025-10-09 10:01:27.951 2 DEBUG oslo_concurrency.processutils [None req-b7ccf91e-3edd-4bfb-8579-db3ccb0dc5a3 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  9 10:01:28 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:01:28 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:01:28 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:01:28.245 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:01:28 compute-1 ceph-mon[9795]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct  9 10:01:28 compute-1 ceph-mon[9795]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1357456667' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  9 10:01:28 compute-1 nova_compute[162974]: 2025-10-09 10:01:28.291 2 DEBUG oslo_concurrency.processutils [None req-b7ccf91e-3edd-4bfb-8579-db3ccb0dc5a3 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.340s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  9 10:01:28 compute-1 nova_compute[162974]: 2025-10-09 10:01:28.295 2 DEBUG nova.compute.provider_tree [None req-b7ccf91e-3edd-4bfb-8579-db3ccb0dc5a3 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Inventory has not changed in ProviderTree for provider: 79aa81b0-5a5d-4643-a355-ec5461cb321a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  9 10:01:28 compute-1 nova_compute[162974]: 2025-10-09 10:01:28.306 2 DEBUG nova.scheduler.client.report [None req-b7ccf91e-3edd-4bfb-8579-db3ccb0dc5a3 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Inventory has not changed for provider 79aa81b0-5a5d-4643-a355-ec5461cb321a based on inventory data: {'VCPU': {'total': 4, 'reserved': 0, 'min_unit': 1, 'max_unit': 4, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  9 10:01:28 compute-1 nova_compute[162974]: 2025-10-09 10:01:28.319 2 DEBUG oslo_concurrency.lockutils [None req-b7ccf91e-3edd-4bfb-8579-db3ccb0dc5a3 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.439s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  9 10:01:28 compute-1 nova_compute[162974]: 2025-10-09 10:01:28.320 2 DEBUG nova.compute.manager [None req-b7ccf91e-3edd-4bfb-8579-db3ccb0dc5a3 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: 0fde2924-0ac7-4ea2-b42d-290df3f52929] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct  9 10:01:28 compute-1 nova_compute[162974]: 2025-10-09 10:01:28.352 2 DEBUG nova.compute.manager [None req-b7ccf91e-3edd-4bfb-8579-db3ccb0dc5a3 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: 0fde2924-0ac7-4ea2-b42d-290df3f52929] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct  9 10:01:28 compute-1 nova_compute[162974]: 2025-10-09 10:01:28.352 2 DEBUG nova.network.neutron [None req-b7ccf91e-3edd-4bfb-8579-db3ccb0dc5a3 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: 0fde2924-0ac7-4ea2-b42d-290df3f52929] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct  9 10:01:28 compute-1 nova_compute[162974]: 2025-10-09 10:01:28.363 2 INFO nova.virt.libvirt.driver [None req-b7ccf91e-3edd-4bfb-8579-db3ccb0dc5a3 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: 0fde2924-0ac7-4ea2-b42d-290df3f52929] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct  9 10:01:28 compute-1 nova_compute[162974]: 2025-10-09 10:01:28.373 2 DEBUG nova.compute.manager [None req-b7ccf91e-3edd-4bfb-8579-db3ccb0dc5a3 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: 0fde2924-0ac7-4ea2-b42d-290df3f52929] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct  9 10:01:28 compute-1 nova_compute[162974]: 2025-10-09 10:01:28.441 2 DEBUG nova.compute.manager [None req-b7ccf91e-3edd-4bfb-8579-db3ccb0dc5a3 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: 0fde2924-0ac7-4ea2-b42d-290df3f52929] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct  9 10:01:28 compute-1 nova_compute[162974]: 2025-10-09 10:01:28.442 2 DEBUG nova.virt.libvirt.driver [None req-b7ccf91e-3edd-4bfb-8579-db3ccb0dc5a3 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: 0fde2924-0ac7-4ea2-b42d-290df3f52929] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct  9 10:01:28 compute-1 nova_compute[162974]: 2025-10-09 10:01:28.442 2 INFO nova.virt.libvirt.driver [None req-b7ccf91e-3edd-4bfb-8579-db3ccb0dc5a3 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: 0fde2924-0ac7-4ea2-b42d-290df3f52929] Creating image(s)#033[00m
Oct  9 10:01:28 compute-1 nova_compute[162974]: 2025-10-09 10:01:28.461 2 DEBUG nova.storage.rbd_utils [None req-b7ccf91e-3edd-4bfb-8579-db3ccb0dc5a3 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] rbd image 0fde2924-0ac7-4ea2-b42d-290df3f52929_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  9 10:01:28 compute-1 nova_compute[162974]: 2025-10-09 10:01:28.478 2 DEBUG nova.storage.rbd_utils [None req-b7ccf91e-3edd-4bfb-8579-db3ccb0dc5a3 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] rbd image 0fde2924-0ac7-4ea2-b42d-290df3f52929_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  9 10:01:28 compute-1 nova_compute[162974]: 2025-10-09 10:01:28.496 2 DEBUG nova.storage.rbd_utils [None req-b7ccf91e-3edd-4bfb-8579-db3ccb0dc5a3 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] rbd image 0fde2924-0ac7-4ea2-b42d-290df3f52929_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  9 10:01:28 compute-1 nova_compute[162974]: 2025-10-09 10:01:28.499 2 DEBUG oslo_concurrency.processutils [None req-b7ccf91e-3edd-4bfb-8579-db3ccb0dc5a3 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/5c8d02c7691a8289e33d8b283b22550ff081dadb --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  9 10:01:28 compute-1 nova_compute[162974]: 2025-10-09 10:01:28.547 2 DEBUG oslo_concurrency.processutils [None req-b7ccf91e-3edd-4bfb-8579-db3ccb0dc5a3 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/5c8d02c7691a8289e33d8b283b22550ff081dadb --force-share --output=json" returned: 0 in 0.048s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  9 10:01:28 compute-1 nova_compute[162974]: 2025-10-09 10:01:28.547 2 DEBUG oslo_concurrency.lockutils [None req-b7ccf91e-3edd-4bfb-8579-db3ccb0dc5a3 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Acquiring lock "5c8d02c7691a8289e33d8b283b22550ff081dadb" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  9 10:01:28 compute-1 nova_compute[162974]: 2025-10-09 10:01:28.548 2 DEBUG oslo_concurrency.lockutils [None req-b7ccf91e-3edd-4bfb-8579-db3ccb0dc5a3 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Lock "5c8d02c7691a8289e33d8b283b22550ff081dadb" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  9 10:01:28 compute-1 nova_compute[162974]: 2025-10-09 10:01:28.548 2 DEBUG oslo_concurrency.lockutils [None req-b7ccf91e-3edd-4bfb-8579-db3ccb0dc5a3 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Lock "5c8d02c7691a8289e33d8b283b22550ff081dadb" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  9 10:01:28 compute-1 nova_compute[162974]: 2025-10-09 10:01:28.566 2 DEBUG nova.storage.rbd_utils [None req-b7ccf91e-3edd-4bfb-8579-db3ccb0dc5a3 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] rbd image 0fde2924-0ac7-4ea2-b42d-290df3f52929_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  9 10:01:28 compute-1 nova_compute[162974]: 2025-10-09 10:01:28.568 2 DEBUG oslo_concurrency.processutils [None req-b7ccf91e-3edd-4bfb-8579-db3ccb0dc5a3 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/5c8d02c7691a8289e33d8b283b22550ff081dadb 0fde2924-0ac7-4ea2-b42d-290df3f52929_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  9 10:01:28 compute-1 nova_compute[162974]: 2025-10-09 10:01:28.705 2 DEBUG oslo_concurrency.processutils [None req-b7ccf91e-3edd-4bfb-8579-db3ccb0dc5a3 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/5c8d02c7691a8289e33d8b283b22550ff081dadb 0fde2924-0ac7-4ea2-b42d-290df3f52929_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.137s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  9 10:01:28 compute-1 nova_compute[162974]: 2025-10-09 10:01:28.746 2 DEBUG oslo_service.periodic_task [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  9 10:01:28 compute-1 nova_compute[162974]: 2025-10-09 10:01:28.753 2 DEBUG nova.storage.rbd_utils [None req-b7ccf91e-3edd-4bfb-8579-db3ccb0dc5a3 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] resizing rbd image 0fde2924-0ac7-4ea2-b42d-290df3f52929_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct  9 10:01:28 compute-1 nova_compute[162974]: 2025-10-09 10:01:28.776 2 WARNING nova.compute.manager [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] While synchronizing instance power states, found 1 instances in the database and 0 instances on the hypervisor.#033[00m
Oct  9 10:01:28 compute-1 nova_compute[162974]: 2025-10-09 10:01:28.776 2 DEBUG nova.compute.manager [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Triggering sync for uuid 0fde2924-0ac7-4ea2-b42d-290df3f52929 _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268#033[00m
Oct  9 10:01:28 compute-1 nova_compute[162974]: 2025-10-09 10:01:28.776 2 DEBUG oslo_concurrency.lockutils [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Acquiring lock "0fde2924-0ac7-4ea2-b42d-290df3f52929" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  9 10:01:28 compute-1 nova_compute[162974]: 2025-10-09 10:01:28.810 2 DEBUG nova.objects.instance [None req-b7ccf91e-3edd-4bfb-8579-db3ccb0dc5a3 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Lazy-loading 'migration_context' on Instance uuid 0fde2924-0ac7-4ea2-b42d-290df3f52929 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  9 10:01:28 compute-1 nova_compute[162974]: 2025-10-09 10:01:28.819 2 DEBUG nova.virt.libvirt.driver [None req-b7ccf91e-3edd-4bfb-8579-db3ccb0dc5a3 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: 0fde2924-0ac7-4ea2-b42d-290df3f52929] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct  9 10:01:28 compute-1 nova_compute[162974]: 2025-10-09 10:01:28.819 2 DEBUG nova.virt.libvirt.driver [None req-b7ccf91e-3edd-4bfb-8579-db3ccb0dc5a3 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: 0fde2924-0ac7-4ea2-b42d-290df3f52929] Ensure instance console log exists: /var/lib/nova/instances/0fde2924-0ac7-4ea2-b42d-290df3f52929/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct  9 10:01:28 compute-1 nova_compute[162974]: 2025-10-09 10:01:28.820 2 DEBUG oslo_concurrency.lockutils [None req-b7ccf91e-3edd-4bfb-8579-db3ccb0dc5a3 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  9 10:01:28 compute-1 nova_compute[162974]: 2025-10-09 10:01:28.820 2 DEBUG oslo_concurrency.lockutils [None req-b7ccf91e-3edd-4bfb-8579-db3ccb0dc5a3 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  9 10:01:28 compute-1 nova_compute[162974]: 2025-10-09 10:01:28.820 2 DEBUG oslo_concurrency.lockutils [None req-b7ccf91e-3edd-4bfb-8579-db3ccb0dc5a3 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  9 10:01:29 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:01:29 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:01:29 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:01:29.067 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:01:29 compute-1 nova_compute[162974]: 2025-10-09 10:01:29.228 2 DEBUG nova.policy [None req-b7ccf91e-3edd-4bfb-8579-db3ccb0dc5a3 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '2351e05157514d1995a1ea4151d12fee', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'c69d102fb5504f48809f5fc47f1cb831', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct  9 10:01:29 compute-1 ovn_controller[62080]: 2025-10-09T10:01:29Z|00075|memory_trim|INFO|Detected inactivity (last active 30000 ms ago): trimming memory
Oct  9 10:01:29 compute-1 nova_compute[162974]: 2025-10-09 10:01:29.822 2 DEBUG nova.network.neutron [None req-b7ccf91e-3edd-4bfb-8579-db3ccb0dc5a3 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: 0fde2924-0ac7-4ea2-b42d-290df3f52929] Successfully updated port: 24c642bf-d3e7-4003-97f5-0e43aca6db7b _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct  9 10:01:29 compute-1 nova_compute[162974]: 2025-10-09 10:01:29.833 2 DEBUG oslo_concurrency.lockutils [None req-b7ccf91e-3edd-4bfb-8579-db3ccb0dc5a3 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Acquiring lock "refresh_cache-0fde2924-0ac7-4ea2-b42d-290df3f52929" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  9 10:01:29 compute-1 nova_compute[162974]: 2025-10-09 10:01:29.833 2 DEBUG oslo_concurrency.lockutils [None req-b7ccf91e-3edd-4bfb-8579-db3ccb0dc5a3 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Acquired lock "refresh_cache-0fde2924-0ac7-4ea2-b42d-290df3f52929" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  9 10:01:29 compute-1 nova_compute[162974]: 2025-10-09 10:01:29.833 2 DEBUG nova.network.neutron [None req-b7ccf91e-3edd-4bfb-8579-db3ccb0dc5a3 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: 0fde2924-0ac7-4ea2-b42d-290df3f52929] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  9 10:01:29 compute-1 nova_compute[162974]: 2025-10-09 10:01:29.883 2 DEBUG nova.compute.manager [req-26ceb6e2-af8e-4d09-92a0-12b18d238790 req-c3b7885c-1ea3-4280-9028-3f9284e67486 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: 0fde2924-0ac7-4ea2-b42d-290df3f52929] Received event network-changed-24c642bf-d3e7-4003-97f5-0e43aca6db7b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  9 10:01:29 compute-1 nova_compute[162974]: 2025-10-09 10:01:29.883 2 DEBUG nova.compute.manager [req-26ceb6e2-af8e-4d09-92a0-12b18d238790 req-c3b7885c-1ea3-4280-9028-3f9284e67486 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: 0fde2924-0ac7-4ea2-b42d-290df3f52929] Refreshing instance network info cache due to event network-changed-24c642bf-d3e7-4003-97f5-0e43aca6db7b. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  9 10:01:29 compute-1 nova_compute[162974]: 2025-10-09 10:01:29.884 2 DEBUG oslo_concurrency.lockutils [req-26ceb6e2-af8e-4d09-92a0-12b18d238790 req-c3b7885c-1ea3-4280-9028-3f9284e67486 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] Acquiring lock "refresh_cache-0fde2924-0ac7-4ea2-b42d-290df3f52929" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  9 10:01:29 compute-1 nova_compute[162974]: 2025-10-09 10:01:29.949 2 DEBUG nova.network.neutron [None req-b7ccf91e-3edd-4bfb-8579-db3ccb0dc5a3 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: 0fde2924-0ac7-4ea2-b42d-290df3f52929] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct  9 10:01:30 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:01:30 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:01:30 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:01:30.248 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:01:30 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 10:01:30 compute-1 nova_compute[162974]: 2025-10-09 10:01:30.924 2 DEBUG nova.network.neutron [None req-b7ccf91e-3edd-4bfb-8579-db3ccb0dc5a3 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: 0fde2924-0ac7-4ea2-b42d-290df3f52929] Updating instance_info_cache with network_info: [{"id": "24c642bf-d3e7-4003-97f5-0e43aca6db7b", "address": "fa:16:3e:d9:5b:8d", "network": {"id": "f1bd1d23-0de7-4b9c-b34f-27d8df0f3147", "bridge": "br-int", "label": "tempest-network-smoke--147591991", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.223", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c69d102fb5504f48809f5fc47f1cb831", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap24c642bf-d3", "ovs_interfaceid": "24c642bf-d3e7-4003-97f5-0e43aca6db7b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  9 10:01:30 compute-1 nova_compute[162974]: 2025-10-09 10:01:30.942 2 DEBUG oslo_concurrency.lockutils [None req-b7ccf91e-3edd-4bfb-8579-db3ccb0dc5a3 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Releasing lock "refresh_cache-0fde2924-0ac7-4ea2-b42d-290df3f52929" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  9 10:01:30 compute-1 nova_compute[162974]: 2025-10-09 10:01:30.942 2 DEBUG nova.compute.manager [None req-b7ccf91e-3edd-4bfb-8579-db3ccb0dc5a3 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: 0fde2924-0ac7-4ea2-b42d-290df3f52929] Instance network_info: |[{"id": "24c642bf-d3e7-4003-97f5-0e43aca6db7b", "address": "fa:16:3e:d9:5b:8d", "network": {"id": "f1bd1d23-0de7-4b9c-b34f-27d8df0f3147", "bridge": "br-int", "label": "tempest-network-smoke--147591991", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.223", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c69d102fb5504f48809f5fc47f1cb831", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap24c642bf-d3", "ovs_interfaceid": "24c642bf-d3e7-4003-97f5-0e43aca6db7b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct  9 10:01:30 compute-1 nova_compute[162974]: 2025-10-09 10:01:30.942 2 DEBUG oslo_concurrency.lockutils [req-26ceb6e2-af8e-4d09-92a0-12b18d238790 req-c3b7885c-1ea3-4280-9028-3f9284e67486 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] Acquired lock "refresh_cache-0fde2924-0ac7-4ea2-b42d-290df3f52929" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  9 10:01:30 compute-1 nova_compute[162974]: 2025-10-09 10:01:30.943 2 DEBUG nova.network.neutron [req-26ceb6e2-af8e-4d09-92a0-12b18d238790 req-c3b7885c-1ea3-4280-9028-3f9284e67486 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: 0fde2924-0ac7-4ea2-b42d-290df3f52929] Refreshing network info cache for port 24c642bf-d3e7-4003-97f5-0e43aca6db7b _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  9 10:01:30 compute-1 nova_compute[162974]: 2025-10-09 10:01:30.945 2 DEBUG nova.virt.libvirt.driver [None req-b7ccf91e-3edd-4bfb-8579-db3ccb0dc5a3 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: 0fde2924-0ac7-4ea2-b42d-290df3f52929] Start _get_guest_xml network_info=[{"id": "24c642bf-d3e7-4003-97f5-0e43aca6db7b", "address": "fa:16:3e:d9:5b:8d", "network": {"id": "f1bd1d23-0de7-4b9c-b34f-27d8df0f3147", "bridge": "br-int", "label": "tempest-network-smoke--147591991", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.223", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c69d102fb5504f48809f5fc47f1cb831", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap24c642bf-d3", "ovs_interfaceid": "24c642bf-d3e7-4003-97f5-0e43aca6db7b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-09T09:54:31Z,direct_url=<?>,disk_format='qcow2',id=9546778e-959c-466e-9bef-81ace5bd1cc5,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='a53d5690b6a54109990182326650a2b8',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-09T09:54:34Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_options': None, 'size': 0, 'device_type': 'disk', 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'guest_format': None, 'boot_index': 0, 'device_name': '/dev/vda', 'encrypted': False, 'encryption_format': None, 'image_id': '9546778e-959c-466e-9bef-81ace5bd1cc5'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct  9 10:01:30 compute-1 nova_compute[162974]: 2025-10-09 10:01:30.949 2 WARNING nova.virt.libvirt.driver [None req-b7ccf91e-3edd-4bfb-8579-db3ccb0dc5a3 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  9 10:01:30 compute-1 nova_compute[162974]: 2025-10-09 10:01:30.952 2 DEBUG nova.virt.libvirt.host [None req-b7ccf91e-3edd-4bfb-8579-db3ccb0dc5a3 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct  9 10:01:30 compute-1 nova_compute[162974]: 2025-10-09 10:01:30.953 2 DEBUG nova.virt.libvirt.host [None req-b7ccf91e-3edd-4bfb-8579-db3ccb0dc5a3 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct  9 10:01:30 compute-1 nova_compute[162974]: 2025-10-09 10:01:30.957 2 DEBUG nova.virt.libvirt.host [None req-b7ccf91e-3edd-4bfb-8579-db3ccb0dc5a3 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct  9 10:01:30 compute-1 nova_compute[162974]: 2025-10-09 10:01:30.957 2 DEBUG nova.virt.libvirt.host [None req-b7ccf91e-3edd-4bfb-8579-db3ccb0dc5a3 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct  9 10:01:30 compute-1 nova_compute[162974]: 2025-10-09 10:01:30.958 2 DEBUG nova.virt.libvirt.driver [None req-b7ccf91e-3edd-4bfb-8579-db3ccb0dc5a3 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct  9 10:01:30 compute-1 nova_compute[162974]: 2025-10-09 10:01:30.958 2 DEBUG nova.virt.hardware [None req-b7ccf91e-3edd-4bfb-8579-db3ccb0dc5a3 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-09T09:54:30Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='6c4b2ce4-c9d2-467c-bac4-dc6a1184a891',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-09T09:54:31Z,direct_url=<?>,disk_format='qcow2',id=9546778e-959c-466e-9bef-81ace5bd1cc5,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='a53d5690b6a54109990182326650a2b8',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-09T09:54:34Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct  9 10:01:30 compute-1 nova_compute[162974]: 2025-10-09 10:01:30.958 2 DEBUG nova.virt.hardware [None req-b7ccf91e-3edd-4bfb-8579-db3ccb0dc5a3 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct  9 10:01:30 compute-1 nova_compute[162974]: 2025-10-09 10:01:30.959 2 DEBUG nova.virt.hardware [None req-b7ccf91e-3edd-4bfb-8579-db3ccb0dc5a3 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct  9 10:01:30 compute-1 nova_compute[162974]: 2025-10-09 10:01:30.959 2 DEBUG nova.virt.hardware [None req-b7ccf91e-3edd-4bfb-8579-db3ccb0dc5a3 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct  9 10:01:30 compute-1 nova_compute[162974]: 2025-10-09 10:01:30.959 2 DEBUG nova.virt.hardware [None req-b7ccf91e-3edd-4bfb-8579-db3ccb0dc5a3 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct  9 10:01:30 compute-1 nova_compute[162974]: 2025-10-09 10:01:30.959 2 DEBUG nova.virt.hardware [None req-b7ccf91e-3edd-4bfb-8579-db3ccb0dc5a3 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct  9 10:01:30 compute-1 nova_compute[162974]: 2025-10-09 10:01:30.959 2 DEBUG nova.virt.hardware [None req-b7ccf91e-3edd-4bfb-8579-db3ccb0dc5a3 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct  9 10:01:30 compute-1 nova_compute[162974]: 2025-10-09 10:01:30.960 2 DEBUG nova.virt.hardware [None req-b7ccf91e-3edd-4bfb-8579-db3ccb0dc5a3 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct  9 10:01:30 compute-1 nova_compute[162974]: 2025-10-09 10:01:30.960 2 DEBUG nova.virt.hardware [None req-b7ccf91e-3edd-4bfb-8579-db3ccb0dc5a3 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct  9 10:01:30 compute-1 nova_compute[162974]: 2025-10-09 10:01:30.960 2 DEBUG nova.virt.hardware [None req-b7ccf91e-3edd-4bfb-8579-db3ccb0dc5a3 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct  9 10:01:30 compute-1 nova_compute[162974]: 2025-10-09 10:01:30.960 2 DEBUG nova.virt.hardware [None req-b7ccf91e-3edd-4bfb-8579-db3ccb0dc5a3 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct  9 10:01:30 compute-1 nova_compute[162974]: 2025-10-09 10:01:30.962 2 DEBUG oslo_concurrency.processutils [None req-b7ccf91e-3edd-4bfb-8579-db3ccb0dc5a3 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  9 10:01:31 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:01:31 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.003000028s ======
Oct  9 10:01:31 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:01:31.069 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.003000028s
Oct  9 10:01:31 compute-1 ceph-mon[9795]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Oct  9 10:01:31 compute-1 ceph-mon[9795]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1905495853' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  9 10:01:31 compute-1 nova_compute[162974]: 2025-10-09 10:01:31.321 2 DEBUG oslo_concurrency.processutils [None req-b7ccf91e-3edd-4bfb-8579-db3ccb0dc5a3 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.359s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  9 10:01:31 compute-1 nova_compute[162974]: 2025-10-09 10:01:31.345 2 DEBUG nova.storage.rbd_utils [None req-b7ccf91e-3edd-4bfb-8579-db3ccb0dc5a3 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] rbd image 0fde2924-0ac7-4ea2-b42d-290df3f52929_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  9 10:01:31 compute-1 nova_compute[162974]: 2025-10-09 10:01:31.349 2 DEBUG oslo_concurrency.processutils [None req-b7ccf91e-3edd-4bfb-8579-db3ccb0dc5a3 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  9 10:01:31 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 10:01:31 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 10:01:31 compute-1 nova_compute[162974]: 2025-10-09 10:01:31.563 2 DEBUG nova.network.neutron [req-26ceb6e2-af8e-4d09-92a0-12b18d238790 req-c3b7885c-1ea3-4280-9028-3f9284e67486 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: 0fde2924-0ac7-4ea2-b42d-290df3f52929] Updated VIF entry in instance network info cache for port 24c642bf-d3e7-4003-97f5-0e43aca6db7b. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  9 10:01:31 compute-1 nova_compute[162974]: 2025-10-09 10:01:31.564 2 DEBUG nova.network.neutron [req-26ceb6e2-af8e-4d09-92a0-12b18d238790 req-c3b7885c-1ea3-4280-9028-3f9284e67486 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: 0fde2924-0ac7-4ea2-b42d-290df3f52929] Updating instance_info_cache with network_info: [{"id": "24c642bf-d3e7-4003-97f5-0e43aca6db7b", "address": "fa:16:3e:d9:5b:8d", "network": {"id": "f1bd1d23-0de7-4b9c-b34f-27d8df0f3147", "bridge": "br-int", "label": "tempest-network-smoke--147591991", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.223", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c69d102fb5504f48809f5fc47f1cb831", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap24c642bf-d3", "ovs_interfaceid": "24c642bf-d3e7-4003-97f5-0e43aca6db7b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  9 10:01:31 compute-1 nova_compute[162974]: 2025-10-09 10:01:31.575 2 DEBUG oslo_concurrency.lockutils [req-26ceb6e2-af8e-4d09-92a0-12b18d238790 req-c3b7885c-1ea3-4280-9028-3f9284e67486 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] Releasing lock "refresh_cache-0fde2924-0ac7-4ea2-b42d-290df3f52929" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  9 10:01:31 compute-1 ceph-mon[9795]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Oct  9 10:01:31 compute-1 ceph-mon[9795]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/627844330' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  9 10:01:31 compute-1 nova_compute[162974]: 2025-10-09 10:01:31.718 2 DEBUG oslo_concurrency.processutils [None req-b7ccf91e-3edd-4bfb-8579-db3ccb0dc5a3 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.369s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  9 10:01:31 compute-1 nova_compute[162974]: 2025-10-09 10:01:31.719 2 DEBUG nova.virt.libvirt.vif [None req-b7ccf91e-3edd-4bfb-8579-db3ccb0dc5a3 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-09T10:01:27Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-683406071',display_name='tempest-TestNetworkBasicOps-server-683406071',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-683406071',id=9,image_ref='9546778e-959c-466e-9bef-81ace5bd1cc5',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBPNC5+71zwS4peThbBj0rTs2iUGxV6KoykdELOAeuqqTcHI7GCX2cJpli9Fly77fC2uQduSSC/CbKmPPAuRDVwt9Ei0C4MDfiTMQHdYKRTolBvlRviK/zoaSsqEMl47FRQ==',key_name='tempest-TestNetworkBasicOps-614246910',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='c69d102fb5504f48809f5fc47f1cb831',ramdisk_id='',reservation_id='r-8z022t5g',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='9546778e-959c-466e-9bef-81ace5bd1cc5',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-74406332',owner_user_name='tempest-TestNetworkBasicOps-74406332-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-09T10:01:28Z,user_data=None,user_id='2351e05157514d1995a1ea4151d12fee',uuid=0fde2924-0ac7-4ea2-b42d-290df3f52929,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "24c642bf-d3e7-4003-97f5-0e43aca6db7b", "address": "fa:16:3e:d9:5b:8d", "network": {"id": "f1bd1d23-0de7-4b9c-b34f-27d8df0f3147", "bridge": "br-int", "label": "tempest-network-smoke--147591991", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.223", "type": "floating", "version": 4, "meta": {}}]}], "routes": 
[], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c69d102fb5504f48809f5fc47f1cb831", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap24c642bf-d3", "ovs_interfaceid": "24c642bf-d3e7-4003-97f5-0e43aca6db7b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  9 10:01:31 compute-1 nova_compute[162974]: 2025-10-09 10:01:31.719 2 DEBUG nova.network.os_vif_util [None req-b7ccf91e-3edd-4bfb-8579-db3ccb0dc5a3 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Converting VIF {"id": "24c642bf-d3e7-4003-97f5-0e43aca6db7b", "address": "fa:16:3e:d9:5b:8d", "network": {"id": "f1bd1d23-0de7-4b9c-b34f-27d8df0f3147", "bridge": "br-int", "label": "tempest-network-smoke--147591991", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.223", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c69d102fb5504f48809f5fc47f1cb831", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap24c642bf-d3", "ovs_interfaceid": "24c642bf-d3e7-4003-97f5-0e43aca6db7b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  9 10:01:31 compute-1 nova_compute[162974]: 2025-10-09 10:01:31.720 2 DEBUG nova.network.os_vif_util [None req-b7ccf91e-3edd-4bfb-8579-db3ccb0dc5a3 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d9:5b:8d,bridge_name='br-int',has_traffic_filtering=True,id=24c642bf-d3e7-4003-97f5-0e43aca6db7b,network=Network(f1bd1d23-0de7-4b9c-b34f-27d8df0f3147),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap24c642bf-d3') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  9 10:01:31 compute-1 nova_compute[162974]: 2025-10-09 10:01:31.721 2 DEBUG nova.objects.instance [None req-b7ccf91e-3edd-4bfb-8579-db3ccb0dc5a3 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Lazy-loading 'pci_devices' on Instance uuid 0fde2924-0ac7-4ea2-b42d-290df3f52929 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  9 10:01:31 compute-1 nova_compute[162974]: 2025-10-09 10:01:31.731 2 DEBUG nova.virt.libvirt.driver [None req-b7ccf91e-3edd-4bfb-8579-db3ccb0dc5a3 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: 0fde2924-0ac7-4ea2-b42d-290df3f52929] End _get_guest_xml xml=<domain type="kvm">
Oct  9 10:01:31 compute-1 nova_compute[162974]:  <uuid>0fde2924-0ac7-4ea2-b42d-290df3f52929</uuid>
Oct  9 10:01:31 compute-1 nova_compute[162974]:  <name>instance-00000009</name>
Oct  9 10:01:31 compute-1 nova_compute[162974]:  <memory>131072</memory>
Oct  9 10:01:31 compute-1 nova_compute[162974]:  <vcpu>1</vcpu>
Oct  9 10:01:31 compute-1 nova_compute[162974]:  <metadata>
Oct  9 10:01:31 compute-1 nova_compute[162974]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  9 10:01:31 compute-1 nova_compute[162974]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  9 10:01:31 compute-1 nova_compute[162974]:      <nova:name>tempest-TestNetworkBasicOps-server-683406071</nova:name>
Oct  9 10:01:31 compute-1 nova_compute[162974]:      <nova:creationTime>2025-10-09 10:01:30</nova:creationTime>
Oct  9 10:01:31 compute-1 nova_compute[162974]:      <nova:flavor name="m1.nano">
Oct  9 10:01:31 compute-1 nova_compute[162974]:        <nova:memory>128</nova:memory>
Oct  9 10:01:31 compute-1 nova_compute[162974]:        <nova:disk>1</nova:disk>
Oct  9 10:01:31 compute-1 nova_compute[162974]:        <nova:swap>0</nova:swap>
Oct  9 10:01:31 compute-1 nova_compute[162974]:        <nova:ephemeral>0</nova:ephemeral>
Oct  9 10:01:31 compute-1 nova_compute[162974]:        <nova:vcpus>1</nova:vcpus>
Oct  9 10:01:31 compute-1 nova_compute[162974]:      </nova:flavor>
Oct  9 10:01:31 compute-1 nova_compute[162974]:      <nova:owner>
Oct  9 10:01:31 compute-1 nova_compute[162974]:        <nova:user uuid="2351e05157514d1995a1ea4151d12fee">tempest-TestNetworkBasicOps-74406332-project-member</nova:user>
Oct  9 10:01:31 compute-1 nova_compute[162974]:        <nova:project uuid="c69d102fb5504f48809f5fc47f1cb831">tempest-TestNetworkBasicOps-74406332</nova:project>
Oct  9 10:01:31 compute-1 nova_compute[162974]:      </nova:owner>
Oct  9 10:01:31 compute-1 nova_compute[162974]:      <nova:root type="image" uuid="9546778e-959c-466e-9bef-81ace5bd1cc5"/>
Oct  9 10:01:31 compute-1 nova_compute[162974]:      <nova:ports>
Oct  9 10:01:31 compute-1 nova_compute[162974]:        <nova:port uuid="24c642bf-d3e7-4003-97f5-0e43aca6db7b">
Oct  9 10:01:31 compute-1 nova_compute[162974]:          <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Oct  9 10:01:31 compute-1 nova_compute[162974]:        </nova:port>
Oct  9 10:01:31 compute-1 nova_compute[162974]:      </nova:ports>
Oct  9 10:01:31 compute-1 nova_compute[162974]:    </nova:instance>
Oct  9 10:01:31 compute-1 nova_compute[162974]:  </metadata>
Oct  9 10:01:31 compute-1 nova_compute[162974]:  <sysinfo type="smbios">
Oct  9 10:01:31 compute-1 nova_compute[162974]:    <system>
Oct  9 10:01:31 compute-1 nova_compute[162974]:      <entry name="manufacturer">RDO</entry>
Oct  9 10:01:31 compute-1 nova_compute[162974]:      <entry name="product">OpenStack Compute</entry>
Oct  9 10:01:31 compute-1 nova_compute[162974]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  9 10:01:31 compute-1 nova_compute[162974]:      <entry name="serial">0fde2924-0ac7-4ea2-b42d-290df3f52929</entry>
Oct  9 10:01:31 compute-1 nova_compute[162974]:      <entry name="uuid">0fde2924-0ac7-4ea2-b42d-290df3f52929</entry>
Oct  9 10:01:31 compute-1 nova_compute[162974]:      <entry name="family">Virtual Machine</entry>
Oct  9 10:01:31 compute-1 nova_compute[162974]:    </system>
Oct  9 10:01:31 compute-1 nova_compute[162974]:  </sysinfo>
Oct  9 10:01:31 compute-1 nova_compute[162974]:  <os>
Oct  9 10:01:31 compute-1 nova_compute[162974]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  9 10:01:31 compute-1 nova_compute[162974]:    <boot dev="hd"/>
Oct  9 10:01:31 compute-1 nova_compute[162974]:    <smbios mode="sysinfo"/>
Oct  9 10:01:31 compute-1 nova_compute[162974]:  </os>
Oct  9 10:01:31 compute-1 nova_compute[162974]:  <features>
Oct  9 10:01:31 compute-1 nova_compute[162974]:    <acpi/>
Oct  9 10:01:31 compute-1 nova_compute[162974]:    <apic/>
Oct  9 10:01:31 compute-1 nova_compute[162974]:    <vmcoreinfo/>
Oct  9 10:01:31 compute-1 nova_compute[162974]:  </features>
Oct  9 10:01:31 compute-1 nova_compute[162974]:  <clock offset="utc">
Oct  9 10:01:31 compute-1 nova_compute[162974]:    <timer name="pit" tickpolicy="delay"/>
Oct  9 10:01:31 compute-1 nova_compute[162974]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  9 10:01:31 compute-1 nova_compute[162974]:    <timer name="hpet" present="no"/>
Oct  9 10:01:31 compute-1 nova_compute[162974]:  </clock>
Oct  9 10:01:31 compute-1 nova_compute[162974]:  <cpu mode="host-model" match="exact">
Oct  9 10:01:31 compute-1 nova_compute[162974]:    <topology sockets="1" cores="1" threads="1"/>
Oct  9 10:01:31 compute-1 nova_compute[162974]:  </cpu>
Oct  9 10:01:31 compute-1 nova_compute[162974]:  <devices>
Oct  9 10:01:31 compute-1 nova_compute[162974]:    <disk type="network" device="disk">
Oct  9 10:01:31 compute-1 nova_compute[162974]:      <driver type="raw" cache="none"/>
Oct  9 10:01:31 compute-1 nova_compute[162974]:      <source protocol="rbd" name="vms/0fde2924-0ac7-4ea2-b42d-290df3f52929_disk">
Oct  9 10:01:31 compute-1 nova_compute[162974]:        <host name="192.168.122.100" port="6789"/>
Oct  9 10:01:31 compute-1 nova_compute[162974]:        <host name="192.168.122.102" port="6789"/>
Oct  9 10:01:31 compute-1 nova_compute[162974]:        <host name="192.168.122.101" port="6789"/>
Oct  9 10:01:31 compute-1 nova_compute[162974]:      </source>
Oct  9 10:01:31 compute-1 nova_compute[162974]:      <auth username="openstack">
Oct  9 10:01:31 compute-1 nova_compute[162974]:        <secret type="ceph" uuid="286f8bf0-da72-5823-9a4e-ac4457d9e609"/>
Oct  9 10:01:31 compute-1 nova_compute[162974]:      </auth>
Oct  9 10:01:31 compute-1 nova_compute[162974]:      <target dev="vda" bus="virtio"/>
Oct  9 10:01:31 compute-1 nova_compute[162974]:    </disk>
Oct  9 10:01:31 compute-1 nova_compute[162974]:    <disk type="network" device="cdrom">
Oct  9 10:01:31 compute-1 nova_compute[162974]:      <driver type="raw" cache="none"/>
Oct  9 10:01:31 compute-1 nova_compute[162974]:      <source protocol="rbd" name="vms/0fde2924-0ac7-4ea2-b42d-290df3f52929_disk.config">
Oct  9 10:01:31 compute-1 nova_compute[162974]:        <host name="192.168.122.100" port="6789"/>
Oct  9 10:01:31 compute-1 nova_compute[162974]:        <host name="192.168.122.102" port="6789"/>
Oct  9 10:01:31 compute-1 nova_compute[162974]:        <host name="192.168.122.101" port="6789"/>
Oct  9 10:01:31 compute-1 nova_compute[162974]:      </source>
Oct  9 10:01:31 compute-1 nova_compute[162974]:      <auth username="openstack">
Oct  9 10:01:31 compute-1 nova_compute[162974]:        <secret type="ceph" uuid="286f8bf0-da72-5823-9a4e-ac4457d9e609"/>
Oct  9 10:01:31 compute-1 nova_compute[162974]:      </auth>
Oct  9 10:01:31 compute-1 nova_compute[162974]:      <target dev="sda" bus="sata"/>
Oct  9 10:01:31 compute-1 nova_compute[162974]:    </disk>
Oct  9 10:01:31 compute-1 nova_compute[162974]:    <interface type="ethernet">
Oct  9 10:01:31 compute-1 nova_compute[162974]:      <mac address="fa:16:3e:d9:5b:8d"/>
Oct  9 10:01:31 compute-1 nova_compute[162974]:      <model type="virtio"/>
Oct  9 10:01:31 compute-1 nova_compute[162974]:      <driver name="vhost" rx_queue_size="512"/>
Oct  9 10:01:31 compute-1 nova_compute[162974]:      <mtu size="1442"/>
Oct  9 10:01:31 compute-1 nova_compute[162974]:      <target dev="tap24c642bf-d3"/>
Oct  9 10:01:31 compute-1 nova_compute[162974]:    </interface>
Oct  9 10:01:31 compute-1 nova_compute[162974]:    <serial type="pty">
Oct  9 10:01:31 compute-1 nova_compute[162974]:      <log file="/var/lib/nova/instances/0fde2924-0ac7-4ea2-b42d-290df3f52929/console.log" append="off"/>
Oct  9 10:01:31 compute-1 nova_compute[162974]:    </serial>
Oct  9 10:01:31 compute-1 nova_compute[162974]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  9 10:01:31 compute-1 nova_compute[162974]:    <video>
Oct  9 10:01:31 compute-1 nova_compute[162974]:      <model type="virtio"/>
Oct  9 10:01:31 compute-1 nova_compute[162974]:    </video>
Oct  9 10:01:31 compute-1 nova_compute[162974]:    <input type="tablet" bus="usb"/>
Oct  9 10:01:31 compute-1 nova_compute[162974]:    <rng model="virtio">
Oct  9 10:01:31 compute-1 nova_compute[162974]:      <backend model="random">/dev/urandom</backend>
Oct  9 10:01:31 compute-1 nova_compute[162974]:    </rng>
Oct  9 10:01:31 compute-1 nova_compute[162974]:    <controller type="pci" model="pcie-root"/>
Oct  9 10:01:31 compute-1 nova_compute[162974]:    <controller type="pci" model="pcie-root-port"/>
Oct  9 10:01:31 compute-1 nova_compute[162974]:    <controller type="pci" model="pcie-root-port"/>
Oct  9 10:01:31 compute-1 nova_compute[162974]:    <controller type="pci" model="pcie-root-port"/>
Oct  9 10:01:31 compute-1 nova_compute[162974]:    <controller type="pci" model="pcie-root-port"/>
Oct  9 10:01:31 compute-1 nova_compute[162974]:    <controller type="pci" model="pcie-root-port"/>
Oct  9 10:01:31 compute-1 nova_compute[162974]:    <controller type="pci" model="pcie-root-port"/>
Oct  9 10:01:31 compute-1 nova_compute[162974]:    <controller type="pci" model="pcie-root-port"/>
Oct  9 10:01:31 compute-1 nova_compute[162974]:    <controller type="pci" model="pcie-root-port"/>
Oct  9 10:01:31 compute-1 nova_compute[162974]:    <controller type="pci" model="pcie-root-port"/>
Oct  9 10:01:31 compute-1 nova_compute[162974]:    <controller type="pci" model="pcie-root-port"/>
Oct  9 10:01:31 compute-1 nova_compute[162974]:    <controller type="pci" model="pcie-root-port"/>
Oct  9 10:01:31 compute-1 nova_compute[162974]:    <controller type="pci" model="pcie-root-port"/>
Oct  9 10:01:31 compute-1 nova_compute[162974]:    <controller type="pci" model="pcie-root-port"/>
Oct  9 10:01:31 compute-1 nova_compute[162974]:    <controller type="pci" model="pcie-root-port"/>
Oct  9 10:01:31 compute-1 nova_compute[162974]:    <controller type="pci" model="pcie-root-port"/>
Oct  9 10:01:31 compute-1 nova_compute[162974]:    <controller type="pci" model="pcie-root-port"/>
Oct  9 10:01:31 compute-1 nova_compute[162974]:    <controller type="pci" model="pcie-root-port"/>
Oct  9 10:01:31 compute-1 nova_compute[162974]:    <controller type="pci" model="pcie-root-port"/>
Oct  9 10:01:31 compute-1 nova_compute[162974]:    <controller type="pci" model="pcie-root-port"/>
Oct  9 10:01:31 compute-1 nova_compute[162974]:    <controller type="pci" model="pcie-root-port"/>
Oct  9 10:01:31 compute-1 nova_compute[162974]:    <controller type="pci" model="pcie-root-port"/>
Oct  9 10:01:31 compute-1 nova_compute[162974]:    <controller type="pci" model="pcie-root-port"/>
Oct  9 10:01:31 compute-1 nova_compute[162974]:    <controller type="pci" model="pcie-root-port"/>
Oct  9 10:01:31 compute-1 nova_compute[162974]:    <controller type="pci" model="pcie-root-port"/>
Oct  9 10:01:31 compute-1 nova_compute[162974]:    <controller type="usb" index="0"/>
Oct  9 10:01:31 compute-1 nova_compute[162974]:    <memballoon model="virtio">
Oct  9 10:01:31 compute-1 nova_compute[162974]:      <stats period="10"/>
Oct  9 10:01:31 compute-1 nova_compute[162974]:    </memballoon>
Oct  9 10:01:31 compute-1 nova_compute[162974]:  </devices>
Oct  9 10:01:31 compute-1 nova_compute[162974]: </domain>
Oct  9 10:01:31 compute-1 nova_compute[162974]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  9 10:01:31 compute-1 nova_compute[162974]: 2025-10-09 10:01:31.732 2 DEBUG nova.compute.manager [None req-b7ccf91e-3edd-4bfb-8579-db3ccb0dc5a3 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: 0fde2924-0ac7-4ea2-b42d-290df3f52929] Preparing to wait for external event network-vif-plugged-24c642bf-d3e7-4003-97f5-0e43aca6db7b prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct  9 10:01:31 compute-1 nova_compute[162974]: 2025-10-09 10:01:31.732 2 DEBUG oslo_concurrency.lockutils [None req-b7ccf91e-3edd-4bfb-8579-db3ccb0dc5a3 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Acquiring lock "0fde2924-0ac7-4ea2-b42d-290df3f52929-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  9 10:01:31 compute-1 nova_compute[162974]: 2025-10-09 10:01:31.732 2 DEBUG oslo_concurrency.lockutils [None req-b7ccf91e-3edd-4bfb-8579-db3ccb0dc5a3 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Lock "0fde2924-0ac7-4ea2-b42d-290df3f52929-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  9 10:01:31 compute-1 nova_compute[162974]: 2025-10-09 10:01:31.733 2 DEBUG oslo_concurrency.lockutils [None req-b7ccf91e-3edd-4bfb-8579-db3ccb0dc5a3 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Lock "0fde2924-0ac7-4ea2-b42d-290df3f52929-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  9 10:01:31 compute-1 nova_compute[162974]: 2025-10-09 10:01:31.733 2 DEBUG nova.virt.libvirt.vif [None req-b7ccf91e-3edd-4bfb-8579-db3ccb0dc5a3 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-09T10:01:27Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-683406071',display_name='tempest-TestNetworkBasicOps-server-683406071',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-683406071',id=9,image_ref='9546778e-959c-466e-9bef-81ace5bd1cc5',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBPNC5+71zwS4peThbBj0rTs2iUGxV6KoykdELOAeuqqTcHI7GCX2cJpli9Fly77fC2uQduSSC/CbKmPPAuRDVwt9Ei0C4MDfiTMQHdYKRTolBvlRviK/zoaSsqEMl47FRQ==',key_name='tempest-TestNetworkBasicOps-614246910',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='c69d102fb5504f48809f5fc47f1cb831',ramdisk_id='',reservation_id='r-8z022t5g',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='9546778e-959c-466e-9bef-81ace5bd1cc5',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-74406332',owner_user_name='tempest-TestNetworkBasicOps-74406332-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-09T10:01:28Z,user_data=None,user_id='2351e05157514d1995a1ea4151d12fee',uuid=0fde2924-0ac7-4ea2-b42d-290df3f52929,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "24c642bf-d3e7-4003-97f5-0e43aca6db7b", "address": "fa:16:3e:d9:5b:8d", "network": {"id": "f1bd1d23-0de7-4b9c-b34f-27d8df0f3147", "bridge": "br-int", "label": "tempest-network-smoke--147591991", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.223", "type": "floating", "version": 4, "meta": {}}]}], 
"routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c69d102fb5504f48809f5fc47f1cb831", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap24c642bf-d3", "ovs_interfaceid": "24c642bf-d3e7-4003-97f5-0e43aca6db7b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  9 10:01:31 compute-1 nova_compute[162974]: 2025-10-09 10:01:31.733 2 DEBUG nova.network.os_vif_util [None req-b7ccf91e-3edd-4bfb-8579-db3ccb0dc5a3 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Converting VIF {"id": "24c642bf-d3e7-4003-97f5-0e43aca6db7b", "address": "fa:16:3e:d9:5b:8d", "network": {"id": "f1bd1d23-0de7-4b9c-b34f-27d8df0f3147", "bridge": "br-int", "label": "tempest-network-smoke--147591991", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.223", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c69d102fb5504f48809f5fc47f1cb831", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap24c642bf-d3", "ovs_interfaceid": "24c642bf-d3e7-4003-97f5-0e43aca6db7b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  9 10:01:31 compute-1 nova_compute[162974]: 2025-10-09 10:01:31.734 2 DEBUG nova.network.os_vif_util [None req-b7ccf91e-3edd-4bfb-8579-db3ccb0dc5a3 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d9:5b:8d,bridge_name='br-int',has_traffic_filtering=True,id=24c642bf-d3e7-4003-97f5-0e43aca6db7b,network=Network(f1bd1d23-0de7-4b9c-b34f-27d8df0f3147),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap24c642bf-d3') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  9 10:01:31 compute-1 nova_compute[162974]: 2025-10-09 10:01:31.734 2 DEBUG os_vif [None req-b7ccf91e-3edd-4bfb-8579-db3ccb0dc5a3 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:d9:5b:8d,bridge_name='br-int',has_traffic_filtering=True,id=24c642bf-d3e7-4003-97f5-0e43aca6db7b,network=Network(f1bd1d23-0de7-4b9c-b34f-27d8df0f3147),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap24c642bf-d3') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  9 10:01:31 compute-1 nova_compute[162974]: 2025-10-09 10:01:31.734 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:01:31 compute-1 nova_compute[162974]: 2025-10-09 10:01:31.735 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  9 10:01:31 compute-1 nova_compute[162974]: 2025-10-09 10:01:31.735 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  9 10:01:31 compute-1 nova_compute[162974]: 2025-10-09 10:01:31.737 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:01:31 compute-1 nova_compute[162974]: 2025-10-09 10:01:31.738 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap24c642bf-d3, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  9 10:01:31 compute-1 nova_compute[162974]: 2025-10-09 10:01:31.738 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap24c642bf-d3, col_values=(('external_ids', {'iface-id': '24c642bf-d3e7-4003-97f5-0e43aca6db7b', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:d9:5b:8d', 'vm-uuid': '0fde2924-0ac7-4ea2-b42d-290df3f52929'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  9 10:01:31 compute-1 NetworkManager[982]: <info>  [1760004091.7404] manager: (tap24c642bf-d3): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/54)
Oct  9 10:01:31 compute-1 nova_compute[162974]: 2025-10-09 10:01:31.742 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  9 10:01:31 compute-1 nova_compute[162974]: 2025-10-09 10:01:31.745 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:01:31 compute-1 nova_compute[162974]: 2025-10-09 10:01:31.746 2 INFO os_vif [None req-b7ccf91e-3edd-4bfb-8579-db3ccb0dc5a3 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:d9:5b:8d,bridge_name='br-int',has_traffic_filtering=True,id=24c642bf-d3e7-4003-97f5-0e43aca6db7b,network=Network(f1bd1d23-0de7-4b9c-b34f-27d8df0f3147),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap24c642bf-d3')#033[00m
Oct  9 10:01:31 compute-1 nova_compute[162974]: 2025-10-09 10:01:31.780 2 DEBUG nova.virt.libvirt.driver [None req-b7ccf91e-3edd-4bfb-8579-db3ccb0dc5a3 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  9 10:01:31 compute-1 nova_compute[162974]: 2025-10-09 10:01:31.780 2 DEBUG nova.virt.libvirt.driver [None req-b7ccf91e-3edd-4bfb-8579-db3ccb0dc5a3 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  9 10:01:31 compute-1 nova_compute[162974]: 2025-10-09 10:01:31.780 2 DEBUG nova.virt.libvirt.driver [None req-b7ccf91e-3edd-4bfb-8579-db3ccb0dc5a3 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] No VIF found with MAC fa:16:3e:d9:5b:8d, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  9 10:01:31 compute-1 nova_compute[162974]: 2025-10-09 10:01:31.780 2 INFO nova.virt.libvirt.driver [None req-b7ccf91e-3edd-4bfb-8579-db3ccb0dc5a3 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: 0fde2924-0ac7-4ea2-b42d-290df3f52929] Using config drive#033[00m
Oct  9 10:01:31 compute-1 nova_compute[162974]: 2025-10-09 10:01:31.801 2 DEBUG nova.storage.rbd_utils [None req-b7ccf91e-3edd-4bfb-8579-db3ccb0dc5a3 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] rbd image 0fde2924-0ac7-4ea2-b42d-290df3f52929_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  9 10:01:32 compute-1 nova_compute[162974]: 2025-10-09 10:01:32.223 2 INFO nova.virt.libvirt.driver [None req-b7ccf91e-3edd-4bfb-8579-db3ccb0dc5a3 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: 0fde2924-0ac7-4ea2-b42d-290df3f52929] Creating config drive at /var/lib/nova/instances/0fde2924-0ac7-4ea2-b42d-290df3f52929/disk.config#033[00m
Oct  9 10:01:32 compute-1 nova_compute[162974]: 2025-10-09 10:01:32.227 2 DEBUG oslo_concurrency.processutils [None req-b7ccf91e-3edd-4bfb-8579-db3ccb0dc5a3 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/0fde2924-0ac7-4ea2-b42d-290df3f52929/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmphw96eop1 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  9 10:01:32 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:01:32 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 10:01:32 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:01:32.249 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 10:01:32 compute-1 nova_compute[162974]: 2025-10-09 10:01:32.351 2 DEBUG oslo_concurrency.processutils [None req-b7ccf91e-3edd-4bfb-8579-db3ccb0dc5a3 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/0fde2924-0ac7-4ea2-b42d-290df3f52929/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmphw96eop1" returned: 0 in 0.123s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  9 10:01:32 compute-1 nova_compute[162974]: 2025-10-09 10:01:32.377 2 DEBUG nova.storage.rbd_utils [None req-b7ccf91e-3edd-4bfb-8579-db3ccb0dc5a3 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] rbd image 0fde2924-0ac7-4ea2-b42d-290df3f52929_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  9 10:01:32 compute-1 nova_compute[162974]: 2025-10-09 10:01:32.380 2 DEBUG oslo_concurrency.processutils [None req-b7ccf91e-3edd-4bfb-8579-db3ccb0dc5a3 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/0fde2924-0ac7-4ea2-b42d-290df3f52929/disk.config 0fde2924-0ac7-4ea2-b42d-290df3f52929_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  9 10:01:32 compute-1 nova_compute[162974]: 2025-10-09 10:01:32.488 2 DEBUG oslo_concurrency.processutils [None req-b7ccf91e-3edd-4bfb-8579-db3ccb0dc5a3 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/0fde2924-0ac7-4ea2-b42d-290df3f52929/disk.config 0fde2924-0ac7-4ea2-b42d-290df3f52929_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.108s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  9 10:01:32 compute-1 nova_compute[162974]: 2025-10-09 10:01:32.490 2 INFO nova.virt.libvirt.driver [None req-b7ccf91e-3edd-4bfb-8579-db3ccb0dc5a3 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: 0fde2924-0ac7-4ea2-b42d-290df3f52929] Deleting local config drive /var/lib/nova/instances/0fde2924-0ac7-4ea2-b42d-290df3f52929/disk.config because it was imported into RBD.#033[00m
Oct  9 10:01:32 compute-1 kernel: tap24c642bf-d3: entered promiscuous mode
Oct  9 10:01:32 compute-1 nova_compute[162974]: 2025-10-09 10:01:32.535 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:01:32 compute-1 NetworkManager[982]: <info>  [1760004092.5384] manager: (tap24c642bf-d3): new Tun device (/org/freedesktop/NetworkManager/Devices/55)
Oct  9 10:01:32 compute-1 ovn_controller[62080]: 2025-10-09T10:01:32Z|00076|binding|INFO|Claiming lport 24c642bf-d3e7-4003-97f5-0e43aca6db7b for this chassis.
Oct  9 10:01:32 compute-1 ovn_controller[62080]: 2025-10-09T10:01:32Z|00077|binding|INFO|24c642bf-d3e7-4003-97f5-0e43aca6db7b: Claiming fa:16:3e:d9:5b:8d 10.100.0.5
Oct  9 10:01:32 compute-1 nova_compute[162974]: 2025-10-09 10:01:32.545 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:01:32 compute-1 NetworkManager[982]: <info>  [1760004092.5478] manager: (patch-br-int-to-provnet-ceb5df48-9471-46cc-b494-923d3260d7ae): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/56)
Oct  9 10:01:32 compute-1 NetworkManager[982]: <info>  [1760004092.5485] manager: (patch-provnet-ceb5df48-9471-46cc-b494-923d3260d7ae-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/57)
Oct  9 10:01:32 compute-1 ovn_metadata_agent[71054]: 2025-10-09 10:01:32.548 71059 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d9:5b:8d 10.100.0.5'], port_security=['fa:16:3e:d9:5b:8d 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-TestNetworkBasicOps-1238411040', 'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '0fde2924-0ac7-4ea2-b42d-290df3f52929', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f1bd1d23-0de7-4b9c-b34f-27d8df0f3147', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-TestNetworkBasicOps-1238411040', 'neutron:project_id': 'c69d102fb5504f48809f5fc47f1cb831', 'neutron:revision_number': '7', 'neutron:security_group_ids': '938aac20-7e1a-43e3-b950-0829bdd160e1', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.223'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=887b951a-388d-4a48-aabf-54a7b01d9585, chassis=[<ovs.db.idl.Row object at 0x7fcc797b4850>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcc797b4850>], logical_port=24c642bf-d3e7-4003-97f5-0e43aca6db7b) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  9 10:01:32 compute-1 ovn_metadata_agent[71054]: 2025-10-09 10:01:32.549 71059 INFO neutron.agent.ovn.metadata.agent [-] Port 24c642bf-d3e7-4003-97f5-0e43aca6db7b in datapath f1bd1d23-0de7-4b9c-b34f-27d8df0f3147 bound to our chassis#033[00m
Oct  9 10:01:32 compute-1 ovn_metadata_agent[71054]: 2025-10-09 10:01:32.549 71059 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network f1bd1d23-0de7-4b9c-b34f-27d8df0f3147#033[00m
Oct  9 10:01:32 compute-1 ovn_metadata_agent[71054]: 2025-10-09 10:01:32.558 165637 DEBUG oslo.privsep.daemon [-] privsep: reply[2158f308-8d80-4124-bacb-eebd930491b9]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  9 10:01:32 compute-1 ovn_metadata_agent[71054]: 2025-10-09 10:01:32.561 71059 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapf1bd1d23-01 in ovnmeta-f1bd1d23-0de7-4b9c-b34f-27d8df0f3147 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct  9 10:01:32 compute-1 ovn_metadata_agent[71054]: 2025-10-09 10:01:32.563 165637 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapf1bd1d23-00 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct  9 10:01:32 compute-1 ovn_metadata_agent[71054]: 2025-10-09 10:01:32.563 165637 DEBUG oslo.privsep.daemon [-] privsep: reply[6150a262-cd47-48a6-93a0-e51f6cfd750a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  9 10:01:32 compute-1 ovn_metadata_agent[71054]: 2025-10-09 10:01:32.564 165637 DEBUG oslo.privsep.daemon [-] privsep: reply[7f6704da-f87c-4ec8-9455-eed8b6f65e3b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  9 10:01:32 compute-1 systemd-udevd[170717]: Network interface NamePolicy= disabled on kernel command line.
Oct  9 10:01:32 compute-1 NetworkManager[982]: <info>  [1760004092.5776] device (tap24c642bf-d3): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  9 10:01:32 compute-1 NetworkManager[982]: <info>  [1760004092.5785] device (tap24c642bf-d3): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  9 10:01:32 compute-1 ovn_metadata_agent[71054]: 2025-10-09 10:01:32.581 71273 DEBUG oslo.privsep.daemon [-] privsep: reply[2daedd3e-4bb8-48a5-8601-8d7da5f683c6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  9 10:01:32 compute-1 systemd-machined[120683]: New machine qemu-5-instance-00000009.
Oct  9 10:01:32 compute-1 podman[170681]: 2025-10-09 10:01:32.609874793 +0000 UTC m=+0.103838268 container health_status a9976f6a2312783f72012e1ec9b07f14297aa62297711f47c0d257996be3427a (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.license=GPLv2, tcib_managed=true)
Oct  9 10:01:32 compute-1 ovn_metadata_agent[71054]: 2025-10-09 10:01:32.608 165637 DEBUG oslo.privsep.daemon [-] privsep: reply[dff045ba-9c13-40c1-bf8d-6d96a7ab451b]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  9 10:01:32 compute-1 systemd[1]: Started Virtual Machine qemu-5-instance-00000009.
Oct  9 10:01:32 compute-1 podman[170680]: 2025-10-09 10:01:32.623355436 +0000 UTC m=+0.117181351 container health_status 5e1150c93314d5b38815fc8130e03b03cc91771cd442ce724d63cd33a7373f75 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_metadata_agent, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent)
Oct  9 10:01:32 compute-1 ovn_metadata_agent[71054]: 2025-10-09 10:01:32.641 165694 DEBUG oslo.privsep.daemon [-] privsep: reply[d641ec51-642e-4c62-8dda-4ac838967106]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  9 10:01:32 compute-1 NetworkManager[982]: <info>  [1760004092.6469] manager: (tapf1bd1d23-00): new Veth device (/org/freedesktop/NetworkManager/Devices/58)
Oct  9 10:01:32 compute-1 ovn_metadata_agent[71054]: 2025-10-09 10:01:32.647 165637 DEBUG oslo.privsep.daemon [-] privsep: reply[443ecede-df85-476c-b8f1-827cb3bfee2f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  9 10:01:32 compute-1 nova_compute[162974]: 2025-10-09 10:01:32.651 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:01:32 compute-1 nova_compute[162974]: 2025-10-09 10:01:32.662 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:01:32 compute-1 ovn_controller[62080]: 2025-10-09T10:01:32Z|00078|binding|INFO|Setting lport 24c642bf-d3e7-4003-97f5-0e43aca6db7b ovn-installed in OVS
Oct  9 10:01:32 compute-1 ovn_controller[62080]: 2025-10-09T10:01:32Z|00079|binding|INFO|Setting lport 24c642bf-d3e7-4003-97f5-0e43aca6db7b up in Southbound
Oct  9 10:01:32 compute-1 ovn_metadata_agent[71054]: 2025-10-09 10:01:32.679 165694 DEBUG oslo.privsep.daemon [-] privsep: reply[482a3b96-5426-4af0-8d6b-15a5c6c24301]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  9 10:01:32 compute-1 nova_compute[162974]: 2025-10-09 10:01:32.678 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:01:32 compute-1 ovn_metadata_agent[71054]: 2025-10-09 10:01:32.681 165694 DEBUG oslo.privsep.daemon [-] privsep: reply[c1ea4b19-5209-42ed-a42e-2093eb92f3fc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  9 10:01:32 compute-1 NetworkManager[982]: <info>  [1760004092.6989] device (tapf1bd1d23-00): carrier: link connected
Oct  9 10:01:32 compute-1 ovn_metadata_agent[71054]: 2025-10-09 10:01:32.703 165694 DEBUG oslo.privsep.daemon [-] privsep: reply[f153fd8f-7646-4376-88a3-ce2b7c66cde3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  9 10:01:32 compute-1 ovn_metadata_agent[71054]: 2025-10-09 10:01:32.719 165637 DEBUG oslo.privsep.daemon [-] privsep: reply[bf78a294-71bc-4b13-809c-8f8d693163ad]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf1bd1d23-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 4], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 4], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:14:76:2f'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 34], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 177626, 'reachable_time': 23639, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 170776, 'error': None, 'target': 'ovnmeta-f1bd1d23-0de7-4b9c-b34f-27d8df0f3147', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  9 10:01:32 compute-1 ovn_metadata_agent[71054]: 2025-10-09 10:01:32.733 165637 DEBUG oslo.privsep.daemon [-] privsep: reply[e9648c23-d829-4a78-9eb7-bbc78d8c340b]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe14:762f'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 177626, 'tstamp': 177626}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 170777, 'error': None, 'target': 'ovnmeta-f1bd1d23-0de7-4b9c-b34f-27d8df0f3147', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  9 10:01:32 compute-1 ovn_metadata_agent[71054]: 2025-10-09 10:01:32.749 165637 DEBUG oslo.privsep.daemon [-] privsep: reply[48b7c4b1-cba2-47cb-b1e3-57cbd636b255]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf1bd1d23-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 4], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 4], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:14:76:2f'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 34], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 177626, 'reachable_time': 23639, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 170778, 'error': None, 'target': 'ovnmeta-f1bd1d23-0de7-4b9c-b34f-27d8df0f3147', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  9 10:01:32 compute-1 nova_compute[162974]: 2025-10-09 10:01:32.770 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:01:32 compute-1 ovn_metadata_agent[71054]: 2025-10-09 10:01:32.784 165637 DEBUG oslo.privsep.daemon [-] privsep: reply[7070c60c-4ab2-418f-983d-fe6ca70c8dac]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  9 10:01:32 compute-1 nova_compute[162974]: 2025-10-09 10:01:32.813 2 DEBUG nova.compute.manager [req-27e7f96e-6152-432e-a152-b8b75df0a24d req-f189e7c6-699f-4d0b-85a0-75ac588eb4f6 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: 0fde2924-0ac7-4ea2-b42d-290df3f52929] Received event network-vif-plugged-24c642bf-d3e7-4003-97f5-0e43aca6db7b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  9 10:01:32 compute-1 nova_compute[162974]: 2025-10-09 10:01:32.814 2 DEBUG oslo_concurrency.lockutils [req-27e7f96e-6152-432e-a152-b8b75df0a24d req-f189e7c6-699f-4d0b-85a0-75ac588eb4f6 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] Acquiring lock "0fde2924-0ac7-4ea2-b42d-290df3f52929-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  9 10:01:32 compute-1 nova_compute[162974]: 2025-10-09 10:01:32.814 2 DEBUG oslo_concurrency.lockutils [req-27e7f96e-6152-432e-a152-b8b75df0a24d req-f189e7c6-699f-4d0b-85a0-75ac588eb4f6 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] Lock "0fde2924-0ac7-4ea2-b42d-290df3f52929-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  9 10:01:32 compute-1 nova_compute[162974]: 2025-10-09 10:01:32.814 2 DEBUG oslo_concurrency.lockutils [req-27e7f96e-6152-432e-a152-b8b75df0a24d req-f189e7c6-699f-4d0b-85a0-75ac588eb4f6 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] Lock "0fde2924-0ac7-4ea2-b42d-290df3f52929-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  9 10:01:32 compute-1 nova_compute[162974]: 2025-10-09 10:01:32.815 2 DEBUG nova.compute.manager [req-27e7f96e-6152-432e-a152-b8b75df0a24d req-f189e7c6-699f-4d0b-85a0-75ac588eb4f6 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: 0fde2924-0ac7-4ea2-b42d-290df3f52929] Processing event network-vif-plugged-24c642bf-d3e7-4003-97f5-0e43aca6db7b _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct  9 10:01:32 compute-1 ovn_metadata_agent[71054]: 2025-10-09 10:01:32.860 165637 DEBUG oslo.privsep.daemon [-] privsep: reply[5303ff51-3967-4a0c-84d8-e5e58a970c5f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  9 10:01:32 compute-1 ovn_metadata_agent[71054]: 2025-10-09 10:01:32.861 71059 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf1bd1d23-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  9 10:01:32 compute-1 ovn_metadata_agent[71054]: 2025-10-09 10:01:32.861 71059 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  9 10:01:32 compute-1 ovn_metadata_agent[71054]: 2025-10-09 10:01:32.861 71059 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf1bd1d23-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  9 10:01:32 compute-1 nova_compute[162974]: 2025-10-09 10:01:32.863 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:01:32 compute-1 NetworkManager[982]: <info>  [1760004092.8638] manager: (tapf1bd1d23-00): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/59)
Oct  9 10:01:32 compute-1 kernel: tapf1bd1d23-00: entered promiscuous mode
Oct  9 10:01:32 compute-1 nova_compute[162974]: 2025-10-09 10:01:32.865 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:01:32 compute-1 ovn_metadata_agent[71054]: 2025-10-09 10:01:32.866 71059 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapf1bd1d23-00, col_values=(('external_ids', {'iface-id': '8eb8f8eb-7931-447c-950a-c32841e79526'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  9 10:01:32 compute-1 nova_compute[162974]: 2025-10-09 10:01:32.867 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:01:32 compute-1 ovn_controller[62080]: 2025-10-09T10:01:32Z|00080|binding|INFO|Releasing lport 8eb8f8eb-7931-447c-950a-c32841e79526 from this chassis (sb_readonly=0)
Oct  9 10:01:32 compute-1 nova_compute[162974]: 2025-10-09 10:01:32.881 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:01:32 compute-1 nova_compute[162974]: 2025-10-09 10:01:32.881 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:01:32 compute-1 ovn_metadata_agent[71054]: 2025-10-09 10:01:32.881 71059 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/f1bd1d23-0de7-4b9c-b34f-27d8df0f3147.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/f1bd1d23-0de7-4b9c-b34f-27d8df0f3147.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct  9 10:01:32 compute-1 ovn_metadata_agent[71054]: 2025-10-09 10:01:32.882 165637 DEBUG oslo.privsep.daemon [-] privsep: reply[6a302e14-01a1-4e4d-96d1-d4487f2d5822]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  9 10:01:32 compute-1 ovn_metadata_agent[71054]: 2025-10-09 10:01:32.883 71059 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct  9 10:01:32 compute-1 ovn_metadata_agent[71054]: global
Oct  9 10:01:32 compute-1 ovn_metadata_agent[71054]:    log         /dev/log local0 debug
Oct  9 10:01:32 compute-1 ovn_metadata_agent[71054]:    log-tag     haproxy-metadata-proxy-f1bd1d23-0de7-4b9c-b34f-27d8df0f3147
Oct  9 10:01:32 compute-1 ovn_metadata_agent[71054]:    user        root
Oct  9 10:01:32 compute-1 ovn_metadata_agent[71054]:    group       root
Oct  9 10:01:32 compute-1 ovn_metadata_agent[71054]:    maxconn     1024
Oct  9 10:01:32 compute-1 ovn_metadata_agent[71054]:    pidfile     /var/lib/neutron/external/pids/f1bd1d23-0de7-4b9c-b34f-27d8df0f3147.pid.haproxy
Oct  9 10:01:32 compute-1 ovn_metadata_agent[71054]:    daemon
Oct  9 10:01:32 compute-1 ovn_metadata_agent[71054]: 
Oct  9 10:01:32 compute-1 ovn_metadata_agent[71054]: defaults
Oct  9 10:01:32 compute-1 ovn_metadata_agent[71054]:    log global
Oct  9 10:01:32 compute-1 ovn_metadata_agent[71054]:    mode http
Oct  9 10:01:32 compute-1 ovn_metadata_agent[71054]:    option httplog
Oct  9 10:01:32 compute-1 ovn_metadata_agent[71054]:    option dontlognull
Oct  9 10:01:32 compute-1 ovn_metadata_agent[71054]:    option http-server-close
Oct  9 10:01:32 compute-1 ovn_metadata_agent[71054]:    option forwardfor
Oct  9 10:01:32 compute-1 ovn_metadata_agent[71054]:    retries                 3
Oct  9 10:01:32 compute-1 ovn_metadata_agent[71054]:    timeout http-request    30s
Oct  9 10:01:32 compute-1 ovn_metadata_agent[71054]:    timeout connect         30s
Oct  9 10:01:32 compute-1 ovn_metadata_agent[71054]:    timeout client          32s
Oct  9 10:01:32 compute-1 ovn_metadata_agent[71054]:    timeout server          32s
Oct  9 10:01:32 compute-1 ovn_metadata_agent[71054]:    timeout http-keep-alive 30s
Oct  9 10:01:32 compute-1 ovn_metadata_agent[71054]: 
Oct  9 10:01:32 compute-1 ovn_metadata_agent[71054]: 
Oct  9 10:01:32 compute-1 ovn_metadata_agent[71054]: listen listener
Oct  9 10:01:32 compute-1 ovn_metadata_agent[71054]:    bind 169.254.169.254:80
Oct  9 10:01:32 compute-1 ovn_metadata_agent[71054]:    server metadata /var/lib/neutron/metadata_proxy
Oct  9 10:01:32 compute-1 ovn_metadata_agent[71054]:    http-request add-header X-OVN-Network-ID f1bd1d23-0de7-4b9c-b34f-27d8df0f3147
Oct  9 10:01:32 compute-1 ovn_metadata_agent[71054]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct  9 10:01:32 compute-1 ovn_metadata_agent[71054]: 2025-10-09 10:01:32.885 71059 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-f1bd1d23-0de7-4b9c-b34f-27d8df0f3147', 'env', 'PROCESS_TAG=haproxy-f1bd1d23-0de7-4b9c-b34f-27d8df0f3147', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/f1bd1d23-0de7-4b9c-b34f-27d8df0f3147.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct  9 10:01:33 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:01:33 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:01:33 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:01:33.072 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:01:33 compute-1 podman[170849]: 2025-10-09 10:01:33.209444375 +0000 UTC m=+0.038960569 container create 0e7b3cff2d349dc218fd668816b472146bb100b2580a49e6b7e4a0cf640659b4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-f1bd1d23-0de7-4b9c-b34f-27d8df0f3147, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct  9 10:01:33 compute-1 systemd[1]: Started libpod-conmon-0e7b3cff2d349dc218fd668816b472146bb100b2580a49e6b7e4a0cf640659b4.scope.
Oct  9 10:01:33 compute-1 nova_compute[162974]: 2025-10-09 10:01:33.255 2 DEBUG nova.compute.manager [None req-b7ccf91e-3edd-4bfb-8579-db3ccb0dc5a3 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: 0fde2924-0ac7-4ea2-b42d-290df3f52929] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct  9 10:01:33 compute-1 nova_compute[162974]: 2025-10-09 10:01:33.259 2 DEBUG nova.virt.driver [None req-433ad811-a058-4508-a429-c5f40f506b5b - - - - - -] Emitting event <LifecycleEvent: 1760004093.2581146, 0fde2924-0ac7-4ea2-b42d-290df3f52929 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  9 10:01:33 compute-1 nova_compute[162974]: 2025-10-09 10:01:33.260 2 INFO nova.compute.manager [None req-433ad811-a058-4508-a429-c5f40f506b5b - - - - - -] [instance: 0fde2924-0ac7-4ea2-b42d-290df3f52929] VM Started (Lifecycle Event)#033[00m
Oct  9 10:01:33 compute-1 systemd[1]: Started libcrun container.
Oct  9 10:01:33 compute-1 nova_compute[162974]: 2025-10-09 10:01:33.263 2 DEBUG nova.virt.libvirt.driver [None req-b7ccf91e-3edd-4bfb-8579-db3ccb0dc5a3 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: 0fde2924-0ac7-4ea2-b42d-290df3f52929] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct  9 10:01:33 compute-1 nova_compute[162974]: 2025-10-09 10:01:33.266 2 INFO nova.virt.libvirt.driver [-] [instance: 0fde2924-0ac7-4ea2-b42d-290df3f52929] Instance spawned successfully.#033[00m
Oct  9 10:01:33 compute-1 nova_compute[162974]: 2025-10-09 10:01:33.267 2 DEBUG nova.virt.libvirt.driver [None req-b7ccf91e-3edd-4bfb-8579-db3ccb0dc5a3 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: 0fde2924-0ac7-4ea2-b42d-290df3f52929] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct  9 10:01:33 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7eaf819a371be319bcec251902a55501d9807b84351499abfedbd74b4f82185b/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct  9 10:01:33 compute-1 podman[170849]: 2025-10-09 10:01:33.279969496 +0000 UTC m=+0.109485710 container init 0e7b3cff2d349dc218fd668816b472146bb100b2580a49e6b7e4a0cf640659b4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-f1bd1d23-0de7-4b9c-b34f-27d8df0f3147, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct  9 10:01:33 compute-1 nova_compute[162974]: 2025-10-09 10:01:33.280 2 DEBUG nova.compute.manager [None req-433ad811-a058-4508-a429-c5f40f506b5b - - - - - -] [instance: 0fde2924-0ac7-4ea2-b42d-290df3f52929] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  9 10:01:33 compute-1 podman[170849]: 2025-10-09 10:01:33.285366908 +0000 UTC m=+0.114883102 container start 0e7b3cff2d349dc218fd668816b472146bb100b2580a49e6b7e4a0cf640659b4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-f1bd1d23-0de7-4b9c-b34f-27d8df0f3147, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Oct  9 10:01:33 compute-1 nova_compute[162974]: 2025-10-09 10:01:33.287 2 DEBUG nova.compute.manager [None req-433ad811-a058-4508-a429-c5f40f506b5b - - - - - -] [instance: 0fde2924-0ac7-4ea2-b42d-290df3f52929] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  9 10:01:33 compute-1 podman[170849]: 2025-10-09 10:01:33.1949705 +0000 UTC m=+0.024486715 image pull 26280da617d52ac64ac1fa9a18a315d65ac237c1373028f8064008a821dbfd8d quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7
Oct  9 10:01:33 compute-1 nova_compute[162974]: 2025-10-09 10:01:33.292 2 DEBUG nova.virt.libvirt.driver [None req-b7ccf91e-3edd-4bfb-8579-db3ccb0dc5a3 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: 0fde2924-0ac7-4ea2-b42d-290df3f52929] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  9 10:01:33 compute-1 nova_compute[162974]: 2025-10-09 10:01:33.292 2 DEBUG nova.virt.libvirt.driver [None req-b7ccf91e-3edd-4bfb-8579-db3ccb0dc5a3 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: 0fde2924-0ac7-4ea2-b42d-290df3f52929] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  9 10:01:33 compute-1 nova_compute[162974]: 2025-10-09 10:01:33.293 2 DEBUG nova.virt.libvirt.driver [None req-b7ccf91e-3edd-4bfb-8579-db3ccb0dc5a3 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: 0fde2924-0ac7-4ea2-b42d-290df3f52929] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  9 10:01:33 compute-1 nova_compute[162974]: 2025-10-09 10:01:33.293 2 DEBUG nova.virt.libvirt.driver [None req-b7ccf91e-3edd-4bfb-8579-db3ccb0dc5a3 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: 0fde2924-0ac7-4ea2-b42d-290df3f52929] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  9 10:01:33 compute-1 nova_compute[162974]: 2025-10-09 10:01:33.293 2 DEBUG nova.virt.libvirt.driver [None req-b7ccf91e-3edd-4bfb-8579-db3ccb0dc5a3 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: 0fde2924-0ac7-4ea2-b42d-290df3f52929] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  9 10:01:33 compute-1 nova_compute[162974]: 2025-10-09 10:01:33.294 2 DEBUG nova.virt.libvirt.driver [None req-b7ccf91e-3edd-4bfb-8579-db3ccb0dc5a3 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: 0fde2924-0ac7-4ea2-b42d-290df3f52929] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  9 10:01:33 compute-1 neutron-haproxy-ovnmeta-f1bd1d23-0de7-4b9c-b34f-27d8df0f3147[170861]: [NOTICE]   (170865) : New worker (170867) forked
Oct  9 10:01:33 compute-1 neutron-haproxy-ovnmeta-f1bd1d23-0de7-4b9c-b34f-27d8df0f3147[170861]: [NOTICE]   (170865) : Loading success.
Oct  9 10:01:33 compute-1 nova_compute[162974]: 2025-10-09 10:01:33.304 2 INFO nova.compute.manager [None req-433ad811-a058-4508-a429-c5f40f506b5b - - - - - -] [instance: 0fde2924-0ac7-4ea2-b42d-290df3f52929] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  9 10:01:33 compute-1 nova_compute[162974]: 2025-10-09 10:01:33.305 2 DEBUG nova.virt.driver [None req-433ad811-a058-4508-a429-c5f40f506b5b - - - - - -] Emitting event <LifecycleEvent: 1760004093.2582216, 0fde2924-0ac7-4ea2-b42d-290df3f52929 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  9 10:01:33 compute-1 nova_compute[162974]: 2025-10-09 10:01:33.305 2 INFO nova.compute.manager [None req-433ad811-a058-4508-a429-c5f40f506b5b - - - - - -] [instance: 0fde2924-0ac7-4ea2-b42d-290df3f52929] VM Paused (Lifecycle Event)#033[00m
Oct  9 10:01:33 compute-1 nova_compute[162974]: 2025-10-09 10:01:33.328 2 DEBUG nova.compute.manager [None req-433ad811-a058-4508-a429-c5f40f506b5b - - - - - -] [instance: 0fde2924-0ac7-4ea2-b42d-290df3f52929] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  9 10:01:33 compute-1 nova_compute[162974]: 2025-10-09 10:01:33.333 2 DEBUG nova.virt.driver [None req-433ad811-a058-4508-a429-c5f40f506b5b - - - - - -] Emitting event <LifecycleEvent: 1760004093.2632186, 0fde2924-0ac7-4ea2-b42d-290df3f52929 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  9 10:01:33 compute-1 nova_compute[162974]: 2025-10-09 10:01:33.334 2 INFO nova.compute.manager [None req-433ad811-a058-4508-a429-c5f40f506b5b - - - - - -] [instance: 0fde2924-0ac7-4ea2-b42d-290df3f52929] VM Resumed (Lifecycle Event)#033[00m
Oct  9 10:01:33 compute-1 nova_compute[162974]: 2025-10-09 10:01:33.348 2 INFO nova.compute.manager [None req-b7ccf91e-3edd-4bfb-8579-db3ccb0dc5a3 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: 0fde2924-0ac7-4ea2-b42d-290df3f52929] Took 4.91 seconds to spawn the instance on the hypervisor.#033[00m
Oct  9 10:01:33 compute-1 nova_compute[162974]: 2025-10-09 10:01:33.348 2 DEBUG nova.compute.manager [None req-b7ccf91e-3edd-4bfb-8579-db3ccb0dc5a3 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: 0fde2924-0ac7-4ea2-b42d-290df3f52929] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  9 10:01:33 compute-1 nova_compute[162974]: 2025-10-09 10:01:33.350 2 DEBUG nova.compute.manager [None req-433ad811-a058-4508-a429-c5f40f506b5b - - - - - -] [instance: 0fde2924-0ac7-4ea2-b42d-290df3f52929] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  9 10:01:33 compute-1 nova_compute[162974]: 2025-10-09 10:01:33.355 2 DEBUG nova.compute.manager [None req-433ad811-a058-4508-a429-c5f40f506b5b - - - - - -] [instance: 0fde2924-0ac7-4ea2-b42d-290df3f52929] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  9 10:01:33 compute-1 nova_compute[162974]: 2025-10-09 10:01:33.380 2 INFO nova.compute.manager [None req-433ad811-a058-4508-a429-c5f40f506b5b - - - - - -] [instance: 0fde2924-0ac7-4ea2-b42d-290df3f52929] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  9 10:01:33 compute-1 nova_compute[162974]: 2025-10-09 10:01:33.399 2 INFO nova.compute.manager [None req-b7ccf91e-3edd-4bfb-8579-db3ccb0dc5a3 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: 0fde2924-0ac7-4ea2-b42d-290df3f52929] Took 5.54 seconds to build instance.#033[00m
Oct  9 10:01:33 compute-1 nova_compute[162974]: 2025-10-09 10:01:33.408 2 DEBUG oslo_concurrency.lockutils [None req-b7ccf91e-3edd-4bfb-8579-db3ccb0dc5a3 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Lock "0fde2924-0ac7-4ea2-b42d-290df3f52929" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 5.586s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  9 10:01:33 compute-1 nova_compute[162974]: 2025-10-09 10:01:33.409 2 DEBUG oslo_concurrency.lockutils [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Lock "0fde2924-0ac7-4ea2-b42d-290df3f52929" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 4.632s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  9 10:01:33 compute-1 nova_compute[162974]: 2025-10-09 10:01:33.409 2 INFO nova.compute.manager [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] [instance: 0fde2924-0ac7-4ea2-b42d-290df3f52929] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  9 10:01:33 compute-1 nova_compute[162974]: 2025-10-09 10:01:33.409 2 DEBUG oslo_concurrency.lockutils [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Lock "0fde2924-0ac7-4ea2-b42d-290df3f52929" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  9 10:01:34 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:01:34 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:01:34 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:01:34.251 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:01:34 compute-1 nova_compute[162974]: 2025-10-09 10:01:34.873 2 DEBUG nova.compute.manager [req-046d4a83-530f-4709-906e-de4532f5e397 req-fd849488-872f-41e8-b434-2aa73a01ae6f b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: 0fde2924-0ac7-4ea2-b42d-290df3f52929] Received event network-vif-plugged-24c642bf-d3e7-4003-97f5-0e43aca6db7b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  9 10:01:34 compute-1 nova_compute[162974]: 2025-10-09 10:01:34.874 2 DEBUG oslo_concurrency.lockutils [req-046d4a83-530f-4709-906e-de4532f5e397 req-fd849488-872f-41e8-b434-2aa73a01ae6f b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] Acquiring lock "0fde2924-0ac7-4ea2-b42d-290df3f52929-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  9 10:01:34 compute-1 nova_compute[162974]: 2025-10-09 10:01:34.874 2 DEBUG oslo_concurrency.lockutils [req-046d4a83-530f-4709-906e-de4532f5e397 req-fd849488-872f-41e8-b434-2aa73a01ae6f b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] Lock "0fde2924-0ac7-4ea2-b42d-290df3f52929-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  9 10:01:34 compute-1 nova_compute[162974]: 2025-10-09 10:01:34.875 2 DEBUG oslo_concurrency.lockutils [req-046d4a83-530f-4709-906e-de4532f5e397 req-fd849488-872f-41e8-b434-2aa73a01ae6f b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] Lock "0fde2924-0ac7-4ea2-b42d-290df3f52929-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  9 10:01:34 compute-1 nova_compute[162974]: 2025-10-09 10:01:34.875 2 DEBUG nova.compute.manager [req-046d4a83-530f-4709-906e-de4532f5e397 req-fd849488-872f-41e8-b434-2aa73a01ae6f b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: 0fde2924-0ac7-4ea2-b42d-290df3f52929] No waiting events found dispatching network-vif-plugged-24c642bf-d3e7-4003-97f5-0e43aca6db7b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  9 10:01:34 compute-1 nova_compute[162974]: 2025-10-09 10:01:34.875 2 WARNING nova.compute.manager [req-046d4a83-530f-4709-906e-de4532f5e397 req-fd849488-872f-41e8-b434-2aa73a01ae6f b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: 0fde2924-0ac7-4ea2-b42d-290df3f52929] Received unexpected event network-vif-plugged-24c642bf-d3e7-4003-97f5-0e43aca6db7b for instance with vm_state active and task_state None.#033[00m
Oct  9 10:01:35 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:01:35 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:01:35 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:01:35.074 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:01:35 compute-1 nova_compute[162974]: 2025-10-09 10:01:35.375 2 DEBUG oslo_concurrency.lockutils [None req-99c5a93d-e427-465b-9201-ea0a4e6908de 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Acquiring lock "0fde2924-0ac7-4ea2-b42d-290df3f52929" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  9 10:01:35 compute-1 nova_compute[162974]: 2025-10-09 10:01:35.375 2 DEBUG oslo_concurrency.lockutils [None req-99c5a93d-e427-465b-9201-ea0a4e6908de 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Lock "0fde2924-0ac7-4ea2-b42d-290df3f52929" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  9 10:01:35 compute-1 nova_compute[162974]: 2025-10-09 10:01:35.376 2 DEBUG oslo_concurrency.lockutils [None req-99c5a93d-e427-465b-9201-ea0a4e6908de 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Acquiring lock "0fde2924-0ac7-4ea2-b42d-290df3f52929-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  9 10:01:35 compute-1 nova_compute[162974]: 2025-10-09 10:01:35.376 2 DEBUG oslo_concurrency.lockutils [None req-99c5a93d-e427-465b-9201-ea0a4e6908de 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Lock "0fde2924-0ac7-4ea2-b42d-290df3f52929-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  9 10:01:35 compute-1 nova_compute[162974]: 2025-10-09 10:01:35.377 2 DEBUG oslo_concurrency.lockutils [None req-99c5a93d-e427-465b-9201-ea0a4e6908de 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Lock "0fde2924-0ac7-4ea2-b42d-290df3f52929-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  9 10:01:35 compute-1 nova_compute[162974]: 2025-10-09 10:01:35.378 2 INFO nova.compute.manager [None req-99c5a93d-e427-465b-9201-ea0a4e6908de 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: 0fde2924-0ac7-4ea2-b42d-290df3f52929] Terminating instance#033[00m
Oct  9 10:01:35 compute-1 nova_compute[162974]: 2025-10-09 10:01:35.379 2 DEBUG nova.compute.manager [None req-99c5a93d-e427-465b-9201-ea0a4e6908de 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: 0fde2924-0ac7-4ea2-b42d-290df3f52929] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct  9 10:01:35 compute-1 kernel: tap24c642bf-d3 (unregistering): left promiscuous mode
Oct  9 10:01:35 compute-1 NetworkManager[982]: <info>  [1760004095.4019] device (tap24c642bf-d3): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  9 10:01:35 compute-1 ovn_controller[62080]: 2025-10-09T10:01:35Z|00081|binding|INFO|Releasing lport 24c642bf-d3e7-4003-97f5-0e43aca6db7b from this chassis (sb_readonly=0)
Oct  9 10:01:35 compute-1 ovn_controller[62080]: 2025-10-09T10:01:35Z|00082|binding|INFO|Setting lport 24c642bf-d3e7-4003-97f5-0e43aca6db7b down in Southbound
Oct  9 10:01:35 compute-1 ovn_controller[62080]: 2025-10-09T10:01:35Z|00083|binding|INFO|Removing iface tap24c642bf-d3 ovn-installed in OVS
Oct  9 10:01:35 compute-1 nova_compute[162974]: 2025-10-09 10:01:35.411 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:01:35 compute-1 ovn_metadata_agent[71054]: 2025-10-09 10:01:35.416 71059 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d9:5b:8d 10.100.0.5'], port_security=['fa:16:3e:d9:5b:8d 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-TestNetworkBasicOps-1238411040', 'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '0fde2924-0ac7-4ea2-b42d-290df3f52929', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f1bd1d23-0de7-4b9c-b34f-27d8df0f3147', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-TestNetworkBasicOps-1238411040', 'neutron:project_id': 'c69d102fb5504f48809f5fc47f1cb831', 'neutron:revision_number': '9', 'neutron:security_group_ids': '938aac20-7e1a-43e3-b950-0829bdd160e1', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.223', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=887b951a-388d-4a48-aabf-54a7b01d9585, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcc797b4850>], logical_port=24c642bf-d3e7-4003-97f5-0e43aca6db7b) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fcc797b4850>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  9 10:01:35 compute-1 ovn_metadata_agent[71054]: 2025-10-09 10:01:35.418 71059 INFO neutron.agent.ovn.metadata.agent [-] Port 24c642bf-d3e7-4003-97f5-0e43aca6db7b in datapath f1bd1d23-0de7-4b9c-b34f-27d8df0f3147 unbound from our chassis#033[00m
Oct  9 10:01:35 compute-1 ovn_metadata_agent[71054]: 2025-10-09 10:01:35.419 71059 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network f1bd1d23-0de7-4b9c-b34f-27d8df0f3147, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct  9 10:01:35 compute-1 ovn_metadata_agent[71054]: 2025-10-09 10:01:35.420 165637 DEBUG oslo.privsep.daemon [-] privsep: reply[aa82bac7-3dcf-45a1-9819-e79b14206069]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  9 10:01:35 compute-1 ovn_metadata_agent[71054]: 2025-10-09 10:01:35.420 71059 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-f1bd1d23-0de7-4b9c-b34f-27d8df0f3147 namespace which is not needed anymore#033[00m
Oct  9 10:01:35 compute-1 nova_compute[162974]: 2025-10-09 10:01:35.430 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:01:35 compute-1 systemd[1]: machine-qemu\x2d5\x2dinstance\x2d00000009.scope: Deactivated successfully.
Oct  9 10:01:35 compute-1 systemd[1]: machine-qemu\x2d5\x2dinstance\x2d00000009.scope: Consumed 2.712s CPU time.
Oct  9 10:01:35 compute-1 systemd-machined[120683]: Machine qemu-5-instance-00000009 terminated.
Oct  9 10:01:35 compute-1 neutron-haproxy-ovnmeta-f1bd1d23-0de7-4b9c-b34f-27d8df0f3147[170861]: [NOTICE]   (170865) : haproxy version is 2.8.14-c23fe91
Oct  9 10:01:35 compute-1 neutron-haproxy-ovnmeta-f1bd1d23-0de7-4b9c-b34f-27d8df0f3147[170861]: [NOTICE]   (170865) : path to executable is /usr/sbin/haproxy
Oct  9 10:01:35 compute-1 neutron-haproxy-ovnmeta-f1bd1d23-0de7-4b9c-b34f-27d8df0f3147[170861]: [WARNING]  (170865) : Exiting Master process...
Oct  9 10:01:35 compute-1 neutron-haproxy-ovnmeta-f1bd1d23-0de7-4b9c-b34f-27d8df0f3147[170861]: [ALERT]    (170865) : Current worker (170867) exited with code 143 (Terminated)
Oct  9 10:01:35 compute-1 neutron-haproxy-ovnmeta-f1bd1d23-0de7-4b9c-b34f-27d8df0f3147[170861]: [WARNING]  (170865) : All workers exited. Exiting... (0)
Oct  9 10:01:35 compute-1 systemd[1]: libpod-0e7b3cff2d349dc218fd668816b472146bb100b2580a49e6b7e4a0cf640659b4.scope: Deactivated successfully.
Oct  9 10:01:35 compute-1 conmon[170861]: conmon 0e7b3cff2d349dc218fd <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-0e7b3cff2d349dc218fd668816b472146bb100b2580a49e6b7e4a0cf640659b4.scope/container/memory.events
Oct  9 10:01:35 compute-1 podman[170892]: 2025-10-09 10:01:35.51893905 +0000 UTC m=+0.033180938 container died 0e7b3cff2d349dc218fd668816b472146bb100b2580a49e6b7e4a0cf640659b4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-f1bd1d23-0de7-4b9c-b34f-27d8df0f3147, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251001)
Oct  9 10:01:35 compute-1 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-0e7b3cff2d349dc218fd668816b472146bb100b2580a49e6b7e4a0cf640659b4-userdata-shm.mount: Deactivated successfully.
Oct  9 10:01:35 compute-1 systemd[1]: var-lib-containers-storage-overlay-7eaf819a371be319bcec251902a55501d9807b84351499abfedbd74b4f82185b-merged.mount: Deactivated successfully.
Oct  9 10:01:35 compute-1 podman[170892]: 2025-10-09 10:01:35.542259165 +0000 UTC m=+0.056501054 container cleanup 0e7b3cff2d349dc218fd668816b472146bb100b2580a49e6b7e4a0cf640659b4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-f1bd1d23-0de7-4b9c-b34f-27d8df0f3147, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2)
Oct  9 10:01:35 compute-1 systemd[1]: libpod-conmon-0e7b3cff2d349dc218fd668816b472146bb100b2580a49e6b7e4a0cf640659b4.scope: Deactivated successfully.
Oct  9 10:01:35 compute-1 podman[170915]: 2025-10-09 10:01:35.585508697 +0000 UTC m=+0.027619949 container remove 0e7b3cff2d349dc218fd668816b472146bb100b2580a49e6b7e4a0cf640659b4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-f1bd1d23-0de7-4b9c-b34f-27d8df0f3147, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct  9 10:01:35 compute-1 NetworkManager[982]: <info>  [1760004095.5919] manager: (tap24c642bf-d3): new Tun device (/org/freedesktop/NetworkManager/Devices/60)
Oct  9 10:01:35 compute-1 ovn_metadata_agent[71054]: 2025-10-09 10:01:35.591 165637 DEBUG oslo.privsep.daemon [-] privsep: reply[38d0aa70-e5ae-4c2e-9cf4-e1c77239d333]: (4, ('Thu Oct  9 10:01:35 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-f1bd1d23-0de7-4b9c-b34f-27d8df0f3147 (0e7b3cff2d349dc218fd668816b472146bb100b2580a49e6b7e4a0cf640659b4)\n0e7b3cff2d349dc218fd668816b472146bb100b2580a49e6b7e4a0cf640659b4\nThu Oct  9 10:01:35 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-f1bd1d23-0de7-4b9c-b34f-27d8df0f3147 (0e7b3cff2d349dc218fd668816b472146bb100b2580a49e6b7e4a0cf640659b4)\n0e7b3cff2d349dc218fd668816b472146bb100b2580a49e6b7e4a0cf640659b4\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  9 10:01:35 compute-1 ovn_metadata_agent[71054]: 2025-10-09 10:01:35.594 165637 DEBUG oslo.privsep.daemon [-] privsep: reply[744e3eed-81a9-4886-b80f-cab12517cf89]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  9 10:01:35 compute-1 nova_compute[162974]: 2025-10-09 10:01:35.596 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:01:35 compute-1 ovn_metadata_agent[71054]: 2025-10-09 10:01:35.596 71059 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf1bd1d23-00, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  9 10:01:35 compute-1 nova_compute[162974]: 2025-10-09 10:01:35.598 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:01:35 compute-1 nova_compute[162974]: 2025-10-09 10:01:35.602 2 INFO nova.virt.libvirt.driver [-] [instance: 0fde2924-0ac7-4ea2-b42d-290df3f52929] Instance destroyed successfully.#033[00m
Oct  9 10:01:35 compute-1 nova_compute[162974]: 2025-10-09 10:01:35.602 2 DEBUG nova.objects.instance [None req-99c5a93d-e427-465b-9201-ea0a4e6908de 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Lazy-loading 'resources' on Instance uuid 0fde2924-0ac7-4ea2-b42d-290df3f52929 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  9 10:01:35 compute-1 nova_compute[162974]: 2025-10-09 10:01:35.611 2 DEBUG nova.virt.libvirt.vif [None req-99c5a93d-e427-465b-9201-ea0a4e6908de 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-09T10:01:27Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-683406071',display_name='tempest-TestNetworkBasicOps-server-683406071',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-683406071',id=9,image_ref='9546778e-959c-466e-9bef-81ace5bd1cc5',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBPNC5+71zwS4peThbBj0rTs2iUGxV6KoykdELOAeuqqTcHI7GCX2cJpli9Fly77fC2uQduSSC/CbKmPPAuRDVwt9Ei0C4MDfiTMQHdYKRTolBvlRviK/zoaSsqEMl47FRQ==',key_name='tempest-TestNetworkBasicOps-614246910',keypairs=<?>,launch_index=0,launched_at=2025-10-09T10:01:33Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='c69d102fb5504f48809f5fc47f1cb831',ramdisk_id='',reservation_id='r-8z022t5g',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='9546778e-959c-466e-9bef-81ace5bd1cc5',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-74406332',owner_user_name='tempest-TestNetworkBasicOps-74406332-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-09T10:01:33Z,user_data=None,user_id='2351e05157514d1995a1ea4151d12fee',uuid=0fde2924-0ac7-4ea2-b42d-290df3f52929,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "24c642bf-d3e7-4003-97f5-0e43aca6db7b", "address": "fa:16:3e:d9:5b:8d", "network": {"id": "f1bd1d23-0de7-4b9c-b34f-27d8df0f3147", "bridge": "br-int", "label": "tempest-network-smoke--147591991", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 
4, "meta": {}, "floating_ips": [{"address": "192.168.122.223", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c69d102fb5504f48809f5fc47f1cb831", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap24c642bf-d3", "ovs_interfaceid": "24c642bf-d3e7-4003-97f5-0e43aca6db7b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  9 10:01:35 compute-1 nova_compute[162974]: 2025-10-09 10:01:35.611 2 DEBUG nova.network.os_vif_util [None req-99c5a93d-e427-465b-9201-ea0a4e6908de 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Converting VIF {"id": "24c642bf-d3e7-4003-97f5-0e43aca6db7b", "address": "fa:16:3e:d9:5b:8d", "network": {"id": "f1bd1d23-0de7-4b9c-b34f-27d8df0f3147", "bridge": "br-int", "label": "tempest-network-smoke--147591991", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.223", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c69d102fb5504f48809f5fc47f1cb831", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap24c642bf-d3", "ovs_interfaceid": "24c642bf-d3e7-4003-97f5-0e43aca6db7b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  9 10:01:35 compute-1 nova_compute[162974]: 2025-10-09 10:01:35.612 2 DEBUG nova.network.os_vif_util [None req-99c5a93d-e427-465b-9201-ea0a4e6908de 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d9:5b:8d,bridge_name='br-int',has_traffic_filtering=True,id=24c642bf-d3e7-4003-97f5-0e43aca6db7b,network=Network(f1bd1d23-0de7-4b9c-b34f-27d8df0f3147),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap24c642bf-d3') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  9 10:01:35 compute-1 nova_compute[162974]: 2025-10-09 10:01:35.612 2 DEBUG os_vif [None req-99c5a93d-e427-465b-9201-ea0a4e6908de 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:d9:5b:8d,bridge_name='br-int',has_traffic_filtering=True,id=24c642bf-d3e7-4003-97f5-0e43aca6db7b,network=Network(f1bd1d23-0de7-4b9c-b34f-27d8df0f3147),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap24c642bf-d3') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  9 10:01:35 compute-1 nova_compute[162974]: 2025-10-09 10:01:35.614 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:01:35 compute-1 nova_compute[162974]: 2025-10-09 10:01:35.614 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap24c642bf-d3, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  9 10:01:35 compute-1 nova_compute[162974]: 2025-10-09 10:01:35.615 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:01:35 compute-1 kernel: tapf1bd1d23-00: left promiscuous mode
Oct  9 10:01:35 compute-1 nova_compute[162974]: 2025-10-09 10:01:35.618 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  9 10:01:35 compute-1 nova_compute[162974]: 2025-10-09 10:01:35.618 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:01:35 compute-1 nova_compute[162974]: 2025-10-09 10:01:35.620 2 INFO os_vif [None req-99c5a93d-e427-465b-9201-ea0a4e6908de 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:d9:5b:8d,bridge_name='br-int',has_traffic_filtering=True,id=24c642bf-d3e7-4003-97f5-0e43aca6db7b,network=Network(f1bd1d23-0de7-4b9c-b34f-27d8df0f3147),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap24c642bf-d3')#033[00m
Oct  9 10:01:35 compute-1 ovn_metadata_agent[71054]: 2025-10-09 10:01:35.621 165637 DEBUG oslo.privsep.daemon [-] privsep: reply[c50eb19e-7af9-487d-ba1f-53dcbd363b15]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  9 10:01:35 compute-1 ovn_metadata_agent[71054]: 2025-10-09 10:01:35.633 165637 DEBUG oslo.privsep.daemon [-] privsep: reply[3bf0c163-af58-4e7e-bc60-288ce20d8b02]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  9 10:01:35 compute-1 ovn_metadata_agent[71054]: 2025-10-09 10:01:35.634 165637 DEBUG oslo.privsep.daemon [-] privsep: reply[72b8c58c-2fb3-4701-8f44-d2238dfe7d41]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  9 10:01:35 compute-1 ovn_metadata_agent[71054]: 2025-10-09 10:01:35.648 165637 DEBUG oslo.privsep.daemon [-] privsep: reply[f48e390e-1bc0-497e-81bd-a7502e5e9e9e]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 177620, 'reachable_time': 25234, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 170952, 'error': None, 'target': 'ovnmeta-f1bd1d23-0de7-4b9c-b34f-27d8df0f3147', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  9 10:01:35 compute-1 systemd[1]: run-netns-ovnmeta\x2df1bd1d23\x2d0de7\x2d4b9c\x2db34f\x2d27d8df0f3147.mount: Deactivated successfully.
Oct  9 10:01:35 compute-1 ovn_metadata_agent[71054]: 2025-10-09 10:01:35.652 71273 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-f1bd1d23-0de7-4b9c-b34f-27d8df0f3147 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct  9 10:01:35 compute-1 ovn_metadata_agent[71054]: 2025-10-09 10:01:35.652 71273 DEBUG oslo.privsep.daemon [-] privsep: reply[5ff0bf4a-acbf-4d42-aad2-c60b1d84513b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  9 10:01:35 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 10:01:35 compute-1 nova_compute[162974]: 2025-10-09 10:01:35.789 2 INFO nova.virt.libvirt.driver [None req-99c5a93d-e427-465b-9201-ea0a4e6908de 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: 0fde2924-0ac7-4ea2-b42d-290df3f52929] Deleting instance files /var/lib/nova/instances/0fde2924-0ac7-4ea2-b42d-290df3f52929_del#033[00m
Oct  9 10:01:35 compute-1 nova_compute[162974]: 2025-10-09 10:01:35.789 2 INFO nova.virt.libvirt.driver [None req-99c5a93d-e427-465b-9201-ea0a4e6908de 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: 0fde2924-0ac7-4ea2-b42d-290df3f52929] Deletion of /var/lib/nova/instances/0fde2924-0ac7-4ea2-b42d-290df3f52929_del complete#033[00m
Oct  9 10:01:35 compute-1 nova_compute[162974]: 2025-10-09 10:01:35.823 2 INFO nova.compute.manager [None req-99c5a93d-e427-465b-9201-ea0a4e6908de 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: 0fde2924-0ac7-4ea2-b42d-290df3f52929] Took 0.44 seconds to destroy the instance on the hypervisor.#033[00m
Oct  9 10:01:35 compute-1 nova_compute[162974]: 2025-10-09 10:01:35.823 2 DEBUG oslo.service.loopingcall [None req-99c5a93d-e427-465b-9201-ea0a4e6908de 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct  9 10:01:35 compute-1 nova_compute[162974]: 2025-10-09 10:01:35.823 2 DEBUG nova.compute.manager [-] [instance: 0fde2924-0ac7-4ea2-b42d-290df3f52929] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct  9 10:01:35 compute-1 nova_compute[162974]: 2025-10-09 10:01:35.823 2 DEBUG nova.network.neutron [-] [instance: 0fde2924-0ac7-4ea2-b42d-290df3f52929] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct  9 10:01:36 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:01:36 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:01:36 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:01:36.253 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:01:36 compute-1 nova_compute[162974]: 2025-10-09 10:01:36.628 2 DEBUG nova.network.neutron [-] [instance: 0fde2924-0ac7-4ea2-b42d-290df3f52929] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  9 10:01:36 compute-1 nova_compute[162974]: 2025-10-09 10:01:36.639 2 INFO nova.compute.manager [-] [instance: 0fde2924-0ac7-4ea2-b42d-290df3f52929] Took 0.82 seconds to deallocate network for instance.#033[00m
Oct  9 10:01:36 compute-1 nova_compute[162974]: 2025-10-09 10:01:36.666 2 DEBUG oslo_concurrency.lockutils [None req-99c5a93d-e427-465b-9201-ea0a4e6908de 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  9 10:01:36 compute-1 nova_compute[162974]: 2025-10-09 10:01:36.666 2 DEBUG oslo_concurrency.lockutils [None req-99c5a93d-e427-465b-9201-ea0a4e6908de 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  9 10:01:36 compute-1 nova_compute[162974]: 2025-10-09 10:01:36.702 2 DEBUG oslo_concurrency.processutils [None req-99c5a93d-e427-465b-9201-ea0a4e6908de 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  9 10:01:36 compute-1 nova_compute[162974]: 2025-10-09 10:01:36.930 2 DEBUG nova.compute.manager [req-2d7a0aa8-20ba-4d3c-88da-314bdcbea33e req-fe09cd1f-de20-4c15-bc20-87b261db77b6 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: 0fde2924-0ac7-4ea2-b42d-290df3f52929] Received event network-vif-unplugged-24c642bf-d3e7-4003-97f5-0e43aca6db7b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  9 10:01:36 compute-1 nova_compute[162974]: 2025-10-09 10:01:36.931 2 DEBUG oslo_concurrency.lockutils [req-2d7a0aa8-20ba-4d3c-88da-314bdcbea33e req-fe09cd1f-de20-4c15-bc20-87b261db77b6 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] Acquiring lock "0fde2924-0ac7-4ea2-b42d-290df3f52929-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  9 10:01:36 compute-1 nova_compute[162974]: 2025-10-09 10:01:36.931 2 DEBUG oslo_concurrency.lockutils [req-2d7a0aa8-20ba-4d3c-88da-314bdcbea33e req-fe09cd1f-de20-4c15-bc20-87b261db77b6 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] Lock "0fde2924-0ac7-4ea2-b42d-290df3f52929-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  9 10:01:36 compute-1 nova_compute[162974]: 2025-10-09 10:01:36.932 2 DEBUG oslo_concurrency.lockutils [req-2d7a0aa8-20ba-4d3c-88da-314bdcbea33e req-fe09cd1f-de20-4c15-bc20-87b261db77b6 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] Lock "0fde2924-0ac7-4ea2-b42d-290df3f52929-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  9 10:01:36 compute-1 nova_compute[162974]: 2025-10-09 10:01:36.932 2 DEBUG nova.compute.manager [req-2d7a0aa8-20ba-4d3c-88da-314bdcbea33e req-fe09cd1f-de20-4c15-bc20-87b261db77b6 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: 0fde2924-0ac7-4ea2-b42d-290df3f52929] No waiting events found dispatching network-vif-unplugged-24c642bf-d3e7-4003-97f5-0e43aca6db7b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  9 10:01:36 compute-1 nova_compute[162974]: 2025-10-09 10:01:36.932 2 WARNING nova.compute.manager [req-2d7a0aa8-20ba-4d3c-88da-314bdcbea33e req-fe09cd1f-de20-4c15-bc20-87b261db77b6 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: 0fde2924-0ac7-4ea2-b42d-290df3f52929] Received unexpected event network-vif-unplugged-24c642bf-d3e7-4003-97f5-0e43aca6db7b for instance with vm_state deleted and task_state None.#033[00m
Oct  9 10:01:36 compute-1 nova_compute[162974]: 2025-10-09 10:01:36.932 2 DEBUG nova.compute.manager [req-2d7a0aa8-20ba-4d3c-88da-314bdcbea33e req-fe09cd1f-de20-4c15-bc20-87b261db77b6 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: 0fde2924-0ac7-4ea2-b42d-290df3f52929] Received event network-vif-plugged-24c642bf-d3e7-4003-97f5-0e43aca6db7b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  9 10:01:36 compute-1 nova_compute[162974]: 2025-10-09 10:01:36.932 2 DEBUG oslo_concurrency.lockutils [req-2d7a0aa8-20ba-4d3c-88da-314bdcbea33e req-fe09cd1f-de20-4c15-bc20-87b261db77b6 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] Acquiring lock "0fde2924-0ac7-4ea2-b42d-290df3f52929-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  9 10:01:36 compute-1 nova_compute[162974]: 2025-10-09 10:01:36.933 2 DEBUG oslo_concurrency.lockutils [req-2d7a0aa8-20ba-4d3c-88da-314bdcbea33e req-fe09cd1f-de20-4c15-bc20-87b261db77b6 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] Lock "0fde2924-0ac7-4ea2-b42d-290df3f52929-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  9 10:01:36 compute-1 nova_compute[162974]: 2025-10-09 10:01:36.933 2 DEBUG oslo_concurrency.lockutils [req-2d7a0aa8-20ba-4d3c-88da-314bdcbea33e req-fe09cd1f-de20-4c15-bc20-87b261db77b6 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] Lock "0fde2924-0ac7-4ea2-b42d-290df3f52929-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  9 10:01:36 compute-1 nova_compute[162974]: 2025-10-09 10:01:36.933 2 DEBUG nova.compute.manager [req-2d7a0aa8-20ba-4d3c-88da-314bdcbea33e req-fe09cd1f-de20-4c15-bc20-87b261db77b6 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: 0fde2924-0ac7-4ea2-b42d-290df3f52929] No waiting events found dispatching network-vif-plugged-24c642bf-d3e7-4003-97f5-0e43aca6db7b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  9 10:01:36 compute-1 nova_compute[162974]: 2025-10-09 10:01:36.933 2 WARNING nova.compute.manager [req-2d7a0aa8-20ba-4d3c-88da-314bdcbea33e req-fe09cd1f-de20-4c15-bc20-87b261db77b6 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: 0fde2924-0ac7-4ea2-b42d-290df3f52929] Received unexpected event network-vif-plugged-24c642bf-d3e7-4003-97f5-0e43aca6db7b for instance with vm_state deleted and task_state None.#033[00m
Oct  9 10:01:37 compute-1 ceph-mon[9795]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct  9 10:01:37 compute-1 ceph-mon[9795]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3366232617' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  9 10:01:37 compute-1 nova_compute[162974]: 2025-10-09 10:01:37.037 2 DEBUG oslo_concurrency.processutils [None req-99c5a93d-e427-465b-9201-ea0a4e6908de 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.336s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  9 10:01:37 compute-1 nova_compute[162974]: 2025-10-09 10:01:37.041 2 DEBUG nova.compute.provider_tree [None req-99c5a93d-e427-465b-9201-ea0a4e6908de 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Inventory has not changed in ProviderTree for provider: 79aa81b0-5a5d-4643-a355-ec5461cb321a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  9 10:01:37 compute-1 nova_compute[162974]: 2025-10-09 10:01:37.053 2 DEBUG nova.scheduler.client.report [None req-99c5a93d-e427-465b-9201-ea0a4e6908de 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Inventory has not changed for provider 79aa81b0-5a5d-4643-a355-ec5461cb321a based on inventory data: {'VCPU': {'total': 4, 'reserved': 0, 'min_unit': 1, 'max_unit': 4, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  9 10:01:37 compute-1 nova_compute[162974]: 2025-10-09 10:01:37.065 2 DEBUG oslo_concurrency.lockutils [None req-99c5a93d-e427-465b-9201-ea0a4e6908de 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.399s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  9 10:01:37 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:01:37 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 10:01:37 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:01:37.075 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 10:01:37 compute-1 nova_compute[162974]: 2025-10-09 10:01:37.089 2 INFO nova.scheduler.client.report [None req-99c5a93d-e427-465b-9201-ea0a4e6908de 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Deleted allocations for instance 0fde2924-0ac7-4ea2-b42d-290df3f52929#033[00m
Oct  9 10:01:37 compute-1 nova_compute[162974]: 2025-10-09 10:01:37.155 2 DEBUG oslo_concurrency.lockutils [None req-99c5a93d-e427-465b-9201-ea0a4e6908de 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Lock "0fde2924-0ac7-4ea2-b42d-290df3f52929" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 1.779s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  9 10:01:37 compute-1 nova_compute[162974]: 2025-10-09 10:01:37.771 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:01:38 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:01:38 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:01:38 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:01:38.255 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:01:39 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:01:39 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:01:39 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:01:39.078 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:01:40 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:01:40 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000009s ======
Oct  9 10:01:40 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:01:40.257 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Oct  9 10:01:40 compute-1 nova_compute[162974]: 2025-10-09 10:01:40.383 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:01:40 compute-1 nova_compute[162974]: 2025-10-09 10:01:40.467 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:01:40 compute-1 nova_compute[162974]: 2025-10-09 10:01:40.616 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:01:40 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 10:01:41 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:01:41 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:01:41 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:01:41.080 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:01:42 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:01:42 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000009s ======
Oct  9 10:01:42 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:01:42.259 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Oct  9 10:01:42 compute-1 nova_compute[162974]: 2025-10-09 10:01:42.772 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:01:43 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:01:43 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:01:43 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:01:43.083 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:01:44 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:01:44 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:01:44 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:01:44.261 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:01:44 compute-1 podman[170988]: 2025-10-09 10:01:44.546646903 +0000 UTC m=+0.056525139 container health_status 36bf385830b85e085dc3ff9729118ef141f0f1a0fdc2513f7947ab6ab795421e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_controller, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_managed=true)
Oct  9 10:01:45 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:01:45 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:01:45 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:01:45.085 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:01:45 compute-1 nova_compute[162974]: 2025-10-09 10:01:45.617 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:01:45 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 10:01:46 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:01:46 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:01:46 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:01:46.263 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:01:47 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:01:47 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:01:47 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:01:47.087 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:01:47 compute-1 nova_compute[162974]: 2025-10-09 10:01:47.773 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:01:47 compute-1 ovn_metadata_agent[71054]: 2025-10-09 10:01:47.877 71059 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=9, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '86:53:6e', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '26:2f:47:35:f4:09'}, ipsec=False) old=SB_Global(nb_cfg=8) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  9 10:01:47 compute-1 nova_compute[162974]: 2025-10-09 10:01:47.878 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:01:47 compute-1 ovn_metadata_agent[71054]: 2025-10-09 10:01:47.878 71059 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 4 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct  9 10:01:48 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:01:48 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:01:48 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:01:48.265 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:01:49 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:01:49 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:01:49 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:01:49.088 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:01:50 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:01:50 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:01:50 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:01:50.268 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:01:50 compute-1 nova_compute[162974]: 2025-10-09 10:01:50.601 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760004095.600311, 0fde2924-0ac7-4ea2-b42d-290df3f52929 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  9 10:01:50 compute-1 nova_compute[162974]: 2025-10-09 10:01:50.601 2 INFO nova.compute.manager [-] [instance: 0fde2924-0ac7-4ea2-b42d-290df3f52929] VM Stopped (Lifecycle Event)#033[00m
Oct  9 10:01:50 compute-1 nova_compute[162974]: 2025-10-09 10:01:50.618 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:01:50 compute-1 nova_compute[162974]: 2025-10-09 10:01:50.621 2 DEBUG nova.compute.manager [None req-f19ae35d-1ca8-4c6c-9801-9a9826d3d207 - - - - - -] [instance: 0fde2924-0ac7-4ea2-b42d-290df3f52929] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  9 10:01:50 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 10:01:51 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:01:51 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:01:51 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:01:51.091 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:01:51 compute-1 ovn_metadata_agent[71054]: 2025-10-09 10:01:51.880 71059 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=1479fb1d-afaa-427a-bdce-40294d3573d2, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '9'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  9 10:01:52 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:01:52 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000009s ======
Oct  9 10:01:52 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:01:52.270 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Oct  9 10:01:52 compute-1 nova_compute[162974]: 2025-10-09 10:01:52.775 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:01:53 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:01:53 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:01:53 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:01:53.093 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:01:53 compute-1 nova_compute[162974]: 2025-10-09 10:01:53.113 2 DEBUG oslo_service.periodic_task [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  9 10:01:53 compute-1 nova_compute[162974]: 2025-10-09 10:01:53.132 2 DEBUG oslo_concurrency.lockutils [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  9 10:01:53 compute-1 nova_compute[162974]: 2025-10-09 10:01:53.132 2 DEBUG oslo_concurrency.lockutils [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  9 10:01:53 compute-1 nova_compute[162974]: 2025-10-09 10:01:53.133 2 DEBUG oslo_concurrency.lockutils [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  9 10:01:53 compute-1 nova_compute[162974]: 2025-10-09 10:01:53.133 2 DEBUG nova.compute.resource_tracker [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  9 10:01:53 compute-1 nova_compute[162974]: 2025-10-09 10:01:53.133 2 DEBUG oslo_concurrency.processutils [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  9 10:01:53 compute-1 ceph-mon[9795]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct  9 10:01:53 compute-1 ceph-mon[9795]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/865184998' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  9 10:01:53 compute-1 nova_compute[162974]: 2025-10-09 10:01:53.484 2 DEBUG oslo_concurrency.processutils [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.351s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  9 10:01:53 compute-1 nova_compute[162974]: 2025-10-09 10:01:53.687 2 WARNING nova.virt.libvirt.driver [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  9 10:01:53 compute-1 nova_compute[162974]: 2025-10-09 10:01:53.688 2 DEBUG nova.compute.resource_tracker [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=5027MB free_disk=59.988277435302734GB free_vcpus=4 pci_devices=[{"dev_id": "pci_0000_00_03_3", "address": "0000:00:03.3", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_0", "address": "0000:00:1f.0", "product_id": "2918", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2918", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_4", "address": "0000:00:03.4", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_7", "address": "0000:00:03.7", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_3", "address": "0000:00:02.3", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_7", "address": "0000:00:02.7", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_05_00_0", "address": "0000:05:00.0", "product_id": "1045", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1045", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_2", "address": "0000:00:03.2", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_6", "address": "0000:00:02.6", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": 
"label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_3", "address": "0000:00:1f.3", "product_id": "2930", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2930", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_2", "address": "0000:00:1f.2", "product_id": "2922", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2922", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_03_00_0", "address": "0000:03:00.0", "product_id": "1041", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1041", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_2", "address": "0000:00:02.2", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_5", "address": "0000:00:02.5", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_04_00_0", "address": "0000:04:00.0", "product_id": "1042", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1042", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_5", "address": "0000:00:03.5", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_07_00_0", "address": "0000:07:00.0", "product_id": "1041", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1041", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_1", "address": "0000:00:03.1", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": 
"0000:00:00.0", "product_id": "29c0", "vendor_id": "8086", "numa_node": null, "label": "label_8086_29c0", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_06_00_0", "address": "0000:06:00.0", "product_id": "1044", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1044", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_02_01_0", "address": "0000:02:01.0", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_6", "address": "0000:00:03.6", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_1", "address": "0000:00:02.1", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_4", "address": "0000:00:02.4", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_01_00_0", "address": "0000:01:00.0", "product_id": "000e", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000e", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  9 10:01:53 compute-1 nova_compute[162974]: 2025-10-09 10:01:53.689 2 DEBUG oslo_concurrency.lockutils [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  9 10:01:53 compute-1 nova_compute[162974]: 2025-10-09 10:01:53.690 2 DEBUG oslo_concurrency.lockutils [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  9 10:01:53 compute-1 nova_compute[162974]: 2025-10-09 10:01:53.770 2 DEBUG nova.compute.resource_tracker [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Total usable vcpus: 4, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  9 10:01:53 compute-1 nova_compute[162974]: 2025-10-09 10:01:53.770 2 DEBUG nova.compute.resource_tracker [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=4 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  9 10:01:53 compute-1 nova_compute[162974]: 2025-10-09 10:01:53.782 2 DEBUG oslo_concurrency.processutils [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  9 10:01:54 compute-1 ceph-mon[9795]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct  9 10:01:54 compute-1 ceph-mon[9795]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1811252629' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  9 10:01:54 compute-1 nova_compute[162974]: 2025-10-09 10:01:54.124 2 DEBUG oslo_concurrency.processutils [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.341s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  9 10:01:54 compute-1 nova_compute[162974]: 2025-10-09 10:01:54.127 2 DEBUG nova.compute.provider_tree [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Inventory has not changed in ProviderTree for provider: 79aa81b0-5a5d-4643-a355-ec5461cb321a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  9 10:01:54 compute-1 nova_compute[162974]: 2025-10-09 10:01:54.136 2 DEBUG nova.scheduler.client.report [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Inventory has not changed for provider 79aa81b0-5a5d-4643-a355-ec5461cb321a based on inventory data: {'VCPU': {'total': 4, 'reserved': 0, 'min_unit': 1, 'max_unit': 4, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  9 10:01:54 compute-1 nova_compute[162974]: 2025-10-09 10:01:54.152 2 DEBUG nova.compute.resource_tracker [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  9 10:01:54 compute-1 nova_compute[162974]: 2025-10-09 10:01:54.152 2 DEBUG oslo_concurrency.lockutils [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.463s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  9 10:01:54 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:01:54 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:01:54 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:01:54.273 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:01:54 compute-1 podman[171087]: 2025-10-09 10:01:54.534267803 +0000 UTC m=+0.044764097 container health_status dd7a5da700b8ba72ceba21d69aecf00384a339e317089fce3f8212a1183599aa (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, config_id=iscsid, container_name=iscsid, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct  9 10:01:55 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:01:55 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:01:55 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:01:55.095 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:01:55 compute-1 nova_compute[162974]: 2025-10-09 10:01:55.148 2 DEBUG oslo_service.periodic_task [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  9 10:01:55 compute-1 nova_compute[162974]: 2025-10-09 10:01:55.149 2 DEBUG oslo_service.periodic_task [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  9 10:01:55 compute-1 nova_compute[162974]: 2025-10-09 10:01:55.149 2 DEBUG oslo_service.periodic_task [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  9 10:01:55 compute-1 nova_compute[162974]: 2025-10-09 10:01:55.149 2 DEBUG oslo_service.periodic_task [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  9 10:01:55 compute-1 nova_compute[162974]: 2025-10-09 10:01:55.149 2 DEBUG oslo_service.periodic_task [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  9 10:01:55 compute-1 nova_compute[162974]: 2025-10-09 10:01:55.150 2 DEBUG oslo_service.periodic_task [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  9 10:01:55 compute-1 nova_compute[162974]: 2025-10-09 10:01:55.150 2 DEBUG nova.compute.manager [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  9 10:01:55 compute-1 nova_compute[162974]: 2025-10-09 10:01:55.619 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:01:55 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 10:01:56 compute-1 nova_compute[162974]: 2025-10-09 10:01:56.115 2 DEBUG oslo_service.periodic_task [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  9 10:01:56 compute-1 nova_compute[162974]: 2025-10-09 10:01:56.115 2 DEBUG nova.compute.manager [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  9 10:01:56 compute-1 nova_compute[162974]: 2025-10-09 10:01:56.115 2 DEBUG nova.compute.manager [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct  9 10:01:56 compute-1 nova_compute[162974]: 2025-10-09 10:01:56.126 2 DEBUG nova.compute.manager [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Oct  9 10:01:56 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:01:56 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:01:56 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:01:56.275 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:01:57 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:01:57 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:01:57 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:01:57.098 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:01:57 compute-1 nova_compute[162974]: 2025-10-09 10:01:57.114 2 DEBUG oslo_service.periodic_task [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  9 10:01:57 compute-1 nova_compute[162974]: 2025-10-09 10:01:57.779 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:01:58 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:01:58 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:01:58 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:01:58.278 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:01:59 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:01:59 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000009s ======
Oct  9 10:01:59 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:01:59.100 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Oct  9 10:02:00 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:02:00 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:02:00 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:02:00.280 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:02:00 compute-1 nova_compute[162974]: 2025-10-09 10:02:00.621 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:02:00 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 10:02:01 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:02:01 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 10:02:01 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:02:01.102 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 10:02:02 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:02:02 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:02:02 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:02:02.283 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:02:02 compute-1 nova_compute[162974]: 2025-10-09 10:02:02.781 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:02:03 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:02:03 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 10:02:03 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:02:03.105 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 10:02:03 compute-1 podman[171109]: 2025-10-09 10:02:03.54098963 +0000 UTC m=+0.045068912 container health_status 5e1150c93314d5b38815fc8130e03b03cc91771cd442ce724d63cd33a7373f75 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  9 10:02:03 compute-1 podman[171110]: 2025-10-09 10:02:03.569853542 +0000 UTC m=+0.070667218 container health_status a9976f6a2312783f72012e1ec9b07f14297aa62297711f47c0d257996be3427a (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=multipathd)
Oct  9 10:02:04 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:02:04 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:02:04 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:02:04.286 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:02:05 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:02:05 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:02:05 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:02:05.109 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:02:05 compute-1 nova_compute[162974]: 2025-10-09 10:02:05.622 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:02:05 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 10:02:06 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:02:06 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:02:06 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:02:06.289 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:02:07 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:02:07 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:02:07 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:02:07.111 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:02:07 compute-1 nova_compute[162974]: 2025-10-09 10:02:07.784 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:02:08 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:02:08 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:02:08 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:02:08.291 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:02:09 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:02:09 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 10:02:09 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:02:09.114 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 10:02:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 10:02:10.040 71059 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  9 10:02:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 10:02:10.041 71059 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  9 10:02:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 10:02:10.041 71059 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  9 10:02:10 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:02:10 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:02:10 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:02:10.293 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:02:10 compute-1 nova_compute[162974]: 2025-10-09 10:02:10.623 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:02:10 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 10:02:11 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:02:11 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:02:11 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:02:11.117 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:02:12 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:02:12 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:02:12 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:02:12.296 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:02:12 compute-1 nova_compute[162974]: 2025-10-09 10:02:12.788 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:02:13 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:02:13 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:02:13 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:02:13.119 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:02:14 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:02:14 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000009s ======
Oct  9 10:02:14 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:02:14.298 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Oct  9 10:02:15 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:02:15 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:02:15 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:02:15.122 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:02:15 compute-1 podman[171174]: 2025-10-09 10:02:15.569028163 +0000 UTC m=+0.075761438 container health_status 36bf385830b85e085dc3ff9729118ef141f0f1a0fdc2513f7947ab6ab795421e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ovn_controller, tcib_managed=true)
Oct  9 10:02:15 compute-1 nova_compute[162974]: 2025-10-09 10:02:15.624 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:02:15 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 10:02:16 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:02:16 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:02:16 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:02:16.301 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:02:17 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:02:17 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000009s ======
Oct  9 10:02:17 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:02:17.125 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Oct  9 10:02:17 compute-1 nova_compute[162974]: 2025-10-09 10:02:17.789 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:02:17 compute-1 ovn_controller[62080]: 2025-10-09T10:02:17Z|00084|memory_trim|INFO|Detected inactivity (last active 30003 ms ago): trimming memory
Oct  9 10:02:18 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:02:18 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:02:18 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:02:18.303 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:02:19 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:02:19 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000009s ======
Oct  9 10:02:19 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:02:19.128 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Oct  9 10:02:20 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:02:20 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:02:20 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:02:20.306 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:02:20 compute-1 nova_compute[162974]: 2025-10-09 10:02:20.625 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:02:20 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 10:02:21 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:02:21 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000009s ======
Oct  9 10:02:21 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:02:21.130 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Oct  9 10:02:22 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:02:22 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 10:02:22 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:02:22.308 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 10:02:22 compute-1 nova_compute[162974]: 2025-10-09 10:02:22.791 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:02:23 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:02:23 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:02:23 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:02:23.132 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:02:24 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:02:24 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 10:02:24 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:02:24.310 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 10:02:25 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:02:25 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 10:02:25 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:02:25.134 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 10:02:25 compute-1 podman[171202]: 2025-10-09 10:02:25.534865046 +0000 UTC m=+0.038421403 container health_status dd7a5da700b8ba72ceba21d69aecf00384a339e317089fce3f8212a1183599aa (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct  9 10:02:25 compute-1 nova_compute[162974]: 2025-10-09 10:02:25.626 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:02:25 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 10:02:26 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:02:26 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 10:02:26 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:02:26.313 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 10:02:27 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:02:27 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:02:27 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:02:27.137 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:02:27 compute-1 nova_compute[162974]: 2025-10-09 10:02:27.791 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:02:28 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:02:28 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:02:28 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:02:28.316 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:02:29 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:02:29 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:02:29 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:02:29.141 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:02:30 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:02:30 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:02:30 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:02:30.319 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:02:30 compute-1 nova_compute[162974]: 2025-10-09 10:02:30.627 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:02:30 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 10:02:31 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:02:31 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 10:02:31 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:02:31.143 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 10:02:32 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:02:32 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:02:32 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:02:32.321 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:02:32 compute-1 nova_compute[162974]: 2025-10-09 10:02:32.795 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:02:33 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:02:33 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:02:33 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:02:33.146 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:02:33 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 10:02:33 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 10:02:34 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:02:34 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:02:34 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:02:34.323 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:02:34 compute-1 podman[171326]: 2025-10-09 10:02:34.544323614 +0000 UTC m=+0.047408786 container health_status 5e1150c93314d5b38815fc8130e03b03cc91771cd442ce724d63cd33a7373f75 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent)
Oct  9 10:02:34 compute-1 podman[171327]: 2025-10-09 10:02:34.544398565 +0000 UTC m=+0.046549267 container health_status a9976f6a2312783f72012e1ec9b07f14297aa62297711f47c0d257996be3427a (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.schema-version=1.0, config_id=multipathd)
Oct  9 10:02:34 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  9 10:02:34 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 10:02:34 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 10:02:34 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  9 10:02:35 compute-1 nova_compute[162974]: 2025-10-09 10:02:35.057 2 DEBUG oslo_concurrency.lockutils [None req-0421d1cf-1a66-47db-b6f9-653f371b7629 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Acquiring lock "21bbcca2-5cec-4324-9af4-6d2090b6b113" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  9 10:02:35 compute-1 nova_compute[162974]: 2025-10-09 10:02:35.058 2 DEBUG oslo_concurrency.lockutils [None req-0421d1cf-1a66-47db-b6f9-653f371b7629 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Lock "21bbcca2-5cec-4324-9af4-6d2090b6b113" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  9 10:02:35 compute-1 nova_compute[162974]: 2025-10-09 10:02:35.068 2 DEBUG nova.compute.manager [None req-0421d1cf-1a66-47db-b6f9-653f371b7629 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: 21bbcca2-5cec-4324-9af4-6d2090b6b113] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct  9 10:02:35 compute-1 nova_compute[162974]: 2025-10-09 10:02:35.122 2 DEBUG oslo_concurrency.lockutils [None req-0421d1cf-1a66-47db-b6f9-653f371b7629 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  9 10:02:35 compute-1 nova_compute[162974]: 2025-10-09 10:02:35.123 2 DEBUG oslo_concurrency.lockutils [None req-0421d1cf-1a66-47db-b6f9-653f371b7629 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  9 10:02:35 compute-1 nova_compute[162974]: 2025-10-09 10:02:35.127 2 DEBUG nova.virt.hardware [None req-0421d1cf-1a66-47db-b6f9-653f371b7629 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct  9 10:02:35 compute-1 nova_compute[162974]: 2025-10-09 10:02:35.128 2 INFO nova.compute.claims [None req-0421d1cf-1a66-47db-b6f9-653f371b7629 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: 21bbcca2-5cec-4324-9af4-6d2090b6b113] Claim successful on node compute-1.ctlplane.example.com#033[00m
Oct  9 10:02:35 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:02:35 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:02:35 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:02:35.148 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:02:35 compute-1 nova_compute[162974]: 2025-10-09 10:02:35.191 2 DEBUG oslo_concurrency.processutils [None req-0421d1cf-1a66-47db-b6f9-653f371b7629 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  9 10:02:35 compute-1 ceph-mon[9795]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct  9 10:02:35 compute-1 ceph-mon[9795]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/4131332810' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  9 10:02:35 compute-1 nova_compute[162974]: 2025-10-09 10:02:35.544 2 DEBUG oslo_concurrency.processutils [None req-0421d1cf-1a66-47db-b6f9-653f371b7629 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.352s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  9 10:02:35 compute-1 nova_compute[162974]: 2025-10-09 10:02:35.548 2 DEBUG nova.compute.provider_tree [None req-0421d1cf-1a66-47db-b6f9-653f371b7629 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Inventory has not changed in ProviderTree for provider: 79aa81b0-5a5d-4643-a355-ec5461cb321a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  9 10:02:35 compute-1 nova_compute[162974]: 2025-10-09 10:02:35.561 2 DEBUG nova.scheduler.client.report [None req-0421d1cf-1a66-47db-b6f9-653f371b7629 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Inventory has not changed for provider 79aa81b0-5a5d-4643-a355-ec5461cb321a based on inventory data: {'VCPU': {'total': 4, 'reserved': 0, 'min_unit': 1, 'max_unit': 4, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  9 10:02:35 compute-1 nova_compute[162974]: 2025-10-09 10:02:35.575 2 DEBUG oslo_concurrency.lockutils [None req-0421d1cf-1a66-47db-b6f9-653f371b7629 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.452s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  9 10:02:35 compute-1 nova_compute[162974]: 2025-10-09 10:02:35.576 2 DEBUG nova.compute.manager [None req-0421d1cf-1a66-47db-b6f9-653f371b7629 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: 21bbcca2-5cec-4324-9af4-6d2090b6b113] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct  9 10:02:35 compute-1 nova_compute[162974]: 2025-10-09 10:02:35.611 2 DEBUG nova.compute.manager [None req-0421d1cf-1a66-47db-b6f9-653f371b7629 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: 21bbcca2-5cec-4324-9af4-6d2090b6b113] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct  9 10:02:35 compute-1 nova_compute[162974]: 2025-10-09 10:02:35.611 2 DEBUG nova.network.neutron [None req-0421d1cf-1a66-47db-b6f9-653f371b7629 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: 21bbcca2-5cec-4324-9af4-6d2090b6b113] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct  9 10:02:35 compute-1 nova_compute[162974]: 2025-10-09 10:02:35.628 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:02:35 compute-1 nova_compute[162974]: 2025-10-09 10:02:35.638 2 INFO nova.virt.libvirt.driver [None req-0421d1cf-1a66-47db-b6f9-653f371b7629 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: 21bbcca2-5cec-4324-9af4-6d2090b6b113] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct  9 10:02:35 compute-1 nova_compute[162974]: 2025-10-09 10:02:35.652 2 DEBUG nova.compute.manager [None req-0421d1cf-1a66-47db-b6f9-653f371b7629 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: 21bbcca2-5cec-4324-9af4-6d2090b6b113] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct  9 10:02:35 compute-1 nova_compute[162974]: 2025-10-09 10:02:35.716 2 DEBUG nova.compute.manager [None req-0421d1cf-1a66-47db-b6f9-653f371b7629 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: 21bbcca2-5cec-4324-9af4-6d2090b6b113] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct  9 10:02:35 compute-1 nova_compute[162974]: 2025-10-09 10:02:35.716 2 DEBUG nova.virt.libvirt.driver [None req-0421d1cf-1a66-47db-b6f9-653f371b7629 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: 21bbcca2-5cec-4324-9af4-6d2090b6b113] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct  9 10:02:35 compute-1 nova_compute[162974]: 2025-10-09 10:02:35.717 2 INFO nova.virt.libvirt.driver [None req-0421d1cf-1a66-47db-b6f9-653f371b7629 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: 21bbcca2-5cec-4324-9af4-6d2090b6b113] Creating image(s)#033[00m
Oct  9 10:02:35 compute-1 nova_compute[162974]: 2025-10-09 10:02:35.737 2 DEBUG nova.storage.rbd_utils [None req-0421d1cf-1a66-47db-b6f9-653f371b7629 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] rbd image 21bbcca2-5cec-4324-9af4-6d2090b6b113_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  9 10:02:35 compute-1 nova_compute[162974]: 2025-10-09 10:02:35.761 2 DEBUG nova.storage.rbd_utils [None req-0421d1cf-1a66-47db-b6f9-653f371b7629 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] rbd image 21bbcca2-5cec-4324-9af4-6d2090b6b113_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  9 10:02:35 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 10:02:35 compute-1 nova_compute[162974]: 2025-10-09 10:02:35.785 2 DEBUG nova.storage.rbd_utils [None req-0421d1cf-1a66-47db-b6f9-653f371b7629 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] rbd image 21bbcca2-5cec-4324-9af4-6d2090b6b113_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  9 10:02:35 compute-1 nova_compute[162974]: 2025-10-09 10:02:35.789 2 DEBUG oslo_concurrency.processutils [None req-0421d1cf-1a66-47db-b6f9-653f371b7629 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/5c8d02c7691a8289e33d8b283b22550ff081dadb --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  9 10:02:35 compute-1 nova_compute[162974]: 2025-10-09 10:02:35.849 2 DEBUG oslo_concurrency.processutils [None req-0421d1cf-1a66-47db-b6f9-653f371b7629 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/5c8d02c7691a8289e33d8b283b22550ff081dadb --force-share --output=json" returned: 0 in 0.060s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  9 10:02:35 compute-1 nova_compute[162974]: 2025-10-09 10:02:35.850 2 DEBUG oslo_concurrency.lockutils [None req-0421d1cf-1a66-47db-b6f9-653f371b7629 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Acquiring lock "5c8d02c7691a8289e33d8b283b22550ff081dadb" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  9 10:02:35 compute-1 nova_compute[162974]: 2025-10-09 10:02:35.851 2 DEBUG oslo_concurrency.lockutils [None req-0421d1cf-1a66-47db-b6f9-653f371b7629 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Lock "5c8d02c7691a8289e33d8b283b22550ff081dadb" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  9 10:02:35 compute-1 nova_compute[162974]: 2025-10-09 10:02:35.851 2 DEBUG oslo_concurrency.lockutils [None req-0421d1cf-1a66-47db-b6f9-653f371b7629 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Lock "5c8d02c7691a8289e33d8b283b22550ff081dadb" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  9 10:02:35 compute-1 nova_compute[162974]: 2025-10-09 10:02:35.872 2 DEBUG nova.storage.rbd_utils [None req-0421d1cf-1a66-47db-b6f9-653f371b7629 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] rbd image 21bbcca2-5cec-4324-9af4-6d2090b6b113_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  9 10:02:35 compute-1 nova_compute[162974]: 2025-10-09 10:02:35.875 2 DEBUG oslo_concurrency.processutils [None req-0421d1cf-1a66-47db-b6f9-653f371b7629 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/5c8d02c7691a8289e33d8b283b22550ff081dadb 21bbcca2-5cec-4324-9af4-6d2090b6b113_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  9 10:02:36 compute-1 nova_compute[162974]: 2025-10-09 10:02:36.019 2 DEBUG oslo_concurrency.processutils [None req-0421d1cf-1a66-47db-b6f9-653f371b7629 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/5c8d02c7691a8289e33d8b283b22550ff081dadb 21bbcca2-5cec-4324-9af4-6d2090b6b113_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.144s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  9 10:02:36 compute-1 nova_compute[162974]: 2025-10-09 10:02:36.070 2 DEBUG nova.storage.rbd_utils [None req-0421d1cf-1a66-47db-b6f9-653f371b7629 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] resizing rbd image 21bbcca2-5cec-4324-9af4-6d2090b6b113_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct  9 10:02:36 compute-1 nova_compute[162974]: 2025-10-09 10:02:36.131 2 DEBUG nova.objects.instance [None req-0421d1cf-1a66-47db-b6f9-653f371b7629 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Lazy-loading 'migration_context' on Instance uuid 21bbcca2-5cec-4324-9af4-6d2090b6b113 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  9 10:02:36 compute-1 nova_compute[162974]: 2025-10-09 10:02:36.138 2 DEBUG nova.policy [None req-0421d1cf-1a66-47db-b6f9-653f371b7629 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '2351e05157514d1995a1ea4151d12fee', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'c69d102fb5504f48809f5fc47f1cb831', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct  9 10:02:36 compute-1 nova_compute[162974]: 2025-10-09 10:02:36.143 2 DEBUG nova.virt.libvirt.driver [None req-0421d1cf-1a66-47db-b6f9-653f371b7629 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: 21bbcca2-5cec-4324-9af4-6d2090b6b113] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct  9 10:02:36 compute-1 nova_compute[162974]: 2025-10-09 10:02:36.143 2 DEBUG nova.virt.libvirt.driver [None req-0421d1cf-1a66-47db-b6f9-653f371b7629 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: 21bbcca2-5cec-4324-9af4-6d2090b6b113] Ensure instance console log exists: /var/lib/nova/instances/21bbcca2-5cec-4324-9af4-6d2090b6b113/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct  9 10:02:36 compute-1 nova_compute[162974]: 2025-10-09 10:02:36.144 2 DEBUG oslo_concurrency.lockutils [None req-0421d1cf-1a66-47db-b6f9-653f371b7629 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  9 10:02:36 compute-1 nova_compute[162974]: 2025-10-09 10:02:36.144 2 DEBUG oslo_concurrency.lockutils [None req-0421d1cf-1a66-47db-b6f9-653f371b7629 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  9 10:02:36 compute-1 nova_compute[162974]: 2025-10-09 10:02:36.144 2 DEBUG oslo_concurrency.lockutils [None req-0421d1cf-1a66-47db-b6f9-653f371b7629 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  9 10:02:36 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:02:36 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 10:02:36 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:02:36.325 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 10:02:36 compute-1 nova_compute[162974]: 2025-10-09 10:02:36.604 2 DEBUG nova.network.neutron [None req-0421d1cf-1a66-47db-b6f9-653f371b7629 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: 21bbcca2-5cec-4324-9af4-6d2090b6b113] Successfully created port: 52ec2db5-2e22-45a7-92ee-f0e360776c10 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct  9 10:02:37 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:02:37 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 10:02:37 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:02:37.150 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 10:02:37 compute-1 nova_compute[162974]: 2025-10-09 10:02:37.795 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:02:38 compute-1 nova_compute[162974]: 2025-10-09 10:02:38.158 2 DEBUG nova.network.neutron [None req-0421d1cf-1a66-47db-b6f9-653f371b7629 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: 21bbcca2-5cec-4324-9af4-6d2090b6b113] Successfully updated port: 52ec2db5-2e22-45a7-92ee-f0e360776c10 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct  9 10:02:38 compute-1 nova_compute[162974]: 2025-10-09 10:02:38.169 2 DEBUG oslo_concurrency.lockutils [None req-0421d1cf-1a66-47db-b6f9-653f371b7629 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Acquiring lock "refresh_cache-21bbcca2-5cec-4324-9af4-6d2090b6b113" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  9 10:02:38 compute-1 nova_compute[162974]: 2025-10-09 10:02:38.169 2 DEBUG oslo_concurrency.lockutils [None req-0421d1cf-1a66-47db-b6f9-653f371b7629 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Acquired lock "refresh_cache-21bbcca2-5cec-4324-9af4-6d2090b6b113" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  9 10:02:38 compute-1 nova_compute[162974]: 2025-10-09 10:02:38.169 2 DEBUG nova.network.neutron [None req-0421d1cf-1a66-47db-b6f9-653f371b7629 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: 21bbcca2-5cec-4324-9af4-6d2090b6b113] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  9 10:02:38 compute-1 nova_compute[162974]: 2025-10-09 10:02:38.235 2 DEBUG nova.compute.manager [req-5f81ecd8-2d00-4a0f-9169-98ac7f8ea3fb req-a4fafe2e-da1b-4fb7-87e2-8e8ec37fc676 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: 21bbcca2-5cec-4324-9af4-6d2090b6b113] Received event network-changed-52ec2db5-2e22-45a7-92ee-f0e360776c10 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  9 10:02:38 compute-1 nova_compute[162974]: 2025-10-09 10:02:38.235 2 DEBUG nova.compute.manager [req-5f81ecd8-2d00-4a0f-9169-98ac7f8ea3fb req-a4fafe2e-da1b-4fb7-87e2-8e8ec37fc676 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: 21bbcca2-5cec-4324-9af4-6d2090b6b113] Refreshing instance network info cache due to event network-changed-52ec2db5-2e22-45a7-92ee-f0e360776c10. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  9 10:02:38 compute-1 nova_compute[162974]: 2025-10-09 10:02:38.235 2 DEBUG oslo_concurrency.lockutils [req-5f81ecd8-2d00-4a0f-9169-98ac7f8ea3fb req-a4fafe2e-da1b-4fb7-87e2-8e8ec37fc676 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] Acquiring lock "refresh_cache-21bbcca2-5cec-4324-9af4-6d2090b6b113" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  9 10:02:38 compute-1 nova_compute[162974]: 2025-10-09 10:02:38.285 2 DEBUG nova.network.neutron [None req-0421d1cf-1a66-47db-b6f9-653f371b7629 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: 21bbcca2-5cec-4324-9af4-6d2090b6b113] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct  9 10:02:38 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:02:38 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:02:38 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:02:38.328 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:02:38 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 10:02:38 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 10:02:39 compute-1 nova_compute[162974]: 2025-10-09 10:02:39.138 2 DEBUG nova.network.neutron [None req-0421d1cf-1a66-47db-b6f9-653f371b7629 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: 21bbcca2-5cec-4324-9af4-6d2090b6b113] Updating instance_info_cache with network_info: [{"id": "52ec2db5-2e22-45a7-92ee-f0e360776c10", "address": "fa:16:3e:19:24:81", "network": {"id": "7e36da7d-913d-4101-a7c2-e1698abf35be", "bridge": "br-int", "label": "tempest-network-smoke--21347962", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c69d102fb5504f48809f5fc47f1cb831", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap52ec2db5-2e", "ovs_interfaceid": "52ec2db5-2e22-45a7-92ee-f0e360776c10", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  9 10:02:39 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:02:39 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:02:39 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:02:39.152 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:02:39 compute-1 nova_compute[162974]: 2025-10-09 10:02:39.156 2 DEBUG oslo_concurrency.lockutils [None req-0421d1cf-1a66-47db-b6f9-653f371b7629 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Releasing lock "refresh_cache-21bbcca2-5cec-4324-9af4-6d2090b6b113" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  9 10:02:39 compute-1 nova_compute[162974]: 2025-10-09 10:02:39.156 2 DEBUG nova.compute.manager [None req-0421d1cf-1a66-47db-b6f9-653f371b7629 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: 21bbcca2-5cec-4324-9af4-6d2090b6b113] Instance network_info: |[{"id": "52ec2db5-2e22-45a7-92ee-f0e360776c10", "address": "fa:16:3e:19:24:81", "network": {"id": "7e36da7d-913d-4101-a7c2-e1698abf35be", "bridge": "br-int", "label": "tempest-network-smoke--21347962", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c69d102fb5504f48809f5fc47f1cb831", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap52ec2db5-2e", "ovs_interfaceid": "52ec2db5-2e22-45a7-92ee-f0e360776c10", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct  9 10:02:39 compute-1 nova_compute[162974]: 2025-10-09 10:02:39.157 2 DEBUG oslo_concurrency.lockutils [req-5f81ecd8-2d00-4a0f-9169-98ac7f8ea3fb req-a4fafe2e-da1b-4fb7-87e2-8e8ec37fc676 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] Acquired lock "refresh_cache-21bbcca2-5cec-4324-9af4-6d2090b6b113" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  9 10:02:39 compute-1 nova_compute[162974]: 2025-10-09 10:02:39.157 2 DEBUG nova.network.neutron [req-5f81ecd8-2d00-4a0f-9169-98ac7f8ea3fb req-a4fafe2e-da1b-4fb7-87e2-8e8ec37fc676 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: 21bbcca2-5cec-4324-9af4-6d2090b6b113] Refreshing network info cache for port 52ec2db5-2e22-45a7-92ee-f0e360776c10 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  9 10:02:39 compute-1 nova_compute[162974]: 2025-10-09 10:02:39.160 2 DEBUG nova.virt.libvirt.driver [None req-0421d1cf-1a66-47db-b6f9-653f371b7629 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: 21bbcca2-5cec-4324-9af4-6d2090b6b113] Start _get_guest_xml network_info=[{"id": "52ec2db5-2e22-45a7-92ee-f0e360776c10", "address": "fa:16:3e:19:24:81", "network": {"id": "7e36da7d-913d-4101-a7c2-e1698abf35be", "bridge": "br-int", "label": "tempest-network-smoke--21347962", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c69d102fb5504f48809f5fc47f1cb831", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap52ec2db5-2e", "ovs_interfaceid": "52ec2db5-2e22-45a7-92ee-f0e360776c10", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-09T09:54:31Z,direct_url=<?>,disk_format='qcow2',id=9546778e-959c-466e-9bef-81ace5bd1cc5,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='a53d5690b6a54109990182326650a2b8',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-09T09:54:34Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_options': None, 'size': 0, 'device_type': 'disk', 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'guest_format': None, 'boot_index': 0, 'device_name': '/dev/vda', 'encrypted': False, 'encryption_format': None, 'image_id': '9546778e-959c-466e-9bef-81ace5bd1cc5'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct  9 10:02:39 compute-1 nova_compute[162974]: 2025-10-09 10:02:39.163 2 WARNING nova.virt.libvirt.driver [None req-0421d1cf-1a66-47db-b6f9-653f371b7629 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct  9 10:02:39 compute-1 nova_compute[162974]: 2025-10-09 10:02:39.167 2 DEBUG nova.virt.libvirt.host [None req-0421d1cf-1a66-47db-b6f9-653f371b7629 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct  9 10:02:39 compute-1 nova_compute[162974]: 2025-10-09 10:02:39.168 2 DEBUG nova.virt.libvirt.host [None req-0421d1cf-1a66-47db-b6f9-653f371b7629 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct  9 10:02:39 compute-1 nova_compute[162974]: 2025-10-09 10:02:39.173 2 DEBUG nova.virt.libvirt.host [None req-0421d1cf-1a66-47db-b6f9-653f371b7629 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct  9 10:02:39 compute-1 nova_compute[162974]: 2025-10-09 10:02:39.173 2 DEBUG nova.virt.libvirt.host [None req-0421d1cf-1a66-47db-b6f9-653f371b7629 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct  9 10:02:39 compute-1 nova_compute[162974]: 2025-10-09 10:02:39.173 2 DEBUG nova.virt.libvirt.driver [None req-0421d1cf-1a66-47db-b6f9-653f371b7629 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct  9 10:02:39 compute-1 nova_compute[162974]: 2025-10-09 10:02:39.174 2 DEBUG nova.virt.hardware [None req-0421d1cf-1a66-47db-b6f9-653f371b7629 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-09T09:54:30Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='6c4b2ce4-c9d2-467c-bac4-dc6a1184a891',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-09T09:54:31Z,direct_url=<?>,disk_format='qcow2',id=9546778e-959c-466e-9bef-81ace5bd1cc5,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='a53d5690b6a54109990182326650a2b8',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-09T09:54:34Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct  9 10:02:39 compute-1 nova_compute[162974]: 2025-10-09 10:02:39.174 2 DEBUG nova.virt.hardware [None req-0421d1cf-1a66-47db-b6f9-653f371b7629 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct  9 10:02:39 compute-1 nova_compute[162974]: 2025-10-09 10:02:39.174 2 DEBUG nova.virt.hardware [None req-0421d1cf-1a66-47db-b6f9-653f371b7629 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct  9 10:02:39 compute-1 nova_compute[162974]: 2025-10-09 10:02:39.174 2 DEBUG nova.virt.hardware [None req-0421d1cf-1a66-47db-b6f9-653f371b7629 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct  9 10:02:39 compute-1 nova_compute[162974]: 2025-10-09 10:02:39.174 2 DEBUG nova.virt.hardware [None req-0421d1cf-1a66-47db-b6f9-653f371b7629 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct  9 10:02:39 compute-1 nova_compute[162974]: 2025-10-09 10:02:39.175 2 DEBUG nova.virt.hardware [None req-0421d1cf-1a66-47db-b6f9-653f371b7629 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct  9 10:02:39 compute-1 nova_compute[162974]: 2025-10-09 10:02:39.175 2 DEBUG nova.virt.hardware [None req-0421d1cf-1a66-47db-b6f9-653f371b7629 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct  9 10:02:39 compute-1 nova_compute[162974]: 2025-10-09 10:02:39.175 2 DEBUG nova.virt.hardware [None req-0421d1cf-1a66-47db-b6f9-653f371b7629 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct  9 10:02:39 compute-1 nova_compute[162974]: 2025-10-09 10:02:39.175 2 DEBUG nova.virt.hardware [None req-0421d1cf-1a66-47db-b6f9-653f371b7629 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct  9 10:02:39 compute-1 nova_compute[162974]: 2025-10-09 10:02:39.175 2 DEBUG nova.virt.hardware [None req-0421d1cf-1a66-47db-b6f9-653f371b7629 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct  9 10:02:39 compute-1 nova_compute[162974]: 2025-10-09 10:02:39.175 2 DEBUG nova.virt.hardware [None req-0421d1cf-1a66-47db-b6f9-653f371b7629 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct  9 10:02:39 compute-1 nova_compute[162974]: 2025-10-09 10:02:39.178 2 DEBUG oslo_concurrency.processutils [None req-0421d1cf-1a66-47db-b6f9-653f371b7629 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  9 10:02:39 compute-1 ceph-mon[9795]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Oct  9 10:02:39 compute-1 ceph-mon[9795]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2502987184' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  9 10:02:39 compute-1 nova_compute[162974]: 2025-10-09 10:02:39.559 2 DEBUG oslo_concurrency.processutils [None req-0421d1cf-1a66-47db-b6f9-653f371b7629 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.381s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  9 10:02:39 compute-1 nova_compute[162974]: 2025-10-09 10:02:39.579 2 DEBUG nova.storage.rbd_utils [None req-0421d1cf-1a66-47db-b6f9-653f371b7629 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] rbd image 21bbcca2-5cec-4324-9af4-6d2090b6b113_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  9 10:02:39 compute-1 nova_compute[162974]: 2025-10-09 10:02:39.585 2 DEBUG oslo_concurrency.processutils [None req-0421d1cf-1a66-47db-b6f9-653f371b7629 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  9 10:02:39 compute-1 ceph-mon[9795]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Oct  9 10:02:39 compute-1 ceph-mon[9795]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1572329529' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  9 10:02:39 compute-1 nova_compute[162974]: 2025-10-09 10:02:39.958 2 DEBUG oslo_concurrency.processutils [None req-0421d1cf-1a66-47db-b6f9-653f371b7629 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.373s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  9 10:02:39 compute-1 nova_compute[162974]: 2025-10-09 10:02:39.960 2 DEBUG nova.virt.libvirt.vif [None req-0421d1cf-1a66-47db-b6f9-653f371b7629 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-09T10:02:34Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-670315443',display_name='tempest-TestNetworkBasicOps-server-670315443',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-670315443',id=11,image_ref='9546778e-959c-466e-9bef-81ace5bd1cc5',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBMC7xAI/YYK+cbn0PHRoxiiahdIQdKccwfERXZSRnLEKnS9i37SYurywRQCZNQPHgGjlY2G9Hgc0qmCz+iCo4fLyxnirlBRGL3WmP1CDMLNiBavqZTIOedAyGcrchrWbVA==',key_name='tempest-TestNetworkBasicOps-462284814',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='c69d102fb5504f48809f5fc47f1cb831',ramdisk_id='',reservation_id='r-wq2l0ql1',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='9546778e-959c-466e-9bef-81ace5bd1cc5',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-74406332',owner_user_name='tempest-TestNetworkBasicOps-74406332-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-09T10:02:35Z,user_data=None,user_id='2351e05157514d1995a1ea4151d12fee',uuid=21bbcca2-5cec-4324-9af4-6d2090b6b113,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "52ec2db5-2e22-45a7-92ee-f0e360776c10", "address": "fa:16:3e:19:24:81", "network": {"id": "7e36da7d-913d-4101-a7c2-e1698abf35be", "bridge": "br-int", "label": "tempest-network-smoke--21347962", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c69d102fb5504f48809f5fc47f1cb831", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap52ec2db5-2e", "ovs_interfaceid": "52ec2db5-2e22-45a7-92ee-f0e360776c10", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct  9 10:02:39 compute-1 nova_compute[162974]: 2025-10-09 10:02:39.960 2 DEBUG nova.network.os_vif_util [None req-0421d1cf-1a66-47db-b6f9-653f371b7629 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Converting VIF {"id": "52ec2db5-2e22-45a7-92ee-f0e360776c10", "address": "fa:16:3e:19:24:81", "network": {"id": "7e36da7d-913d-4101-a7c2-e1698abf35be", "bridge": "br-int", "label": "tempest-network-smoke--21347962", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c69d102fb5504f48809f5fc47f1cb831", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap52ec2db5-2e", "ovs_interfaceid": "52ec2db5-2e22-45a7-92ee-f0e360776c10", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct  9 10:02:39 compute-1 nova_compute[162974]: 2025-10-09 10:02:39.961 2 DEBUG nova.network.os_vif_util [None req-0421d1cf-1a66-47db-b6f9-653f371b7629 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:19:24:81,bridge_name='br-int',has_traffic_filtering=True,id=52ec2db5-2e22-45a7-92ee-f0e360776c10,network=Network(7e36da7d-913d-4101-a7c2-e1698abf35be),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap52ec2db5-2e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct  9 10:02:39 compute-1 nova_compute[162974]: 2025-10-09 10:02:39.961 2 DEBUG nova.objects.instance [None req-0421d1cf-1a66-47db-b6f9-653f371b7629 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Lazy-loading 'pci_devices' on Instance uuid 21bbcca2-5cec-4324-9af4-6d2090b6b113 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct  9 10:02:39 compute-1 nova_compute[162974]: 2025-10-09 10:02:39.981 2 DEBUG nova.virt.libvirt.driver [None req-0421d1cf-1a66-47db-b6f9-653f371b7629 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: 21bbcca2-5cec-4324-9af4-6d2090b6b113] End _get_guest_xml xml=<domain type="kvm">
Oct  9 10:02:39 compute-1 nova_compute[162974]:  <uuid>21bbcca2-5cec-4324-9af4-6d2090b6b113</uuid>
Oct  9 10:02:39 compute-1 nova_compute[162974]:  <name>instance-0000000b</name>
Oct  9 10:02:39 compute-1 nova_compute[162974]:  <memory>131072</memory>
Oct  9 10:02:39 compute-1 nova_compute[162974]:  <vcpu>1</vcpu>
Oct  9 10:02:39 compute-1 nova_compute[162974]:  <metadata>
Oct  9 10:02:39 compute-1 nova_compute[162974]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  9 10:02:39 compute-1 nova_compute[162974]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  9 10:02:39 compute-1 nova_compute[162974]:      <nova:name>tempest-TestNetworkBasicOps-server-670315443</nova:name>
Oct  9 10:02:39 compute-1 nova_compute[162974]:      <nova:creationTime>2025-10-09 10:02:39</nova:creationTime>
Oct  9 10:02:39 compute-1 nova_compute[162974]:      <nova:flavor name="m1.nano">
Oct  9 10:02:39 compute-1 nova_compute[162974]:        <nova:memory>128</nova:memory>
Oct  9 10:02:39 compute-1 nova_compute[162974]:        <nova:disk>1</nova:disk>
Oct  9 10:02:39 compute-1 nova_compute[162974]:        <nova:swap>0</nova:swap>
Oct  9 10:02:39 compute-1 nova_compute[162974]:        <nova:ephemeral>0</nova:ephemeral>
Oct  9 10:02:39 compute-1 nova_compute[162974]:        <nova:vcpus>1</nova:vcpus>
Oct  9 10:02:39 compute-1 nova_compute[162974]:      </nova:flavor>
Oct  9 10:02:39 compute-1 nova_compute[162974]:      <nova:owner>
Oct  9 10:02:39 compute-1 nova_compute[162974]:        <nova:user uuid="2351e05157514d1995a1ea4151d12fee">tempest-TestNetworkBasicOps-74406332-project-member</nova:user>
Oct  9 10:02:39 compute-1 nova_compute[162974]:        <nova:project uuid="c69d102fb5504f48809f5fc47f1cb831">tempest-TestNetworkBasicOps-74406332</nova:project>
Oct  9 10:02:39 compute-1 nova_compute[162974]:      </nova:owner>
Oct  9 10:02:39 compute-1 nova_compute[162974]:      <nova:root type="image" uuid="9546778e-959c-466e-9bef-81ace5bd1cc5"/>
Oct  9 10:02:39 compute-1 nova_compute[162974]:      <nova:ports>
Oct  9 10:02:39 compute-1 nova_compute[162974]:        <nova:port uuid="52ec2db5-2e22-45a7-92ee-f0e360776c10">
Oct  9 10:02:39 compute-1 nova_compute[162974]:          <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Oct  9 10:02:39 compute-1 nova_compute[162974]:        </nova:port>
Oct  9 10:02:39 compute-1 nova_compute[162974]:      </nova:ports>
Oct  9 10:02:39 compute-1 nova_compute[162974]:    </nova:instance>
Oct  9 10:02:39 compute-1 nova_compute[162974]:  </metadata>
Oct  9 10:02:39 compute-1 nova_compute[162974]:  <sysinfo type="smbios">
Oct  9 10:02:39 compute-1 nova_compute[162974]:    <system>
Oct  9 10:02:39 compute-1 nova_compute[162974]:      <entry name="manufacturer">RDO</entry>
Oct  9 10:02:39 compute-1 nova_compute[162974]:      <entry name="product">OpenStack Compute</entry>
Oct  9 10:02:39 compute-1 nova_compute[162974]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  9 10:02:39 compute-1 nova_compute[162974]:      <entry name="serial">21bbcca2-5cec-4324-9af4-6d2090b6b113</entry>
Oct  9 10:02:39 compute-1 nova_compute[162974]:      <entry name="uuid">21bbcca2-5cec-4324-9af4-6d2090b6b113</entry>
Oct  9 10:02:39 compute-1 nova_compute[162974]:      <entry name="family">Virtual Machine</entry>
Oct  9 10:02:39 compute-1 nova_compute[162974]:    </system>
Oct  9 10:02:39 compute-1 nova_compute[162974]:  </sysinfo>
Oct  9 10:02:39 compute-1 nova_compute[162974]:  <os>
Oct  9 10:02:39 compute-1 nova_compute[162974]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  9 10:02:39 compute-1 nova_compute[162974]:    <boot dev="hd"/>
Oct  9 10:02:39 compute-1 nova_compute[162974]:    <smbios mode="sysinfo"/>
Oct  9 10:02:39 compute-1 nova_compute[162974]:  </os>
Oct  9 10:02:39 compute-1 nova_compute[162974]:  <features>
Oct  9 10:02:39 compute-1 nova_compute[162974]:    <acpi/>
Oct  9 10:02:39 compute-1 nova_compute[162974]:    <apic/>
Oct  9 10:02:39 compute-1 nova_compute[162974]:    <vmcoreinfo/>
Oct  9 10:02:39 compute-1 nova_compute[162974]:  </features>
Oct  9 10:02:39 compute-1 nova_compute[162974]:  <clock offset="utc">
Oct  9 10:02:39 compute-1 nova_compute[162974]:    <timer name="pit" tickpolicy="delay"/>
Oct  9 10:02:39 compute-1 nova_compute[162974]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  9 10:02:39 compute-1 nova_compute[162974]:    <timer name="hpet" present="no"/>
Oct  9 10:02:39 compute-1 nova_compute[162974]:  </clock>
Oct  9 10:02:39 compute-1 nova_compute[162974]:  <cpu mode="host-model" match="exact">
Oct  9 10:02:39 compute-1 nova_compute[162974]:    <topology sockets="1" cores="1" threads="1"/>
Oct  9 10:02:39 compute-1 nova_compute[162974]:  </cpu>
Oct  9 10:02:39 compute-1 nova_compute[162974]:  <devices>
Oct  9 10:02:39 compute-1 nova_compute[162974]:    <disk type="network" device="disk">
Oct  9 10:02:39 compute-1 nova_compute[162974]:      <driver type="raw" cache="none"/>
Oct  9 10:02:39 compute-1 nova_compute[162974]:      <source protocol="rbd" name="vms/21bbcca2-5cec-4324-9af4-6d2090b6b113_disk">
Oct  9 10:02:39 compute-1 nova_compute[162974]:        <host name="192.168.122.100" port="6789"/>
Oct  9 10:02:39 compute-1 nova_compute[162974]:        <host name="192.168.122.102" port="6789"/>
Oct  9 10:02:39 compute-1 nova_compute[162974]:        <host name="192.168.122.101" port="6789"/>
Oct  9 10:02:39 compute-1 nova_compute[162974]:      </source>
Oct  9 10:02:39 compute-1 nova_compute[162974]:      <auth username="openstack">
Oct  9 10:02:39 compute-1 nova_compute[162974]:        <secret type="ceph" uuid="286f8bf0-da72-5823-9a4e-ac4457d9e609"/>
Oct  9 10:02:39 compute-1 nova_compute[162974]:      </auth>
Oct  9 10:02:39 compute-1 nova_compute[162974]:      <target dev="vda" bus="virtio"/>
Oct  9 10:02:39 compute-1 nova_compute[162974]:    </disk>
Oct  9 10:02:39 compute-1 nova_compute[162974]:    <disk type="network" device="cdrom">
Oct  9 10:02:39 compute-1 nova_compute[162974]:      <driver type="raw" cache="none"/>
Oct  9 10:02:39 compute-1 nova_compute[162974]:      <source protocol="rbd" name="vms/21bbcca2-5cec-4324-9af4-6d2090b6b113_disk.config">
Oct  9 10:02:39 compute-1 nova_compute[162974]:        <host name="192.168.122.100" port="6789"/>
Oct  9 10:02:39 compute-1 nova_compute[162974]:        <host name="192.168.122.102" port="6789"/>
Oct  9 10:02:39 compute-1 nova_compute[162974]:        <host name="192.168.122.101" port="6789"/>
Oct  9 10:02:39 compute-1 nova_compute[162974]:      </source>
Oct  9 10:02:39 compute-1 nova_compute[162974]:      <auth username="openstack">
Oct  9 10:02:39 compute-1 nova_compute[162974]:        <secret type="ceph" uuid="286f8bf0-da72-5823-9a4e-ac4457d9e609"/>
Oct  9 10:02:39 compute-1 nova_compute[162974]:      </auth>
Oct  9 10:02:39 compute-1 nova_compute[162974]:      <target dev="sda" bus="sata"/>
Oct  9 10:02:39 compute-1 nova_compute[162974]:    </disk>
Oct  9 10:02:39 compute-1 nova_compute[162974]:    <interface type="ethernet">
Oct  9 10:02:39 compute-1 nova_compute[162974]:      <mac address="fa:16:3e:19:24:81"/>
Oct  9 10:02:39 compute-1 nova_compute[162974]:      <model type="virtio"/>
Oct  9 10:02:39 compute-1 nova_compute[162974]:      <driver name="vhost" rx_queue_size="512"/>
Oct  9 10:02:39 compute-1 nova_compute[162974]:      <mtu size="1442"/>
Oct  9 10:02:39 compute-1 nova_compute[162974]:      <target dev="tap52ec2db5-2e"/>
Oct  9 10:02:39 compute-1 nova_compute[162974]:    </interface>
Oct  9 10:02:39 compute-1 nova_compute[162974]:    <serial type="pty">
Oct  9 10:02:39 compute-1 nova_compute[162974]:      <log file="/var/lib/nova/instances/21bbcca2-5cec-4324-9af4-6d2090b6b113/console.log" append="off"/>
Oct  9 10:02:39 compute-1 nova_compute[162974]:    </serial>
Oct  9 10:02:39 compute-1 nova_compute[162974]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  9 10:02:39 compute-1 nova_compute[162974]:    <video>
Oct  9 10:02:39 compute-1 nova_compute[162974]:      <model type="virtio"/>
Oct  9 10:02:39 compute-1 nova_compute[162974]:    </video>
Oct  9 10:02:39 compute-1 nova_compute[162974]:    <input type="tablet" bus="usb"/>
Oct  9 10:02:39 compute-1 nova_compute[162974]:    <rng model="virtio">
Oct  9 10:02:39 compute-1 nova_compute[162974]:      <backend model="random">/dev/urandom</backend>
Oct  9 10:02:39 compute-1 nova_compute[162974]:    </rng>
Oct  9 10:02:39 compute-1 nova_compute[162974]:    <controller type="pci" model="pcie-root"/>
Oct  9 10:02:39 compute-1 nova_compute[162974]:    <controller type="pci" model="pcie-root-port"/>
Oct  9 10:02:39 compute-1 nova_compute[162974]:    <controller type="pci" model="pcie-root-port"/>
Oct  9 10:02:39 compute-1 nova_compute[162974]:    <controller type="pci" model="pcie-root-port"/>
Oct  9 10:02:39 compute-1 nova_compute[162974]:    <controller type="pci" model="pcie-root-port"/>
Oct  9 10:02:39 compute-1 nova_compute[162974]:    <controller type="pci" model="pcie-root-port"/>
Oct  9 10:02:39 compute-1 nova_compute[162974]:    <controller type="pci" model="pcie-root-port"/>
Oct  9 10:02:39 compute-1 nova_compute[162974]:    <controller type="pci" model="pcie-root-port"/>
Oct  9 10:02:39 compute-1 nova_compute[162974]:    <controller type="pci" model="pcie-root-port"/>
Oct  9 10:02:39 compute-1 nova_compute[162974]:    <controller type="pci" model="pcie-root-port"/>
Oct  9 10:02:39 compute-1 nova_compute[162974]:    <controller type="pci" model="pcie-root-port"/>
Oct  9 10:02:39 compute-1 nova_compute[162974]:    <controller type="pci" model="pcie-root-port"/>
Oct  9 10:02:39 compute-1 nova_compute[162974]:    <controller type="pci" model="pcie-root-port"/>
Oct  9 10:02:39 compute-1 nova_compute[162974]:    <controller type="pci" model="pcie-root-port"/>
Oct  9 10:02:39 compute-1 nova_compute[162974]:    <controller type="pci" model="pcie-root-port"/>
Oct  9 10:02:39 compute-1 nova_compute[162974]:    <controller type="pci" model="pcie-root-port"/>
Oct  9 10:02:39 compute-1 nova_compute[162974]:    <controller type="pci" model="pcie-root-port"/>
Oct  9 10:02:39 compute-1 nova_compute[162974]:    <controller type="pci" model="pcie-root-port"/>
Oct  9 10:02:39 compute-1 nova_compute[162974]:    <controller type="pci" model="pcie-root-port"/>
Oct  9 10:02:39 compute-1 nova_compute[162974]:    <controller type="pci" model="pcie-root-port"/>
Oct  9 10:02:39 compute-1 nova_compute[162974]:    <controller type="pci" model="pcie-root-port"/>
Oct  9 10:02:39 compute-1 nova_compute[162974]:    <controller type="pci" model="pcie-root-port"/>
Oct  9 10:02:39 compute-1 nova_compute[162974]:    <controller type="pci" model="pcie-root-port"/>
Oct  9 10:02:39 compute-1 nova_compute[162974]:    <controller type="pci" model="pcie-root-port"/>
Oct  9 10:02:39 compute-1 nova_compute[162974]:    <controller type="pci" model="pcie-root-port"/>
Oct  9 10:02:39 compute-1 nova_compute[162974]:    <controller type="usb" index="0"/>
Oct  9 10:02:39 compute-1 nova_compute[162974]:    <memballoon model="virtio">
Oct  9 10:02:39 compute-1 nova_compute[162974]:      <stats period="10"/>
Oct  9 10:02:39 compute-1 nova_compute[162974]:    </memballoon>
Oct  9 10:02:39 compute-1 nova_compute[162974]:  </devices>
Oct  9 10:02:39 compute-1 nova_compute[162974]: </domain>
Oct  9 10:02:39 compute-1 nova_compute[162974]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct  9 10:02:39 compute-1 nova_compute[162974]: 2025-10-09 10:02:39.982 2 DEBUG nova.compute.manager [None req-0421d1cf-1a66-47db-b6f9-653f371b7629 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: 21bbcca2-5cec-4324-9af4-6d2090b6b113] Preparing to wait for external event network-vif-plugged-52ec2db5-2e22-45a7-92ee-f0e360776c10 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct  9 10:02:39 compute-1 nova_compute[162974]: 2025-10-09 10:02:39.982 2 DEBUG oslo_concurrency.lockutils [None req-0421d1cf-1a66-47db-b6f9-653f371b7629 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Acquiring lock "21bbcca2-5cec-4324-9af4-6d2090b6b113-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  9 10:02:39 compute-1 nova_compute[162974]: 2025-10-09 10:02:39.982 2 DEBUG oslo_concurrency.lockutils [None req-0421d1cf-1a66-47db-b6f9-653f371b7629 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Lock "21bbcca2-5cec-4324-9af4-6d2090b6b113-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  9 10:02:39 compute-1 nova_compute[162974]: 2025-10-09 10:02:39.983 2 DEBUG oslo_concurrency.lockutils [None req-0421d1cf-1a66-47db-b6f9-653f371b7629 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Lock "21bbcca2-5cec-4324-9af4-6d2090b6b113-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  9 10:02:39 compute-1 nova_compute[162974]: 2025-10-09 10:02:39.983 2 DEBUG nova.virt.libvirt.vif [None req-0421d1cf-1a66-47db-b6f9-653f371b7629 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-09T10:02:34Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-670315443',display_name='tempest-TestNetworkBasicOps-server-670315443',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-670315443',id=11,image_ref='9546778e-959c-466e-9bef-81ace5bd1cc5',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBMC7xAI/YYK+cbn0PHRoxiiahdIQdKccwfERXZSRnLEKnS9i37SYurywRQCZNQPHgGjlY2G9Hgc0qmCz+iCo4fLyxnirlBRGL3WmP1CDMLNiBavqZTIOedAyGcrchrWbVA==',key_name='tempest-TestNetworkBasicOps-462284814',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='c69d102fb5504f48809f5fc47f1cb831',ramdisk_id='',reservation_id='r-wq2l0ql1',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='9546778e-959c-466e-9bef-81ace5bd1cc5',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-74406332',owner_user_name='tempest-TestNetworkBasicOps-74406332-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-09T10:02:35Z,user_data=None,user_id='2351e05157514d1995a1ea4151d12fee',uuid=21bbcca2-5cec-4324-9af4-6d2090b6b113,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "52ec2db5-2e22-45a7-92ee-f0e360776c10", "address": "fa:16:3e:19:24:81", "network": {"id": "7e36da7d-913d-4101-a7c2-e1698abf35be", "bridge": "br-int", "label": "tempest-network-smoke--21347962", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c69d102fb5504f48809f5fc47f1cb831", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap52ec2db5-2e", "ovs_interfaceid": "52ec2db5-2e22-45a7-92ee-f0e360776c10", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct  9 10:02:39 compute-1 nova_compute[162974]: 2025-10-09 10:02:39.984 2 DEBUG nova.network.os_vif_util [None req-0421d1cf-1a66-47db-b6f9-653f371b7629 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Converting VIF {"id": "52ec2db5-2e22-45a7-92ee-f0e360776c10", "address": "fa:16:3e:19:24:81", "network": {"id": "7e36da7d-913d-4101-a7c2-e1698abf35be", "bridge": "br-int", "label": "tempest-network-smoke--21347962", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c69d102fb5504f48809f5fc47f1cb831", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap52ec2db5-2e", "ovs_interfaceid": "52ec2db5-2e22-45a7-92ee-f0e360776c10", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  9 10:02:39 compute-1 nova_compute[162974]: 2025-10-09 10:02:39.984 2 DEBUG nova.network.os_vif_util [None req-0421d1cf-1a66-47db-b6f9-653f371b7629 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:19:24:81,bridge_name='br-int',has_traffic_filtering=True,id=52ec2db5-2e22-45a7-92ee-f0e360776c10,network=Network(7e36da7d-913d-4101-a7c2-e1698abf35be),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap52ec2db5-2e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  9 10:02:39 compute-1 nova_compute[162974]: 2025-10-09 10:02:39.985 2 DEBUG os_vif [None req-0421d1cf-1a66-47db-b6f9-653f371b7629 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:19:24:81,bridge_name='br-int',has_traffic_filtering=True,id=52ec2db5-2e22-45a7-92ee-f0e360776c10,network=Network(7e36da7d-913d-4101-a7c2-e1698abf35be),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap52ec2db5-2e') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  9 10:02:39 compute-1 nova_compute[162974]: 2025-10-09 10:02:39.985 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:02:39 compute-1 nova_compute[162974]: 2025-10-09 10:02:39.986 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  9 10:02:39 compute-1 nova_compute[162974]: 2025-10-09 10:02:39.986 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  9 10:02:39 compute-1 nova_compute[162974]: 2025-10-09 10:02:39.991 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:02:39 compute-1 nova_compute[162974]: 2025-10-09 10:02:39.991 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap52ec2db5-2e, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  9 10:02:39 compute-1 nova_compute[162974]: 2025-10-09 10:02:39.992 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap52ec2db5-2e, col_values=(('external_ids', {'iface-id': '52ec2db5-2e22-45a7-92ee-f0e360776c10', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:19:24:81', 'vm-uuid': '21bbcca2-5cec-4324-9af4-6d2090b6b113'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  9 10:02:39 compute-1 nova_compute[162974]: 2025-10-09 10:02:39.993 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:02:39 compute-1 NetworkManager[982]: <info>  [1760004159.9946] manager: (tap52ec2db5-2e): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/61)
Oct  9 10:02:39 compute-1 nova_compute[162974]: 2025-10-09 10:02:39.996 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  9 10:02:40 compute-1 nova_compute[162974]: 2025-10-09 10:02:40.001 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:02:40 compute-1 nova_compute[162974]: 2025-10-09 10:02:40.002 2 INFO os_vif [None req-0421d1cf-1a66-47db-b6f9-653f371b7629 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:19:24:81,bridge_name='br-int',has_traffic_filtering=True,id=52ec2db5-2e22-45a7-92ee-f0e360776c10,network=Network(7e36da7d-913d-4101-a7c2-e1698abf35be),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap52ec2db5-2e')#033[00m
Oct  9 10:02:40 compute-1 nova_compute[162974]: 2025-10-09 10:02:40.035 2 DEBUG nova.virt.libvirt.driver [None req-0421d1cf-1a66-47db-b6f9-653f371b7629 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  9 10:02:40 compute-1 nova_compute[162974]: 2025-10-09 10:02:40.036 2 DEBUG nova.virt.libvirt.driver [None req-0421d1cf-1a66-47db-b6f9-653f371b7629 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  9 10:02:40 compute-1 nova_compute[162974]: 2025-10-09 10:02:40.036 2 DEBUG nova.virt.libvirt.driver [None req-0421d1cf-1a66-47db-b6f9-653f371b7629 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] No VIF found with MAC fa:16:3e:19:24:81, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  9 10:02:40 compute-1 nova_compute[162974]: 2025-10-09 10:02:40.036 2 INFO nova.virt.libvirt.driver [None req-0421d1cf-1a66-47db-b6f9-653f371b7629 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: 21bbcca2-5cec-4324-9af4-6d2090b6b113] Using config drive#033[00m
Oct  9 10:02:40 compute-1 nova_compute[162974]: 2025-10-09 10:02:40.055 2 DEBUG nova.storage.rbd_utils [None req-0421d1cf-1a66-47db-b6f9-653f371b7629 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] rbd image 21bbcca2-5cec-4324-9af4-6d2090b6b113_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  9 10:02:40 compute-1 nova_compute[162974]: 2025-10-09 10:02:40.164 2 DEBUG nova.network.neutron [req-5f81ecd8-2d00-4a0f-9169-98ac7f8ea3fb req-a4fafe2e-da1b-4fb7-87e2-8e8ec37fc676 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: 21bbcca2-5cec-4324-9af4-6d2090b6b113] Updated VIF entry in instance network info cache for port 52ec2db5-2e22-45a7-92ee-f0e360776c10. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  9 10:02:40 compute-1 nova_compute[162974]: 2025-10-09 10:02:40.165 2 DEBUG nova.network.neutron [req-5f81ecd8-2d00-4a0f-9169-98ac7f8ea3fb req-a4fafe2e-da1b-4fb7-87e2-8e8ec37fc676 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: 21bbcca2-5cec-4324-9af4-6d2090b6b113] Updating instance_info_cache with network_info: [{"id": "52ec2db5-2e22-45a7-92ee-f0e360776c10", "address": "fa:16:3e:19:24:81", "network": {"id": "7e36da7d-913d-4101-a7c2-e1698abf35be", "bridge": "br-int", "label": "tempest-network-smoke--21347962", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c69d102fb5504f48809f5fc47f1cb831", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap52ec2db5-2e", "ovs_interfaceid": "52ec2db5-2e22-45a7-92ee-f0e360776c10", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  9 10:02:40 compute-1 nova_compute[162974]: 2025-10-09 10:02:40.181 2 DEBUG oslo_concurrency.lockutils [req-5f81ecd8-2d00-4a0f-9169-98ac7f8ea3fb req-a4fafe2e-da1b-4fb7-87e2-8e8ec37fc676 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] Releasing lock "refresh_cache-21bbcca2-5cec-4324-9af4-6d2090b6b113" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  9 10:02:40 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:02:40 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000011s ======
Oct  9 10:02:40 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:02:40.330 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct  9 10:02:40 compute-1 nova_compute[162974]: 2025-10-09 10:02:40.367 2 INFO nova.virt.libvirt.driver [None req-0421d1cf-1a66-47db-b6f9-653f371b7629 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: 21bbcca2-5cec-4324-9af4-6d2090b6b113] Creating config drive at /var/lib/nova/instances/21bbcca2-5cec-4324-9af4-6d2090b6b113/disk.config#033[00m
Oct  9 10:02:40 compute-1 nova_compute[162974]: 2025-10-09 10:02:40.371 2 DEBUG oslo_concurrency.processutils [None req-0421d1cf-1a66-47db-b6f9-653f371b7629 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/21bbcca2-5cec-4324-9af4-6d2090b6b113/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp9wi_r0wp execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  9 10:02:40 compute-1 nova_compute[162974]: 2025-10-09 10:02:40.499 2 DEBUG oslo_concurrency.processutils [None req-0421d1cf-1a66-47db-b6f9-653f371b7629 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/21bbcca2-5cec-4324-9af4-6d2090b6b113/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp9wi_r0wp" returned: 0 in 0.127s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  9 10:02:40 compute-1 nova_compute[162974]: 2025-10-09 10:02:40.525 2 DEBUG nova.storage.rbd_utils [None req-0421d1cf-1a66-47db-b6f9-653f371b7629 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] rbd image 21bbcca2-5cec-4324-9af4-6d2090b6b113_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  9 10:02:40 compute-1 nova_compute[162974]: 2025-10-09 10:02:40.529 2 DEBUG oslo_concurrency.processutils [None req-0421d1cf-1a66-47db-b6f9-653f371b7629 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/21bbcca2-5cec-4324-9af4-6d2090b6b113/disk.config 21bbcca2-5cec-4324-9af4-6d2090b6b113_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  9 10:02:40 compute-1 nova_compute[162974]: 2025-10-09 10:02:40.639 2 DEBUG oslo_concurrency.processutils [None req-0421d1cf-1a66-47db-b6f9-653f371b7629 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/21bbcca2-5cec-4324-9af4-6d2090b6b113/disk.config 21bbcca2-5cec-4324-9af4-6d2090b6b113_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.109s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  9 10:02:40 compute-1 nova_compute[162974]: 2025-10-09 10:02:40.640 2 INFO nova.virt.libvirt.driver [None req-0421d1cf-1a66-47db-b6f9-653f371b7629 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: 21bbcca2-5cec-4324-9af4-6d2090b6b113] Deleting local config drive /var/lib/nova/instances/21bbcca2-5cec-4324-9af4-6d2090b6b113/disk.config because it was imported into RBD.#033[00m
Oct  9 10:02:40 compute-1 kernel: tap52ec2db5-2e: entered promiscuous mode
Oct  9 10:02:40 compute-1 NetworkManager[982]: <info>  [1760004160.6862] manager: (tap52ec2db5-2e): new Tun device (/org/freedesktop/NetworkManager/Devices/62)
Oct  9 10:02:40 compute-1 ovn_controller[62080]: 2025-10-09T10:02:40Z|00085|binding|INFO|Claiming lport 52ec2db5-2e22-45a7-92ee-f0e360776c10 for this chassis.
Oct  9 10:02:40 compute-1 ovn_controller[62080]: 2025-10-09T10:02:40Z|00086|binding|INFO|52ec2db5-2e22-45a7-92ee-f0e360776c10: Claiming fa:16:3e:19:24:81 10.100.0.8
Oct  9 10:02:40 compute-1 nova_compute[162974]: 2025-10-09 10:02:40.689 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:02:40 compute-1 ovn_metadata_agent[71054]: 2025-10-09 10:02:40.698 71059 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:19:24:81 10.100.0.8'], port_security=['fa:16:3e:19:24:81 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '21bbcca2-5cec-4324-9af4-6d2090b6b113', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7e36da7d-913d-4101-a7c2-e1698abf35be', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c69d102fb5504f48809f5fc47f1cb831', 'neutron:revision_number': '2', 'neutron:security_group_ids': '9cb722ba-1853-4a45-bd00-f5690460099e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e49a2e1f-bde0-4698-a31c-366cd4b00fe5, chassis=[<ovs.db.idl.Row object at 0x7fcc797b4850>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcc797b4850>], logical_port=52ec2db5-2e22-45a7-92ee-f0e360776c10) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  9 10:02:40 compute-1 ovn_metadata_agent[71054]: 2025-10-09 10:02:40.699 71059 INFO neutron.agent.ovn.metadata.agent [-] Port 52ec2db5-2e22-45a7-92ee-f0e360776c10 in datapath 7e36da7d-913d-4101-a7c2-e1698abf35be bound to our chassis#033[00m
Oct  9 10:02:40 compute-1 ovn_metadata_agent[71054]: 2025-10-09 10:02:40.700 71059 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 7e36da7d-913d-4101-a7c2-e1698abf35be#033[00m
Oct  9 10:02:40 compute-1 ovn_metadata_agent[71054]: 2025-10-09 10:02:40.711 165637 DEBUG oslo.privsep.daemon [-] privsep: reply[72a6ca2d-b74f-447a-b11d-942b3100ed7c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  9 10:02:40 compute-1 ovn_metadata_agent[71054]: 2025-10-09 10:02:40.712 71059 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap7e36da7d-91 in ovnmeta-7e36da7d-913d-4101-a7c2-e1698abf35be namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct  9 10:02:40 compute-1 ovn_metadata_agent[71054]: 2025-10-09 10:02:40.713 165637 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap7e36da7d-90 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct  9 10:02:40 compute-1 ovn_metadata_agent[71054]: 2025-10-09 10:02:40.714 165637 DEBUG oslo.privsep.daemon [-] privsep: reply[4078a9ae-488a-4de9-ba48-82aa0ed3c5c7]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  9 10:02:40 compute-1 ovn_metadata_agent[71054]: 2025-10-09 10:02:40.714 165637 DEBUG oslo.privsep.daemon [-] privsep: reply[31de3aac-71e9-4cdd-b928-f90bf67c0748]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  9 10:02:40 compute-1 systemd-udevd[171711]: Network interface NamePolicy= disabled on kernel command line.
Oct  9 10:02:40 compute-1 systemd-machined[120683]: New machine qemu-6-instance-0000000b.
Oct  9 10:02:40 compute-1 systemd[1]: Started Virtual Machine qemu-6-instance-0000000b.
Oct  9 10:02:40 compute-1 ovn_metadata_agent[71054]: 2025-10-09 10:02:40.726 71273 DEBUG oslo.privsep.daemon [-] privsep: reply[485bfd74-9c90-4643-b2a2-aeba376c0639]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  9 10:02:40 compute-1 NetworkManager[982]: <info>  [1760004160.7311] device (tap52ec2db5-2e): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  9 10:02:40 compute-1 NetworkManager[982]: <info>  [1760004160.7317] device (tap52ec2db5-2e): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  9 10:02:40 compute-1 ovn_metadata_agent[71054]: 2025-10-09 10:02:40.748 165637 DEBUG oslo.privsep.daemon [-] privsep: reply[88c361d3-08b1-4b86-b0fd-d907b18b0c50]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  9 10:02:40 compute-1 nova_compute[162974]: 2025-10-09 10:02:40.760 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:02:40 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 10:02:40 compute-1 ovn_controller[62080]: 2025-10-09T10:02:40Z|00087|binding|INFO|Setting lport 52ec2db5-2e22-45a7-92ee-f0e360776c10 ovn-installed in OVS
Oct  9 10:02:40 compute-1 ovn_controller[62080]: 2025-10-09T10:02:40Z|00088|binding|INFO|Setting lport 52ec2db5-2e22-45a7-92ee-f0e360776c10 up in Southbound
Oct  9 10:02:40 compute-1 nova_compute[162974]: 2025-10-09 10:02:40.770 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:02:40 compute-1 ovn_metadata_agent[71054]: 2025-10-09 10:02:40.774 165694 DEBUG oslo.privsep.daemon [-] privsep: reply[ced34f8a-932e-41c1-87e9-4adf3476dccd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  9 10:02:40 compute-1 NetworkManager[982]: <info>  [1760004160.7797] manager: (tap7e36da7d-90): new Veth device (/org/freedesktop/NetworkManager/Devices/63)
Oct  9 10:02:40 compute-1 ovn_metadata_agent[71054]: 2025-10-09 10:02:40.779 165637 DEBUG oslo.privsep.daemon [-] privsep: reply[20620a93-f21e-4225-9297-fae8ef98145c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  9 10:02:40 compute-1 systemd-udevd[171714]: Network interface NamePolicy= disabled on kernel command line.
Oct  9 10:02:40 compute-1 ovn_metadata_agent[71054]: 2025-10-09 10:02:40.799 165694 DEBUG oslo.privsep.daemon [-] privsep: reply[f79b566a-3589-47c4-b9c4-525ad79c46fe]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  9 10:02:40 compute-1 ovn_metadata_agent[71054]: 2025-10-09 10:02:40.801 165694 DEBUG oslo.privsep.daemon [-] privsep: reply[39e632de-055a-4044-8bf7-1478780bd490]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  9 10:02:40 compute-1 NetworkManager[982]: <info>  [1760004160.8150] device (tap7e36da7d-90): carrier: link connected
Oct  9 10:02:40 compute-1 ovn_metadata_agent[71054]: 2025-10-09 10:02:40.818 165694 DEBUG oslo.privsep.daemon [-] privsep: reply[753562fc-4596-4c5c-bc7f-3b0c3b856a3c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  9 10:02:40 compute-1 ovn_metadata_agent[71054]: 2025-10-09 10:02:40.832 165637 DEBUG oslo.privsep.daemon [-] privsep: reply[1c4a77ae-f335-48ae-9ece-6831863974a8]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap7e36da7d-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 4], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 4], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:d9:a3:43'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 37], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 184438, 'reachable_time': 28208, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 171735, 'error': None, 'target': 'ovnmeta-7e36da7d-913d-4101-a7c2-e1698abf35be', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  9 10:02:40 compute-1 ovn_metadata_agent[71054]: 2025-10-09 10:02:40.847 165637 DEBUG oslo.privsep.daemon [-] privsep: reply[1c569013-2f48-4449-b55b-303c33b6a8ee]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fed9:a343'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 184438, 'tstamp': 184438}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 171736, 'error': None, 'target': 'ovnmeta-7e36da7d-913d-4101-a7c2-e1698abf35be', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  9 10:02:40 compute-1 ovn_metadata_agent[71054]: 2025-10-09 10:02:40.856 165637 DEBUG oslo.privsep.daemon [-] privsep: reply[a95cd2b2-b192-486d-a443-c1369aaf2146]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap7e36da7d-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 4], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 4], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:d9:a3:43'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 37], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 184438, 'reachable_time': 28208, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 171737, 'error': None, 'target': 'ovnmeta-7e36da7d-913d-4101-a7c2-e1698abf35be', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  9 10:02:40 compute-1 ovn_metadata_agent[71054]: 2025-10-09 10:02:40.875 165637 DEBUG oslo.privsep.daemon [-] privsep: reply[871d9e1d-013c-43b2-850d-0796935cd216]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  9 10:02:40 compute-1 ovn_metadata_agent[71054]: 2025-10-09 10:02:40.908 165637 DEBUG oslo.privsep.daemon [-] privsep: reply[052f7b86-3d3e-410b-af33-2fcdd679e306]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  9 10:02:40 compute-1 ovn_metadata_agent[71054]: 2025-10-09 10:02:40.909 71059 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7e36da7d-90, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  9 10:02:40 compute-1 ovn_metadata_agent[71054]: 2025-10-09 10:02:40.910 71059 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  9 10:02:40 compute-1 ovn_metadata_agent[71054]: 2025-10-09 10:02:40.911 71059 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap7e36da7d-90, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  9 10:02:40 compute-1 kernel: tap7e36da7d-90: entered promiscuous mode
Oct  9 10:02:40 compute-1 nova_compute[162974]: 2025-10-09 10:02:40.913 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:02:40 compute-1 NetworkManager[982]: <info>  [1760004160.9142] manager: (tap7e36da7d-90): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/64)
Oct  9 10:02:40 compute-1 ovn_metadata_agent[71054]: 2025-10-09 10:02:40.916 71059 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap7e36da7d-90, col_values=(('external_ids', {'iface-id': 'e74168ad-5871-4088-b5cd-db351251a793'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  9 10:02:40 compute-1 nova_compute[162974]: 2025-10-09 10:02:40.917 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:02:40 compute-1 ovn_controller[62080]: 2025-10-09T10:02:40Z|00089|binding|INFO|Releasing lport e74168ad-5871-4088-b5cd-db351251a793 from this chassis (sb_readonly=0)
Oct  9 10:02:40 compute-1 nova_compute[162974]: 2025-10-09 10:02:40.918 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:02:40 compute-1 ovn_metadata_agent[71054]: 2025-10-09 10:02:40.921 71059 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/7e36da7d-913d-4101-a7c2-e1698abf35be.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/7e36da7d-913d-4101-a7c2-e1698abf35be.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct  9 10:02:40 compute-1 ovn_metadata_agent[71054]: 2025-10-09 10:02:40.921 165637 DEBUG oslo.privsep.daemon [-] privsep: reply[fab09887-f639-45ba-b217-09f6c9d75175]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  9 10:02:40 compute-1 ovn_metadata_agent[71054]: 2025-10-09 10:02:40.922 71059 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct  9 10:02:40 compute-1 ovn_metadata_agent[71054]: global
Oct  9 10:02:40 compute-1 ovn_metadata_agent[71054]:    log         /dev/log local0 debug
Oct  9 10:02:40 compute-1 ovn_metadata_agent[71054]:    log-tag     haproxy-metadata-proxy-7e36da7d-913d-4101-a7c2-e1698abf35be
Oct  9 10:02:40 compute-1 ovn_metadata_agent[71054]:    user        root
Oct  9 10:02:40 compute-1 ovn_metadata_agent[71054]:    group       root
Oct  9 10:02:40 compute-1 ovn_metadata_agent[71054]:    maxconn     1024
Oct  9 10:02:40 compute-1 ovn_metadata_agent[71054]:    pidfile     /var/lib/neutron/external/pids/7e36da7d-913d-4101-a7c2-e1698abf35be.pid.haproxy
Oct  9 10:02:40 compute-1 ovn_metadata_agent[71054]:    daemon
Oct  9 10:02:40 compute-1 ovn_metadata_agent[71054]: 
Oct  9 10:02:40 compute-1 ovn_metadata_agent[71054]: defaults
Oct  9 10:02:40 compute-1 ovn_metadata_agent[71054]:    log global
Oct  9 10:02:40 compute-1 ovn_metadata_agent[71054]:    mode http
Oct  9 10:02:40 compute-1 ovn_metadata_agent[71054]:    option httplog
Oct  9 10:02:40 compute-1 ovn_metadata_agent[71054]:    option dontlognull
Oct  9 10:02:40 compute-1 ovn_metadata_agent[71054]:    option http-server-close
Oct  9 10:02:40 compute-1 ovn_metadata_agent[71054]:    option forwardfor
Oct  9 10:02:40 compute-1 ovn_metadata_agent[71054]:    retries                 3
Oct  9 10:02:40 compute-1 ovn_metadata_agent[71054]:    timeout http-request    30s
Oct  9 10:02:40 compute-1 ovn_metadata_agent[71054]:    timeout connect         30s
Oct  9 10:02:40 compute-1 ovn_metadata_agent[71054]:    timeout client          32s
Oct  9 10:02:40 compute-1 ovn_metadata_agent[71054]:    timeout server          32s
Oct  9 10:02:40 compute-1 ovn_metadata_agent[71054]:    timeout http-keep-alive 30s
Oct  9 10:02:40 compute-1 ovn_metadata_agent[71054]: 
Oct  9 10:02:40 compute-1 ovn_metadata_agent[71054]: 
Oct  9 10:02:40 compute-1 ovn_metadata_agent[71054]: listen listener
Oct  9 10:02:40 compute-1 ovn_metadata_agent[71054]:    bind 169.254.169.254:80
Oct  9 10:02:40 compute-1 ovn_metadata_agent[71054]:    server metadata /var/lib/neutron/metadata_proxy
Oct  9 10:02:40 compute-1 ovn_metadata_agent[71054]:    http-request add-header X-OVN-Network-ID 7e36da7d-913d-4101-a7c2-e1698abf35be
Oct  9 10:02:40 compute-1 ovn_metadata_agent[71054]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct  9 10:02:40 compute-1 ovn_metadata_agent[71054]: 2025-10-09 10:02:40.923 71059 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-7e36da7d-913d-4101-a7c2-e1698abf35be', 'env', 'PROCESS_TAG=haproxy-7e36da7d-913d-4101-a7c2-e1698abf35be', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/7e36da7d-913d-4101-a7c2-e1698abf35be.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct  9 10:02:40 compute-1 nova_compute[162974]: 2025-10-09 10:02:40.932 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:02:40 compute-1 nova_compute[162974]: 2025-10-09 10:02:40.957 2 DEBUG nova.compute.manager [req-b98d5d52-6832-4147-8438-9a67e468898f req-75133154-314f-4b22-ba63-6b714c9a9390 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: 21bbcca2-5cec-4324-9af4-6d2090b6b113] Received event network-vif-plugged-52ec2db5-2e22-45a7-92ee-f0e360776c10 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  9 10:02:40 compute-1 nova_compute[162974]: 2025-10-09 10:02:40.958 2 DEBUG oslo_concurrency.lockutils [req-b98d5d52-6832-4147-8438-9a67e468898f req-75133154-314f-4b22-ba63-6b714c9a9390 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] Acquiring lock "21bbcca2-5cec-4324-9af4-6d2090b6b113-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  9 10:02:40 compute-1 nova_compute[162974]: 2025-10-09 10:02:40.958 2 DEBUG oslo_concurrency.lockutils [req-b98d5d52-6832-4147-8438-9a67e468898f req-75133154-314f-4b22-ba63-6b714c9a9390 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] Lock "21bbcca2-5cec-4324-9af4-6d2090b6b113-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  9 10:02:40 compute-1 nova_compute[162974]: 2025-10-09 10:02:40.958 2 DEBUG oslo_concurrency.lockutils [req-b98d5d52-6832-4147-8438-9a67e468898f req-75133154-314f-4b22-ba63-6b714c9a9390 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] Lock "21bbcca2-5cec-4324-9af4-6d2090b6b113-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  9 10:02:40 compute-1 nova_compute[162974]: 2025-10-09 10:02:40.959 2 DEBUG nova.compute.manager [req-b98d5d52-6832-4147-8438-9a67e468898f req-75133154-314f-4b22-ba63-6b714c9a9390 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: 21bbcca2-5cec-4324-9af4-6d2090b6b113] Processing event network-vif-plugged-52ec2db5-2e22-45a7-92ee-f0e360776c10 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct  9 10:02:41 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:02:41 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 10:02:41 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:02:41.154 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 10:02:41 compute-1 podman[171808]: 2025-10-09 10:02:41.236206895 +0000 UTC m=+0.043427931 container create 1d8c119dbd0dc9dd6bf423c0fb8248e2c3d999a3cf60f90d9a00ca713357add2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-7e36da7d-913d-4101-a7c2-e1698abf35be, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Oct  9 10:02:41 compute-1 systemd[1]: Started libpod-conmon-1d8c119dbd0dc9dd6bf423c0fb8248e2c3d999a3cf60f90d9a00ca713357add2.scope.
Oct  9 10:02:41 compute-1 systemd[1]: Started libcrun container.
Oct  9 10:02:41 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c62e9def9ad6686c7da16f6d7f5c0040e366e3403846fd843dae5358e993db42/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct  9 10:02:41 compute-1 podman[171808]: 2025-10-09 10:02:41.214399343 +0000 UTC m=+0.021620390 image pull 26280da617d52ac64ac1fa9a18a315d65ac237c1373028f8064008a821dbfd8d quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7
Oct  9 10:02:41 compute-1 podman[171808]: 2025-10-09 10:02:41.310808504 +0000 UTC m=+0.118029541 container init 1d8c119dbd0dc9dd6bf423c0fb8248e2c3d999a3cf60f90d9a00ca713357add2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-7e36da7d-913d-4101-a7c2-e1698abf35be, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.vendor=CentOS)
Oct  9 10:02:41 compute-1 podman[171808]: 2025-10-09 10:02:41.315175457 +0000 UTC m=+0.122396495 container start 1d8c119dbd0dc9dd6bf423c0fb8248e2c3d999a3cf60f90d9a00ca713357add2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-7e36da7d-913d-4101-a7c2-e1698abf35be, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  9 10:02:41 compute-1 neutron-haproxy-ovnmeta-7e36da7d-913d-4101-a7c2-e1698abf35be[171819]: [NOTICE]   (171823) : New worker (171825) forked
Oct  9 10:02:41 compute-1 neutron-haproxy-ovnmeta-7e36da7d-913d-4101-a7c2-e1698abf35be[171819]: [NOTICE]   (171823) : Loading success.
Oct  9 10:02:41 compute-1 nova_compute[162974]: 2025-10-09 10:02:41.390 2 DEBUG nova.virt.driver [None req-433ad811-a058-4508-a429-c5f40f506b5b - - - - - -] Emitting event <LifecycleEvent: 1760004161.389727, 21bbcca2-5cec-4324-9af4-6d2090b6b113 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  9 10:02:41 compute-1 nova_compute[162974]: 2025-10-09 10:02:41.390 2 INFO nova.compute.manager [None req-433ad811-a058-4508-a429-c5f40f506b5b - - - - - -] [instance: 21bbcca2-5cec-4324-9af4-6d2090b6b113] VM Started (Lifecycle Event)#033[00m
Oct  9 10:02:41 compute-1 nova_compute[162974]: 2025-10-09 10:02:41.392 2 DEBUG nova.compute.manager [None req-0421d1cf-1a66-47db-b6f9-653f371b7629 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: 21bbcca2-5cec-4324-9af4-6d2090b6b113] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct  9 10:02:41 compute-1 nova_compute[162974]: 2025-10-09 10:02:41.394 2 DEBUG nova.virt.libvirt.driver [None req-0421d1cf-1a66-47db-b6f9-653f371b7629 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: 21bbcca2-5cec-4324-9af4-6d2090b6b113] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct  9 10:02:41 compute-1 nova_compute[162974]: 2025-10-09 10:02:41.396 2 INFO nova.virt.libvirt.driver [-] [instance: 21bbcca2-5cec-4324-9af4-6d2090b6b113] Instance spawned successfully.#033[00m
Oct  9 10:02:41 compute-1 nova_compute[162974]: 2025-10-09 10:02:41.396 2 DEBUG nova.virt.libvirt.driver [None req-0421d1cf-1a66-47db-b6f9-653f371b7629 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: 21bbcca2-5cec-4324-9af4-6d2090b6b113] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct  9 10:02:41 compute-1 nova_compute[162974]: 2025-10-09 10:02:41.411 2 DEBUG nova.compute.manager [None req-433ad811-a058-4508-a429-c5f40f506b5b - - - - - -] [instance: 21bbcca2-5cec-4324-9af4-6d2090b6b113] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  9 10:02:41 compute-1 nova_compute[162974]: 2025-10-09 10:02:41.414 2 DEBUG nova.compute.manager [None req-433ad811-a058-4508-a429-c5f40f506b5b - - - - - -] [instance: 21bbcca2-5cec-4324-9af4-6d2090b6b113] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  9 10:02:41 compute-1 nova_compute[162974]: 2025-10-09 10:02:41.420 2 DEBUG nova.virt.libvirt.driver [None req-0421d1cf-1a66-47db-b6f9-653f371b7629 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: 21bbcca2-5cec-4324-9af4-6d2090b6b113] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  9 10:02:41 compute-1 nova_compute[162974]: 2025-10-09 10:02:41.420 2 DEBUG nova.virt.libvirt.driver [None req-0421d1cf-1a66-47db-b6f9-653f371b7629 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: 21bbcca2-5cec-4324-9af4-6d2090b6b113] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  9 10:02:41 compute-1 nova_compute[162974]: 2025-10-09 10:02:41.420 2 DEBUG nova.virt.libvirt.driver [None req-0421d1cf-1a66-47db-b6f9-653f371b7629 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: 21bbcca2-5cec-4324-9af4-6d2090b6b113] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  9 10:02:41 compute-1 nova_compute[162974]: 2025-10-09 10:02:41.421 2 DEBUG nova.virt.libvirt.driver [None req-0421d1cf-1a66-47db-b6f9-653f371b7629 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: 21bbcca2-5cec-4324-9af4-6d2090b6b113] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  9 10:02:41 compute-1 nova_compute[162974]: 2025-10-09 10:02:41.421 2 DEBUG nova.virt.libvirt.driver [None req-0421d1cf-1a66-47db-b6f9-653f371b7629 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: 21bbcca2-5cec-4324-9af4-6d2090b6b113] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  9 10:02:41 compute-1 nova_compute[162974]: 2025-10-09 10:02:41.421 2 DEBUG nova.virt.libvirt.driver [None req-0421d1cf-1a66-47db-b6f9-653f371b7629 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: 21bbcca2-5cec-4324-9af4-6d2090b6b113] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  9 10:02:41 compute-1 nova_compute[162974]: 2025-10-09 10:02:41.427 2 INFO nova.compute.manager [None req-433ad811-a058-4508-a429-c5f40f506b5b - - - - - -] [instance: 21bbcca2-5cec-4324-9af4-6d2090b6b113] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  9 10:02:41 compute-1 nova_compute[162974]: 2025-10-09 10:02:41.427 2 DEBUG nova.virt.driver [None req-433ad811-a058-4508-a429-c5f40f506b5b - - - - - -] Emitting event <LifecycleEvent: 1760004161.3898058, 21bbcca2-5cec-4324-9af4-6d2090b6b113 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  9 10:02:41 compute-1 nova_compute[162974]: 2025-10-09 10:02:41.427 2 INFO nova.compute.manager [None req-433ad811-a058-4508-a429-c5f40f506b5b - - - - - -] [instance: 21bbcca2-5cec-4324-9af4-6d2090b6b113] VM Paused (Lifecycle Event)#033[00m
Oct  9 10:02:41 compute-1 nova_compute[162974]: 2025-10-09 10:02:41.443 2 DEBUG nova.compute.manager [None req-433ad811-a058-4508-a429-c5f40f506b5b - - - - - -] [instance: 21bbcca2-5cec-4324-9af4-6d2090b6b113] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  9 10:02:41 compute-1 nova_compute[162974]: 2025-10-09 10:02:41.444 2 DEBUG nova.virt.driver [None req-433ad811-a058-4508-a429-c5f40f506b5b - - - - - -] Emitting event <LifecycleEvent: 1760004161.3939636, 21bbcca2-5cec-4324-9af4-6d2090b6b113 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  9 10:02:41 compute-1 nova_compute[162974]: 2025-10-09 10:02:41.445 2 INFO nova.compute.manager [None req-433ad811-a058-4508-a429-c5f40f506b5b - - - - - -] [instance: 21bbcca2-5cec-4324-9af4-6d2090b6b113] VM Resumed (Lifecycle Event)#033[00m
Oct  9 10:02:41 compute-1 nova_compute[162974]: 2025-10-09 10:02:41.458 2 DEBUG nova.compute.manager [None req-433ad811-a058-4508-a429-c5f40f506b5b - - - - - -] [instance: 21bbcca2-5cec-4324-9af4-6d2090b6b113] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  9 10:02:41 compute-1 nova_compute[162974]: 2025-10-09 10:02:41.459 2 DEBUG nova.compute.manager [None req-433ad811-a058-4508-a429-c5f40f506b5b - - - - - -] [instance: 21bbcca2-5cec-4324-9af4-6d2090b6b113] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  9 10:02:41 compute-1 nova_compute[162974]: 2025-10-09 10:02:41.465 2 INFO nova.compute.manager [None req-0421d1cf-1a66-47db-b6f9-653f371b7629 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: 21bbcca2-5cec-4324-9af4-6d2090b6b113] Took 5.75 seconds to spawn the instance on the hypervisor.#033[00m
Oct  9 10:02:41 compute-1 nova_compute[162974]: 2025-10-09 10:02:41.465 2 DEBUG nova.compute.manager [None req-0421d1cf-1a66-47db-b6f9-653f371b7629 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: 21bbcca2-5cec-4324-9af4-6d2090b6b113] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  9 10:02:41 compute-1 nova_compute[162974]: 2025-10-09 10:02:41.470 2 INFO nova.compute.manager [None req-433ad811-a058-4508-a429-c5f40f506b5b - - - - - -] [instance: 21bbcca2-5cec-4324-9af4-6d2090b6b113] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  9 10:02:41 compute-1 nova_compute[162974]: 2025-10-09 10:02:41.505 2 INFO nova.compute.manager [None req-0421d1cf-1a66-47db-b6f9-653f371b7629 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: 21bbcca2-5cec-4324-9af4-6d2090b6b113] Took 6.41 seconds to build instance.#033[00m
Oct  9 10:02:41 compute-1 nova_compute[162974]: 2025-10-09 10:02:41.515 2 DEBUG oslo_concurrency.lockutils [None req-0421d1cf-1a66-47db-b6f9-653f371b7629 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Lock "21bbcca2-5cec-4324-9af4-6d2090b6b113" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 6.457s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  9 10:02:42 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:02:42 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:02:42 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:02:42.333 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:02:42 compute-1 nova_compute[162974]: 2025-10-09 10:02:42.800 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:02:43 compute-1 nova_compute[162974]: 2025-10-09 10:02:43.014 2 DEBUG nova.compute.manager [req-bf600a7e-2ced-4646-aeae-38f1303f0eec req-3898323e-d409-4b57-a877-5d51f6d33c00 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: 21bbcca2-5cec-4324-9af4-6d2090b6b113] Received event network-vif-plugged-52ec2db5-2e22-45a7-92ee-f0e360776c10 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  9 10:02:43 compute-1 nova_compute[162974]: 2025-10-09 10:02:43.015 2 DEBUG oslo_concurrency.lockutils [req-bf600a7e-2ced-4646-aeae-38f1303f0eec req-3898323e-d409-4b57-a877-5d51f6d33c00 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] Acquiring lock "21bbcca2-5cec-4324-9af4-6d2090b6b113-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  9 10:02:43 compute-1 nova_compute[162974]: 2025-10-09 10:02:43.015 2 DEBUG oslo_concurrency.lockutils [req-bf600a7e-2ced-4646-aeae-38f1303f0eec req-3898323e-d409-4b57-a877-5d51f6d33c00 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] Lock "21bbcca2-5cec-4324-9af4-6d2090b6b113-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  9 10:02:43 compute-1 nova_compute[162974]: 2025-10-09 10:02:43.015 2 DEBUG oslo_concurrency.lockutils [req-bf600a7e-2ced-4646-aeae-38f1303f0eec req-3898323e-d409-4b57-a877-5d51f6d33c00 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] Lock "21bbcca2-5cec-4324-9af4-6d2090b6b113-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  9 10:02:43 compute-1 nova_compute[162974]: 2025-10-09 10:02:43.015 2 DEBUG nova.compute.manager [req-bf600a7e-2ced-4646-aeae-38f1303f0eec req-3898323e-d409-4b57-a877-5d51f6d33c00 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: 21bbcca2-5cec-4324-9af4-6d2090b6b113] No waiting events found dispatching network-vif-plugged-52ec2db5-2e22-45a7-92ee-f0e360776c10 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  9 10:02:43 compute-1 nova_compute[162974]: 2025-10-09 10:02:43.015 2 WARNING nova.compute.manager [req-bf600a7e-2ced-4646-aeae-38f1303f0eec req-3898323e-d409-4b57-a877-5d51f6d33c00 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: 21bbcca2-5cec-4324-9af4-6d2090b6b113] Received unexpected event network-vif-plugged-52ec2db5-2e22-45a7-92ee-f0e360776c10 for instance with vm_state active and task_state None.#033[00m
Oct  9 10:02:43 compute-1 ovn_controller[62080]: 2025-10-09T10:02:43Z|00090|binding|INFO|Releasing lport e74168ad-5871-4088-b5cd-db351251a793 from this chassis (sb_readonly=0)
Oct  9 10:02:43 compute-1 NetworkManager[982]: <info>  [1760004163.1321] manager: (patch-provnet-ceb5df48-9471-46cc-b494-923d3260d7ae-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/65)
Oct  9 10:02:43 compute-1 NetworkManager[982]: <info>  [1760004163.1328] manager: (patch-br-int-to-provnet-ceb5df48-9471-46cc-b494-923d3260d7ae): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/66)
Oct  9 10:02:43 compute-1 nova_compute[162974]: 2025-10-09 10:02:43.129 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:02:43 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:02:43 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 10:02:43 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:02:43.157 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 10:02:43 compute-1 ovn_controller[62080]: 2025-10-09T10:02:43Z|00091|binding|INFO|Releasing lport e74168ad-5871-4088-b5cd-db351251a793 from this chassis (sb_readonly=0)
Oct  9 10:02:43 compute-1 nova_compute[162974]: 2025-10-09 10:02:43.166 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:02:43 compute-1 nova_compute[162974]: 2025-10-09 10:02:43.170 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:02:43 compute-1 nova_compute[162974]: 2025-10-09 10:02:43.315 2 DEBUG nova.compute.manager [req-a784a505-552d-48eb-b865-c9f9ce5036d6 req-0c711b1b-e47d-4e97-b284-5eadab1393ea b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: 21bbcca2-5cec-4324-9af4-6d2090b6b113] Received event network-changed-52ec2db5-2e22-45a7-92ee-f0e360776c10 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  9 10:02:43 compute-1 nova_compute[162974]: 2025-10-09 10:02:43.315 2 DEBUG nova.compute.manager [req-a784a505-552d-48eb-b865-c9f9ce5036d6 req-0c711b1b-e47d-4e97-b284-5eadab1393ea b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: 21bbcca2-5cec-4324-9af4-6d2090b6b113] Refreshing instance network info cache due to event network-changed-52ec2db5-2e22-45a7-92ee-f0e360776c10. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  9 10:02:43 compute-1 nova_compute[162974]: 2025-10-09 10:02:43.316 2 DEBUG oslo_concurrency.lockutils [req-a784a505-552d-48eb-b865-c9f9ce5036d6 req-0c711b1b-e47d-4e97-b284-5eadab1393ea b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] Acquiring lock "refresh_cache-21bbcca2-5cec-4324-9af4-6d2090b6b113" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  9 10:02:43 compute-1 nova_compute[162974]: 2025-10-09 10:02:43.316 2 DEBUG oslo_concurrency.lockutils [req-a784a505-552d-48eb-b865-c9f9ce5036d6 req-0c711b1b-e47d-4e97-b284-5eadab1393ea b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] Acquired lock "refresh_cache-21bbcca2-5cec-4324-9af4-6d2090b6b113" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  9 10:02:43 compute-1 nova_compute[162974]: 2025-10-09 10:02:43.316 2 DEBUG nova.network.neutron [req-a784a505-552d-48eb-b865-c9f9ce5036d6 req-0c711b1b-e47d-4e97-b284-5eadab1393ea b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: 21bbcca2-5cec-4324-9af4-6d2090b6b113] Refreshing network info cache for port 52ec2db5-2e22-45a7-92ee-f0e360776c10 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  9 10:02:44 compute-1 nova_compute[162974]: 2025-10-09 10:02:44.141 2 DEBUG nova.network.neutron [req-a784a505-552d-48eb-b865-c9f9ce5036d6 req-0c711b1b-e47d-4e97-b284-5eadab1393ea b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: 21bbcca2-5cec-4324-9af4-6d2090b6b113] Updated VIF entry in instance network info cache for port 52ec2db5-2e22-45a7-92ee-f0e360776c10. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  9 10:02:44 compute-1 nova_compute[162974]: 2025-10-09 10:02:44.141 2 DEBUG nova.network.neutron [req-a784a505-552d-48eb-b865-c9f9ce5036d6 req-0c711b1b-e47d-4e97-b284-5eadab1393ea b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: 21bbcca2-5cec-4324-9af4-6d2090b6b113] Updating instance_info_cache with network_info: [{"id": "52ec2db5-2e22-45a7-92ee-f0e360776c10", "address": "fa:16:3e:19:24:81", "network": {"id": "7e36da7d-913d-4101-a7c2-e1698abf35be", "bridge": "br-int", "label": "tempest-network-smoke--21347962", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.232", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c69d102fb5504f48809f5fc47f1cb831", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap52ec2db5-2e", "ovs_interfaceid": "52ec2db5-2e22-45a7-92ee-f0e360776c10", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  9 10:02:44 compute-1 nova_compute[162974]: 2025-10-09 10:02:44.151 2 DEBUG oslo_concurrency.lockutils [req-a784a505-552d-48eb-b865-c9f9ce5036d6 req-0c711b1b-e47d-4e97-b284-5eadab1393ea b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] Releasing lock "refresh_cache-21bbcca2-5cec-4324-9af4-6d2090b6b113" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  9 10:02:44 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:02:44 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:02:44 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:02:44.335 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:02:44 compute-1 nova_compute[162974]: 2025-10-09 10:02:44.995 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:02:45 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:02:45 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 10:02:45 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:02:45.159 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 10:02:45 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 10:02:46 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:02:46 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 10:02:46 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:02:46.337 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 10:02:46 compute-1 podman[171834]: 2025-10-09 10:02:46.548566766 +0000 UTC m=+0.059610189 container health_status 36bf385830b85e085dc3ff9729118ef141f0f1a0fdc2513f7947ab6ab795421e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Oct  9 10:02:47 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:02:47 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:02:47 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:02:47.161 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:02:47 compute-1 nova_compute[162974]: 2025-10-09 10:02:47.798 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:02:48 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:02:48 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:02:48 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:02:48.340 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:02:49 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:02:49 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:02:49 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:02:49.164 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:02:49 compute-1 nova_compute[162974]: 2025-10-09 10:02:49.997 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:02:50 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:02:50 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:02:50 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:02:50.342 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:02:50 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 10:02:51 compute-1 nova_compute[162974]: 2025-10-09 10:02:51.110 2 DEBUG oslo_service.periodic_task [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  9 10:02:51 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:02:51 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 10:02:51 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:02:51.165 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 10:02:52 compute-1 ovn_controller[62080]: 2025-10-09T10:02:52Z|00014|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:19:24:81 10.100.0.8
Oct  9 10:02:52 compute-1 ovn_controller[62080]: 2025-10-09T10:02:52Z|00015|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:19:24:81 10.100.0.8
Oct  9 10:02:52 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:02:52 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:02:52 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:02:52.344 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:02:52 compute-1 nova_compute[162974]: 2025-10-09 10:02:52.799 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:02:53 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:02:53 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:02:53 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:02:53.167 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:02:54 compute-1 nova_compute[162974]: 2025-10-09 10:02:54.123 2 DEBUG oslo_service.periodic_task [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  9 10:02:54 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:02:54 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:02:54 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:02:54.346 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:02:55 compute-1 nova_compute[162974]: 2025-10-09 10:02:55.000 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:02:55 compute-1 nova_compute[162974]: 2025-10-09 10:02:55.113 2 DEBUG oslo_service.periodic_task [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  9 10:02:55 compute-1 nova_compute[162974]: 2025-10-09 10:02:55.114 2 DEBUG nova.compute.manager [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  9 10:02:55 compute-1 nova_compute[162974]: 2025-10-09 10:02:55.114 2 DEBUG oslo_service.periodic_task [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  9 10:02:55 compute-1 nova_compute[162974]: 2025-10-09 10:02:55.136 2 DEBUG oslo_concurrency.lockutils [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  9 10:02:55 compute-1 nova_compute[162974]: 2025-10-09 10:02:55.136 2 DEBUG oslo_concurrency.lockutils [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  9 10:02:55 compute-1 nova_compute[162974]: 2025-10-09 10:02:55.137 2 DEBUG oslo_concurrency.lockutils [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  9 10:02:55 compute-1 nova_compute[162974]: 2025-10-09 10:02:55.137 2 DEBUG nova.compute.resource_tracker [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  9 10:02:55 compute-1 nova_compute[162974]: 2025-10-09 10:02:55.137 2 DEBUG oslo_concurrency.processutils [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  9 10:02:55 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:02:55 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 10:02:55 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:02:55.169 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 10:02:55 compute-1 ceph-mon[9795]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct  9 10:02:55 compute-1 ceph-mon[9795]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2548672515' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  9 10:02:55 compute-1 nova_compute[162974]: 2025-10-09 10:02:55.523 2 DEBUG oslo_concurrency.processutils [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.386s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  9 10:02:55 compute-1 nova_compute[162974]: 2025-10-09 10:02:55.573 2 DEBUG nova.virt.libvirt.driver [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] skipping disk for instance-0000000b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  9 10:02:55 compute-1 nova_compute[162974]: 2025-10-09 10:02:55.574 2 DEBUG nova.virt.libvirt.driver [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] skipping disk for instance-0000000b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  9 10:02:55 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 10:02:55 compute-1 nova_compute[162974]: 2025-10-09 10:02:55.803 2 WARNING nova.virt.libvirt.driver [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  9 10:02:55 compute-1 nova_compute[162974]: 2025-10-09 10:02:55.806 2 DEBUG nova.compute.resource_tracker [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4852MB free_disk=59.94662857055664GB free_vcpus=3 pci_devices=[{"dev_id": "pci_0000_00_03_3", "address": "0000:00:03.3", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_0", "address": "0000:00:1f.0", "product_id": "2918", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2918", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_4", "address": "0000:00:03.4", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_7", "address": "0000:00:03.7", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_3", "address": "0000:00:02.3", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_7", "address": "0000:00:02.7", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_05_00_0", "address": "0000:05:00.0", "product_id": "1045", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1045", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_2", "address": "0000:00:03.2", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_6", "address": "0000:00:02.6", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_3", "address": "0000:00:1f.3", "product_id": "2930", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2930", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_2", "address": "0000:00:1f.2", "product_id": "2922", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2922", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_03_00_0", "address": "0000:03:00.0", "product_id": "1041", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1041", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_2", "address": "0000:00:02.2", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_5", "address": "0000:00:02.5", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_04_00_0", "address": "0000:04:00.0", "product_id": "1042", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1042", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_5", "address": "0000:00:03.5", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_07_00_0", "address": "0000:07:00.0", "product_id": "1041", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1041", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_1", "address": "0000:00:03.1", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "29c0", "vendor_id": "8086", "numa_node": null, "label": "label_8086_29c0", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_06_00_0", "address": "0000:06:00.0", "product_id": "1044", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1044", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_02_01_0", "address": "0000:02:01.0", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_6", "address": "0000:00:03.6", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_1", "address": "0000:00:02.1", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_4", "address": "0000:00:02.4", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_01_00_0", "address": "0000:01:00.0", "product_id": "000e", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000e", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  9 10:02:55 compute-1 nova_compute[162974]: 2025-10-09 10:02:55.806 2 DEBUG oslo_concurrency.lockutils [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  9 10:02:55 compute-1 nova_compute[162974]: 2025-10-09 10:02:55.807 2 DEBUG oslo_concurrency.lockutils [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  9 10:02:55 compute-1 nova_compute[162974]: 2025-10-09 10:02:55.861 2 DEBUG nova.compute.resource_tracker [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Instance 21bbcca2-5cec-4324-9af4-6d2090b6b113 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct  9 10:02:55 compute-1 nova_compute[162974]: 2025-10-09 10:02:55.861 2 DEBUG nova.compute.resource_tracker [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Total usable vcpus: 4, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  9 10:02:55 compute-1 nova_compute[162974]: 2025-10-09 10:02:55.861 2 DEBUG nova.compute.resource_tracker [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7680MB used_ram=640MB phys_disk=59GB used_disk=1GB total_vcpus=4 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  9 10:02:55 compute-1 nova_compute[162974]: 2025-10-09 10:02:55.886 2 DEBUG oslo_concurrency.processutils [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  9 10:02:56 compute-1 ovn_metadata_agent[71054]: 2025-10-09 10:02:56.156 71059 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=10, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '86:53:6e', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '26:2f:47:35:f4:09'}, ipsec=False) old=SB_Global(nb_cfg=9) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  9 10:02:56 compute-1 ovn_metadata_agent[71054]: 2025-10-09 10:02:56.157 71059 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 5 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct  9 10:02:56 compute-1 nova_compute[162974]: 2025-10-09 10:02:56.159 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:02:56 compute-1 ceph-mon[9795]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct  9 10:02:56 compute-1 ceph-mon[9795]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/371860424' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  9 10:02:56 compute-1 nova_compute[162974]: 2025-10-09 10:02:56.247 2 DEBUG oslo_concurrency.processutils [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.361s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  9 10:02:56 compute-1 nova_compute[162974]: 2025-10-09 10:02:56.250 2 DEBUG nova.compute.provider_tree [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Inventory has not changed in ProviderTree for provider: 79aa81b0-5a5d-4643-a355-ec5461cb321a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  9 10:02:56 compute-1 nova_compute[162974]: 2025-10-09 10:02:56.265 2 DEBUG nova.scheduler.client.report [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Inventory has not changed for provider 79aa81b0-5a5d-4643-a355-ec5461cb321a based on inventory data: {'VCPU': {'total': 4, 'reserved': 0, 'min_unit': 1, 'max_unit': 4, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  9 10:02:56 compute-1 nova_compute[162974]: 2025-10-09 10:02:56.280 2 DEBUG nova.compute.resource_tracker [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  9 10:02:56 compute-1 nova_compute[162974]: 2025-10-09 10:02:56.280 2 DEBUG oslo_concurrency.lockutils [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.474s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  9 10:02:56 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:02:56 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:02:56 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:02:56.348 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:02:56 compute-1 podman[171932]: 2025-10-09 10:02:56.532401714 +0000 UTC m=+0.044257765 container health_status dd7a5da700b8ba72ceba21d69aecf00384a339e317089fce3f8212a1183599aa (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, container_name=iscsid, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  9 10:02:57 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:02:57 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:02:57 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:02:57.172 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:02:57 compute-1 nova_compute[162974]: 2025-10-09 10:02:57.282 2 DEBUG oslo_service.periodic_task [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  9 10:02:57 compute-1 nova_compute[162974]: 2025-10-09 10:02:57.282 2 DEBUG nova.compute.manager [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  9 10:02:57 compute-1 nova_compute[162974]: 2025-10-09 10:02:57.282 2 DEBUG nova.compute.manager [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct  9 10:02:57 compute-1 nova_compute[162974]: 2025-10-09 10:02:57.461 2 DEBUG oslo_concurrency.lockutils [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Acquiring lock "refresh_cache-21bbcca2-5cec-4324-9af4-6d2090b6b113" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  9 10:02:57 compute-1 nova_compute[162974]: 2025-10-09 10:02:57.462 2 DEBUG oslo_concurrency.lockutils [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Acquired lock "refresh_cache-21bbcca2-5cec-4324-9af4-6d2090b6b113" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  9 10:02:57 compute-1 nova_compute[162974]: 2025-10-09 10:02:57.462 2 DEBUG nova.network.neutron [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] [instance: 21bbcca2-5cec-4324-9af4-6d2090b6b113] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Oct  9 10:02:57 compute-1 nova_compute[162974]: 2025-10-09 10:02:57.462 2 DEBUG nova.objects.instance [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 21bbcca2-5cec-4324-9af4-6d2090b6b113 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  9 10:02:57 compute-1 nova_compute[162974]: 2025-10-09 10:02:57.802 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:02:58 compute-1 nova_compute[162974]: 2025-10-09 10:02:58.239 2 DEBUG nova.network.neutron [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] [instance: 21bbcca2-5cec-4324-9af4-6d2090b6b113] Updating instance_info_cache with network_info: [{"id": "52ec2db5-2e22-45a7-92ee-f0e360776c10", "address": "fa:16:3e:19:24:81", "network": {"id": "7e36da7d-913d-4101-a7c2-e1698abf35be", "bridge": "br-int", "label": "tempest-network-smoke--21347962", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.232", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c69d102fb5504f48809f5fc47f1cb831", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap52ec2db5-2e", "ovs_interfaceid": "52ec2db5-2e22-45a7-92ee-f0e360776c10", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  9 10:02:58 compute-1 nova_compute[162974]: 2025-10-09 10:02:58.253 2 DEBUG oslo_concurrency.lockutils [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Releasing lock "refresh_cache-21bbcca2-5cec-4324-9af4-6d2090b6b113" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  9 10:02:58 compute-1 nova_compute[162974]: 2025-10-09 10:02:58.254 2 DEBUG nova.compute.manager [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] [instance: 21bbcca2-5cec-4324-9af4-6d2090b6b113] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Oct  9 10:02:58 compute-1 nova_compute[162974]: 2025-10-09 10:02:58.254 2 DEBUG oslo_service.periodic_task [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  9 10:02:58 compute-1 nova_compute[162974]: 2025-10-09 10:02:58.254 2 DEBUG oslo_service.periodic_task [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  9 10:02:58 compute-1 nova_compute[162974]: 2025-10-09 10:02:58.254 2 DEBUG oslo_service.periodic_task [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  9 10:02:58 compute-1 nova_compute[162974]: 2025-10-09 10:02:58.254 2 DEBUG oslo_service.periodic_task [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  9 10:02:58 compute-1 nova_compute[162974]: 2025-10-09 10:02:58.255 2 DEBUG oslo_service.periodic_task [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  9 10:02:58 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:02:58 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:02:58 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:02:58.350 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:02:59 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:02:59 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:02:59 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:02:59.174 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:03:00 compute-1 nova_compute[162974]: 2025-10-09 10:03:00.002 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:03:00 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:03:00 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:03:00 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:03:00.352 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:03:00 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 10:03:01 compute-1 ovn_metadata_agent[71054]: 2025-10-09 10:03:01.159 71059 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=1479fb1d-afaa-427a-bdce-40294d3573d2, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '10'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  9 10:03:01 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:03:01 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:03:01 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:03:01.176 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:03:02 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:03:02 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:03:02 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:03:02.354 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:03:02 compute-1 nova_compute[162974]: 2025-10-09 10:03:02.803 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:03:03 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:03:03 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:03:03 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:03:03.179 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:03:04 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:03:04 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:03:04 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:03:04.357 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:03:05 compute-1 nova_compute[162974]: 2025-10-09 10:03:05.007 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:03:05 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:03:05 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 10:03:05 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:03:05.180 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 10:03:05 compute-1 podman[171954]: 2025-10-09 10:03:05.535440067 +0000 UTC m=+0.041238624 container health_status a9976f6a2312783f72012e1ec9b07f14297aa62297711f47c0d257996be3427a (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=multipathd, container_name=multipathd, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct  9 10:03:05 compute-1 podman[171953]: 2025-10-09 10:03:05.557267386 +0000 UTC m=+0.063502276 container health_status 5e1150c93314d5b38815fc8130e03b03cc91771cd442ce724d63cd33a7373f75 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct  9 10:03:05 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 10:03:06 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:03:06 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:03:06 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:03:06.359 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:03:07 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:03:07 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:03:07 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:03:07.182 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:03:07 compute-1 nova_compute[162974]: 2025-10-09 10:03:07.806 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:03:08 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:03:08 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 10:03:08 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:03:08.361 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 10:03:09 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:03:09 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:03:09 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:03:09.184 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:03:10 compute-1 nova_compute[162974]: 2025-10-09 10:03:10.012 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:03:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 10:03:10.042 71059 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  9 10:03:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 10:03:10.042 71059 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  9 10:03:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 10:03:10.043 71059 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  9 10:03:10 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:03:10 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:03:10 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:03:10.365 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:03:10 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 10:03:11 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:03:11 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:03:11 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:03:11.186 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:03:12 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:03:12 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:03:12 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:03:12.366 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:03:12 compute-1 nova_compute[162974]: 2025-10-09 10:03:12.807 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:03:13 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:03:13 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:03:13 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:03:13.187 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:03:14 compute-1 nova_compute[162974]: 2025-10-09 10:03:14.307 2 INFO nova.compute.manager [None req-51fa9074-1180-4247-b844-3f0c12fcbe0e 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: 21bbcca2-5cec-4324-9af4-6d2090b6b113] Get console output#033[00m
Oct  9 10:03:14 compute-1 nova_compute[162974]: 2025-10-09 10:03:14.318 1023 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes#033[00m
Oct  9 10:03:14 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:03:14 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:03:14 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:03:14.369 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:03:15 compute-1 nova_compute[162974]: 2025-10-09 10:03:15.016 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:03:15 compute-1 nova_compute[162974]: 2025-10-09 10:03:15.117 2 DEBUG nova.compute.manager [req-b113bdb1-3bce-48c0-8fe2-3658a9636e29 req-d83f081b-3010-426e-9669-6760a33ab494 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: 21bbcca2-5cec-4324-9af4-6d2090b6b113] Received event network-changed-52ec2db5-2e22-45a7-92ee-f0e360776c10 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  9 10:03:15 compute-1 nova_compute[162974]: 2025-10-09 10:03:15.118 2 DEBUG nova.compute.manager [req-b113bdb1-3bce-48c0-8fe2-3658a9636e29 req-d83f081b-3010-426e-9669-6760a33ab494 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: 21bbcca2-5cec-4324-9af4-6d2090b6b113] Refreshing instance network info cache due to event network-changed-52ec2db5-2e22-45a7-92ee-f0e360776c10. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  9 10:03:15 compute-1 nova_compute[162974]: 2025-10-09 10:03:15.118 2 DEBUG oslo_concurrency.lockutils [req-b113bdb1-3bce-48c0-8fe2-3658a9636e29 req-d83f081b-3010-426e-9669-6760a33ab494 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] Acquiring lock "refresh_cache-21bbcca2-5cec-4324-9af4-6d2090b6b113" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  9 10:03:15 compute-1 nova_compute[162974]: 2025-10-09 10:03:15.118 2 DEBUG oslo_concurrency.lockutils [req-b113bdb1-3bce-48c0-8fe2-3658a9636e29 req-d83f081b-3010-426e-9669-6760a33ab494 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] Acquired lock "refresh_cache-21bbcca2-5cec-4324-9af4-6d2090b6b113" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  9 10:03:15 compute-1 nova_compute[162974]: 2025-10-09 10:03:15.118 2 DEBUG nova.network.neutron [req-b113bdb1-3bce-48c0-8fe2-3658a9636e29 req-d83f081b-3010-426e-9669-6760a33ab494 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: 21bbcca2-5cec-4324-9af4-6d2090b6b113] Refreshing network info cache for port 52ec2db5-2e22-45a7-92ee-f0e360776c10 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  9 10:03:15 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:03:15 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:03:15 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:03:15.190 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:03:15 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 10:03:16 compute-1 nova_compute[162974]: 2025-10-09 10:03:16.120 2 INFO nova.compute.manager [None req-383ab36a-f7e6-4d44-9386-b58203a8332c 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: 21bbcca2-5cec-4324-9af4-6d2090b6b113] Get console output#033[00m
Oct  9 10:03:16 compute-1 nova_compute[162974]: 2025-10-09 10:03:16.125 1023 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes#033[00m
Oct  9 10:03:16 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:03:16 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:03:16 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:03:16.373 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:03:16 compute-1 nova_compute[162974]: 2025-10-09 10:03:16.440 2 DEBUG nova.network.neutron [req-b113bdb1-3bce-48c0-8fe2-3658a9636e29 req-d83f081b-3010-426e-9669-6760a33ab494 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: 21bbcca2-5cec-4324-9af4-6d2090b6b113] Updated VIF entry in instance network info cache for port 52ec2db5-2e22-45a7-92ee-f0e360776c10. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  9 10:03:16 compute-1 nova_compute[162974]: 2025-10-09 10:03:16.441 2 DEBUG nova.network.neutron [req-b113bdb1-3bce-48c0-8fe2-3658a9636e29 req-d83f081b-3010-426e-9669-6760a33ab494 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: 21bbcca2-5cec-4324-9af4-6d2090b6b113] Updating instance_info_cache with network_info: [{"id": "52ec2db5-2e22-45a7-92ee-f0e360776c10", "address": "fa:16:3e:19:24:81", "network": {"id": "7e36da7d-913d-4101-a7c2-e1698abf35be", "bridge": "br-int", "label": "tempest-network-smoke--21347962", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.232", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c69d102fb5504f48809f5fc47f1cb831", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap52ec2db5-2e", "ovs_interfaceid": "52ec2db5-2e22-45a7-92ee-f0e360776c10", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  9 10:03:16 compute-1 nova_compute[162974]: 2025-10-09 10:03:16.453 2 DEBUG oslo_concurrency.lockutils [req-b113bdb1-3bce-48c0-8fe2-3658a9636e29 req-d83f081b-3010-426e-9669-6760a33ab494 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] Releasing lock "refresh_cache-21bbcca2-5cec-4324-9af4-6d2090b6b113" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  9 10:03:17 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:03:17 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:03:17 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:03:17.192 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:03:17 compute-1 nova_compute[162974]: 2025-10-09 10:03:17.211 2 DEBUG nova.compute.manager [req-e7f8727c-8600-4684-9fe8-95e774cf8478 req-c80840a3-ea6e-4a95-96a8-716d19c60696 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: 21bbcca2-5cec-4324-9af4-6d2090b6b113] Received event network-vif-unplugged-52ec2db5-2e22-45a7-92ee-f0e360776c10 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  9 10:03:17 compute-1 nova_compute[162974]: 2025-10-09 10:03:17.212 2 DEBUG oslo_concurrency.lockutils [req-e7f8727c-8600-4684-9fe8-95e774cf8478 req-c80840a3-ea6e-4a95-96a8-716d19c60696 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] Acquiring lock "21bbcca2-5cec-4324-9af4-6d2090b6b113-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  9 10:03:17 compute-1 nova_compute[162974]: 2025-10-09 10:03:17.212 2 DEBUG oslo_concurrency.lockutils [req-e7f8727c-8600-4684-9fe8-95e774cf8478 req-c80840a3-ea6e-4a95-96a8-716d19c60696 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] Lock "21bbcca2-5cec-4324-9af4-6d2090b6b113-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  9 10:03:17 compute-1 nova_compute[162974]: 2025-10-09 10:03:17.212 2 DEBUG oslo_concurrency.lockutils [req-e7f8727c-8600-4684-9fe8-95e774cf8478 req-c80840a3-ea6e-4a95-96a8-716d19c60696 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] Lock "21bbcca2-5cec-4324-9af4-6d2090b6b113-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  9 10:03:17 compute-1 nova_compute[162974]: 2025-10-09 10:03:17.212 2 DEBUG nova.compute.manager [req-e7f8727c-8600-4684-9fe8-95e774cf8478 req-c80840a3-ea6e-4a95-96a8-716d19c60696 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: 21bbcca2-5cec-4324-9af4-6d2090b6b113] No waiting events found dispatching network-vif-unplugged-52ec2db5-2e22-45a7-92ee-f0e360776c10 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  9 10:03:17 compute-1 nova_compute[162974]: 2025-10-09 10:03:17.213 2 WARNING nova.compute.manager [req-e7f8727c-8600-4684-9fe8-95e774cf8478 req-c80840a3-ea6e-4a95-96a8-716d19c60696 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: 21bbcca2-5cec-4324-9af4-6d2090b6b113] Received unexpected event network-vif-unplugged-52ec2db5-2e22-45a7-92ee-f0e360776c10 for instance with vm_state active and task_state None.#033[00m
Oct  9 10:03:17 compute-1 nova_compute[162974]: 2025-10-09 10:03:17.213 2 DEBUG nova.compute.manager [req-e7f8727c-8600-4684-9fe8-95e774cf8478 req-c80840a3-ea6e-4a95-96a8-716d19c60696 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: 21bbcca2-5cec-4324-9af4-6d2090b6b113] Received event network-vif-plugged-52ec2db5-2e22-45a7-92ee-f0e360776c10 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  9 10:03:17 compute-1 nova_compute[162974]: 2025-10-09 10:03:17.213 2 DEBUG oslo_concurrency.lockutils [req-e7f8727c-8600-4684-9fe8-95e774cf8478 req-c80840a3-ea6e-4a95-96a8-716d19c60696 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] Acquiring lock "21bbcca2-5cec-4324-9af4-6d2090b6b113-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  9 10:03:17 compute-1 nova_compute[162974]: 2025-10-09 10:03:17.214 2 DEBUG oslo_concurrency.lockutils [req-e7f8727c-8600-4684-9fe8-95e774cf8478 req-c80840a3-ea6e-4a95-96a8-716d19c60696 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] Lock "21bbcca2-5cec-4324-9af4-6d2090b6b113-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  9 10:03:17 compute-1 nova_compute[162974]: 2025-10-09 10:03:17.214 2 DEBUG oslo_concurrency.lockutils [req-e7f8727c-8600-4684-9fe8-95e774cf8478 req-c80840a3-ea6e-4a95-96a8-716d19c60696 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] Lock "21bbcca2-5cec-4324-9af4-6d2090b6b113-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  9 10:03:17 compute-1 nova_compute[162974]: 2025-10-09 10:03:17.214 2 DEBUG nova.compute.manager [req-e7f8727c-8600-4684-9fe8-95e774cf8478 req-c80840a3-ea6e-4a95-96a8-716d19c60696 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: 21bbcca2-5cec-4324-9af4-6d2090b6b113] No waiting events found dispatching network-vif-plugged-52ec2db5-2e22-45a7-92ee-f0e360776c10 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  9 10:03:17 compute-1 nova_compute[162974]: 2025-10-09 10:03:17.214 2 WARNING nova.compute.manager [req-e7f8727c-8600-4684-9fe8-95e774cf8478 req-c80840a3-ea6e-4a95-96a8-716d19c60696 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: 21bbcca2-5cec-4324-9af4-6d2090b6b113] Received unexpected event network-vif-plugged-52ec2db5-2e22-45a7-92ee-f0e360776c10 for instance with vm_state active and task_state None.#033[00m
Oct  9 10:03:17 compute-1 podman[172020]: 2025-10-09 10:03:17.554962845 +0000 UTC m=+0.059464195 container health_status 36bf385830b85e085dc3ff9729118ef141f0f1a0fdc2513f7947ab6ab795421e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.vendor=CentOS, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  9 10:03:17 compute-1 nova_compute[162974]: 2025-10-09 10:03:17.809 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:03:17 compute-1 nova_compute[162974]: 2025-10-09 10:03:17.858 2 DEBUG nova.compute.manager [req-1c708b61-20fb-4b22-be51-36a26370fdfa req-05e0e7c5-8ba3-4fc3-aa64-e42fa7b50825 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: 21bbcca2-5cec-4324-9af4-6d2090b6b113] Received event network-changed-52ec2db5-2e22-45a7-92ee-f0e360776c10 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  9 10:03:17 compute-1 nova_compute[162974]: 2025-10-09 10:03:17.858 2 DEBUG nova.compute.manager [req-1c708b61-20fb-4b22-be51-36a26370fdfa req-05e0e7c5-8ba3-4fc3-aa64-e42fa7b50825 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: 21bbcca2-5cec-4324-9af4-6d2090b6b113] Refreshing instance network info cache due to event network-changed-52ec2db5-2e22-45a7-92ee-f0e360776c10. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  9 10:03:17 compute-1 nova_compute[162974]: 2025-10-09 10:03:17.858 2 DEBUG oslo_concurrency.lockutils [req-1c708b61-20fb-4b22-be51-36a26370fdfa req-05e0e7c5-8ba3-4fc3-aa64-e42fa7b50825 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] Acquiring lock "refresh_cache-21bbcca2-5cec-4324-9af4-6d2090b6b113" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  9 10:03:17 compute-1 nova_compute[162974]: 2025-10-09 10:03:17.858 2 DEBUG oslo_concurrency.lockutils [req-1c708b61-20fb-4b22-be51-36a26370fdfa req-05e0e7c5-8ba3-4fc3-aa64-e42fa7b50825 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] Acquired lock "refresh_cache-21bbcca2-5cec-4324-9af4-6d2090b6b113" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  9 10:03:17 compute-1 nova_compute[162974]: 2025-10-09 10:03:17.859 2 DEBUG nova.network.neutron [req-1c708b61-20fb-4b22-be51-36a26370fdfa req-05e0e7c5-8ba3-4fc3-aa64-e42fa7b50825 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: 21bbcca2-5cec-4324-9af4-6d2090b6b113] Refreshing network info cache for port 52ec2db5-2e22-45a7-92ee-f0e360776c10 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  9 10:03:18 compute-1 nova_compute[162974]: 2025-10-09 10:03:18.001 2 INFO nova.compute.manager [None req-b145a251-9669-4ba9-beb2-44183637d49e 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: 21bbcca2-5cec-4324-9af4-6d2090b6b113] Get console output#033[00m
Oct  9 10:03:18 compute-1 nova_compute[162974]: 2025-10-09 10:03:18.006 1023 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes#033[00m
Oct  9 10:03:18 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:03:18 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:03:18 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:03:18.376 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:03:19 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:03:19 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:03:19 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:03:19.193 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:03:19 compute-1 nova_compute[162974]: 2025-10-09 10:03:19.294 2 DEBUG nova.compute.manager [req-57d3fc54-b9d4-4ed8-a243-2a657849dc40 req-ddb4c56f-52e6-45d7-9f49-d9a02022e07e b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: 21bbcca2-5cec-4324-9af4-6d2090b6b113] Received event network-vif-plugged-52ec2db5-2e22-45a7-92ee-f0e360776c10 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  9 10:03:19 compute-1 nova_compute[162974]: 2025-10-09 10:03:19.294 2 DEBUG oslo_concurrency.lockutils [req-57d3fc54-b9d4-4ed8-a243-2a657849dc40 req-ddb4c56f-52e6-45d7-9f49-d9a02022e07e b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] Acquiring lock "21bbcca2-5cec-4324-9af4-6d2090b6b113-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  9 10:03:19 compute-1 nova_compute[162974]: 2025-10-09 10:03:19.294 2 DEBUG oslo_concurrency.lockutils [req-57d3fc54-b9d4-4ed8-a243-2a657849dc40 req-ddb4c56f-52e6-45d7-9f49-d9a02022e07e b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] Lock "21bbcca2-5cec-4324-9af4-6d2090b6b113-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  9 10:03:19 compute-1 nova_compute[162974]: 2025-10-09 10:03:19.295 2 DEBUG oslo_concurrency.lockutils [req-57d3fc54-b9d4-4ed8-a243-2a657849dc40 req-ddb4c56f-52e6-45d7-9f49-d9a02022e07e b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] Lock "21bbcca2-5cec-4324-9af4-6d2090b6b113-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  9 10:03:19 compute-1 nova_compute[162974]: 2025-10-09 10:03:19.295 2 DEBUG nova.compute.manager [req-57d3fc54-b9d4-4ed8-a243-2a657849dc40 req-ddb4c56f-52e6-45d7-9f49-d9a02022e07e b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: 21bbcca2-5cec-4324-9af4-6d2090b6b113] No waiting events found dispatching network-vif-plugged-52ec2db5-2e22-45a7-92ee-f0e360776c10 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  9 10:03:19 compute-1 nova_compute[162974]: 2025-10-09 10:03:19.295 2 WARNING nova.compute.manager [req-57d3fc54-b9d4-4ed8-a243-2a657849dc40 req-ddb4c56f-52e6-45d7-9f49-d9a02022e07e b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: 21bbcca2-5cec-4324-9af4-6d2090b6b113] Received unexpected event network-vif-plugged-52ec2db5-2e22-45a7-92ee-f0e360776c10 for instance with vm_state active and task_state None.#033[00m
Oct  9 10:03:19 compute-1 nova_compute[162974]: 2025-10-09 10:03:19.295 2 DEBUG nova.compute.manager [req-57d3fc54-b9d4-4ed8-a243-2a657849dc40 req-ddb4c56f-52e6-45d7-9f49-d9a02022e07e b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: 21bbcca2-5cec-4324-9af4-6d2090b6b113] Received event network-vif-plugged-52ec2db5-2e22-45a7-92ee-f0e360776c10 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  9 10:03:19 compute-1 nova_compute[162974]: 2025-10-09 10:03:19.295 2 DEBUG oslo_concurrency.lockutils [req-57d3fc54-b9d4-4ed8-a243-2a657849dc40 req-ddb4c56f-52e6-45d7-9f49-d9a02022e07e b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] Acquiring lock "21bbcca2-5cec-4324-9af4-6d2090b6b113-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  9 10:03:19 compute-1 nova_compute[162974]: 2025-10-09 10:03:19.295 2 DEBUG oslo_concurrency.lockutils [req-57d3fc54-b9d4-4ed8-a243-2a657849dc40 req-ddb4c56f-52e6-45d7-9f49-d9a02022e07e b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] Lock "21bbcca2-5cec-4324-9af4-6d2090b6b113-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  9 10:03:19 compute-1 nova_compute[162974]: 2025-10-09 10:03:19.295 2 DEBUG oslo_concurrency.lockutils [req-57d3fc54-b9d4-4ed8-a243-2a657849dc40 req-ddb4c56f-52e6-45d7-9f49-d9a02022e07e b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] Lock "21bbcca2-5cec-4324-9af4-6d2090b6b113-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  9 10:03:19 compute-1 nova_compute[162974]: 2025-10-09 10:03:19.296 2 DEBUG nova.compute.manager [req-57d3fc54-b9d4-4ed8-a243-2a657849dc40 req-ddb4c56f-52e6-45d7-9f49-d9a02022e07e b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: 21bbcca2-5cec-4324-9af4-6d2090b6b113] No waiting events found dispatching network-vif-plugged-52ec2db5-2e22-45a7-92ee-f0e360776c10 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  9 10:03:19 compute-1 nova_compute[162974]: 2025-10-09 10:03:19.296 2 WARNING nova.compute.manager [req-57d3fc54-b9d4-4ed8-a243-2a657849dc40 req-ddb4c56f-52e6-45d7-9f49-d9a02022e07e b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: 21bbcca2-5cec-4324-9af4-6d2090b6b113] Received unexpected event network-vif-plugged-52ec2db5-2e22-45a7-92ee-f0e360776c10 for instance with vm_state active and task_state None.#033[00m
Oct  9 10:03:19 compute-1 nova_compute[162974]: 2025-10-09 10:03:19.405 2 DEBUG nova.network.neutron [req-1c708b61-20fb-4b22-be51-36a26370fdfa req-05e0e7c5-8ba3-4fc3-aa64-e42fa7b50825 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: 21bbcca2-5cec-4324-9af4-6d2090b6b113] Updated VIF entry in instance network info cache for port 52ec2db5-2e22-45a7-92ee-f0e360776c10. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  9 10:03:19 compute-1 nova_compute[162974]: 2025-10-09 10:03:19.406 2 DEBUG nova.network.neutron [req-1c708b61-20fb-4b22-be51-36a26370fdfa req-05e0e7c5-8ba3-4fc3-aa64-e42fa7b50825 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: 21bbcca2-5cec-4324-9af4-6d2090b6b113] Updating instance_info_cache with network_info: [{"id": "52ec2db5-2e22-45a7-92ee-f0e360776c10", "address": "fa:16:3e:19:24:81", "network": {"id": "7e36da7d-913d-4101-a7c2-e1698abf35be", "bridge": "br-int", "label": "tempest-network-smoke--21347962", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.232", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c69d102fb5504f48809f5fc47f1cb831", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap52ec2db5-2e", "ovs_interfaceid": "52ec2db5-2e22-45a7-92ee-f0e360776c10", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  9 10:03:19 compute-1 nova_compute[162974]: 2025-10-09 10:03:19.422 2 DEBUG oslo_concurrency.lockutils [req-1c708b61-20fb-4b22-be51-36a26370fdfa req-05e0e7c5-8ba3-4fc3-aa64-e42fa7b50825 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] Releasing lock "refresh_cache-21bbcca2-5cec-4324-9af4-6d2090b6b113" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  9 10:03:20 compute-1 nova_compute[162974]: 2025-10-09 10:03:20.019 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:03:20 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:03:20 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:03:20 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:03:20.377 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:03:20 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 10:03:21 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:03:21 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 10:03:21 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:03:21.195 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 10:03:22 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:03:22 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:03:22 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:03:22.380 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:03:22 compute-1 nova_compute[162974]: 2025-10-09 10:03:22.811 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:03:23 compute-1 nova_compute[162974]: 2025-10-09 10:03:23.042 2 DEBUG oslo_concurrency.lockutils [None req-696c763a-67b4-4841-9bd5-2a4d47f44092 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Acquiring lock "21bbcca2-5cec-4324-9af4-6d2090b6b113" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  9 10:03:23 compute-1 nova_compute[162974]: 2025-10-09 10:03:23.042 2 DEBUG oslo_concurrency.lockutils [None req-696c763a-67b4-4841-9bd5-2a4d47f44092 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Lock "21bbcca2-5cec-4324-9af4-6d2090b6b113" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  9 10:03:23 compute-1 nova_compute[162974]: 2025-10-09 10:03:23.042 2 DEBUG oslo_concurrency.lockutils [None req-696c763a-67b4-4841-9bd5-2a4d47f44092 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Acquiring lock "21bbcca2-5cec-4324-9af4-6d2090b6b113-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  9 10:03:23 compute-1 nova_compute[162974]: 2025-10-09 10:03:23.042 2 DEBUG oslo_concurrency.lockutils [None req-696c763a-67b4-4841-9bd5-2a4d47f44092 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Lock "21bbcca2-5cec-4324-9af4-6d2090b6b113-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  9 10:03:23 compute-1 nova_compute[162974]: 2025-10-09 10:03:23.043 2 DEBUG oslo_concurrency.lockutils [None req-696c763a-67b4-4841-9bd5-2a4d47f44092 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Lock "21bbcca2-5cec-4324-9af4-6d2090b6b113-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  9 10:03:23 compute-1 nova_compute[162974]: 2025-10-09 10:03:23.044 2 INFO nova.compute.manager [None req-696c763a-67b4-4841-9bd5-2a4d47f44092 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: 21bbcca2-5cec-4324-9af4-6d2090b6b113] Terminating instance#033[00m
Oct  9 10:03:23 compute-1 nova_compute[162974]: 2025-10-09 10:03:23.045 2 DEBUG nova.compute.manager [None req-696c763a-67b4-4841-9bd5-2a4d47f44092 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: 21bbcca2-5cec-4324-9af4-6d2090b6b113] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct  9 10:03:23 compute-1 kernel: tap52ec2db5-2e (unregistering): left promiscuous mode
Oct  9 10:03:23 compute-1 NetworkManager[982]: <info>  [1760004203.0847] device (tap52ec2db5-2e): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  9 10:03:23 compute-1 ovn_controller[62080]: 2025-10-09T10:03:23Z|00092|binding|INFO|Releasing lport 52ec2db5-2e22-45a7-92ee-f0e360776c10 from this chassis (sb_readonly=0)
Oct  9 10:03:23 compute-1 ovn_controller[62080]: 2025-10-09T10:03:23Z|00093|binding|INFO|Setting lport 52ec2db5-2e22-45a7-92ee-f0e360776c10 down in Southbound
Oct  9 10:03:23 compute-1 ovn_controller[62080]: 2025-10-09T10:03:23Z|00094|binding|INFO|Removing iface tap52ec2db5-2e ovn-installed in OVS
Oct  9 10:03:23 compute-1 nova_compute[162974]: 2025-10-09 10:03:23.094 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:03:23 compute-1 nova_compute[162974]: 2025-10-09 10:03:23.096 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:03:23 compute-1 ovn_metadata_agent[71054]: 2025-10-09 10:03:23.101 71059 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:19:24:81 10.100.0.8'], port_security=['fa:16:3e:19:24:81 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '21bbcca2-5cec-4324-9af4-6d2090b6b113', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7e36da7d-913d-4101-a7c2-e1698abf35be', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c69d102fb5504f48809f5fc47f1cb831', 'neutron:revision_number': '8', 'neutron:security_group_ids': '9cb722ba-1853-4a45-bd00-f5690460099e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e49a2e1f-bde0-4698-a31c-366cd4b00fe5, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcc797b4850>], logical_port=52ec2db5-2e22-45a7-92ee-f0e360776c10) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fcc797b4850>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  9 10:03:23 compute-1 ovn_metadata_agent[71054]: 2025-10-09 10:03:23.102 71059 INFO neutron.agent.ovn.metadata.agent [-] Port 52ec2db5-2e22-45a7-92ee-f0e360776c10 in datapath 7e36da7d-913d-4101-a7c2-e1698abf35be unbound from our chassis#033[00m
Oct  9 10:03:23 compute-1 ovn_metadata_agent[71054]: 2025-10-09 10:03:23.103 71059 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 7e36da7d-913d-4101-a7c2-e1698abf35be, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct  9 10:03:23 compute-1 ovn_metadata_agent[71054]: 2025-10-09 10:03:23.104 165637 DEBUG oslo.privsep.daemon [-] privsep: reply[7a24e2c8-1b32-4a01-8a1d-0069146b07c5]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  9 10:03:23 compute-1 ovn_metadata_agent[71054]: 2025-10-09 10:03:23.106 71059 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-7e36da7d-913d-4101-a7c2-e1698abf35be namespace which is not needed anymore#033[00m
Oct  9 10:03:23 compute-1 nova_compute[162974]: 2025-10-09 10:03:23.125 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:03:23 compute-1 systemd[1]: machine-qemu\x2d6\x2dinstance\x2d0000000b.scope: Deactivated successfully.
Oct  9 10:03:23 compute-1 systemd[1]: machine-qemu\x2d6\x2dinstance\x2d0000000b.scope: Consumed 11.634s CPU time.
Oct  9 10:03:23 compute-1 systemd-machined[120683]: Machine qemu-6-instance-0000000b terminated.
Oct  9 10:03:23 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:03:23 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 10:03:23 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:03:23.197 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 10:03:23 compute-1 neutron-haproxy-ovnmeta-7e36da7d-913d-4101-a7c2-e1698abf35be[171819]: [NOTICE]   (171823) : haproxy version is 2.8.14-c23fe91
Oct  9 10:03:23 compute-1 neutron-haproxy-ovnmeta-7e36da7d-913d-4101-a7c2-e1698abf35be[171819]: [NOTICE]   (171823) : path to executable is /usr/sbin/haproxy
Oct  9 10:03:23 compute-1 neutron-haproxy-ovnmeta-7e36da7d-913d-4101-a7c2-e1698abf35be[171819]: [ALERT]    (171823) : Current worker (171825) exited with code 143 (Terminated)
Oct  9 10:03:23 compute-1 neutron-haproxy-ovnmeta-7e36da7d-913d-4101-a7c2-e1698abf35be[171819]: [WARNING]  (171823) : All workers exited. Exiting... (0)
Oct  9 10:03:23 compute-1 systemd[1]: libpod-1d8c119dbd0dc9dd6bf423c0fb8248e2c3d999a3cf60f90d9a00ca713357add2.scope: Deactivated successfully.
Oct  9 10:03:23 compute-1 podman[172066]: 2025-10-09 10:03:23.228075841 +0000 UTC m=+0.042111401 container died 1d8c119dbd0dc9dd6bf423c0fb8248e2c3d999a3cf60f90d9a00ca713357add2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-7e36da7d-913d-4101-a7c2-e1698abf35be, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct  9 10:03:23 compute-1 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-1d8c119dbd0dc9dd6bf423c0fb8248e2c3d999a3cf60f90d9a00ca713357add2-userdata-shm.mount: Deactivated successfully.
Oct  9 10:03:23 compute-1 systemd[1]: var-lib-containers-storage-overlay-c62e9def9ad6686c7da16f6d7f5c0040e366e3403846fd843dae5358e993db42-merged.mount: Deactivated successfully.
Oct  9 10:03:23 compute-1 podman[172066]: 2025-10-09 10:03:23.2550432 +0000 UTC m=+0.069078760 container cleanup 1d8c119dbd0dc9dd6bf423c0fb8248e2c3d999a3cf60f90d9a00ca713357add2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-7e36da7d-913d-4101-a7c2-e1698abf35be, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  9 10:03:23 compute-1 kernel: tap52ec2db5-2e: entered promiscuous mode
Oct  9 10:03:23 compute-1 NetworkManager[982]: <info>  [1760004203.2577] manager: (tap52ec2db5-2e): new Tun device (/org/freedesktop/NetworkManager/Devices/67)
Oct  9 10:03:23 compute-1 nova_compute[162974]: 2025-10-09 10:03:23.258 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:03:23 compute-1 kernel: tap52ec2db5-2e (unregistering): left promiscuous mode
Oct  9 10:03:23 compute-1 ovn_controller[62080]: 2025-10-09T10:03:23Z|00095|binding|INFO|Claiming lport 52ec2db5-2e22-45a7-92ee-f0e360776c10 for this chassis.
Oct  9 10:03:23 compute-1 ovn_controller[62080]: 2025-10-09T10:03:23Z|00096|binding|INFO|52ec2db5-2e22-45a7-92ee-f0e360776c10: Claiming fa:16:3e:19:24:81 10.100.0.8
Oct  9 10:03:23 compute-1 ovn_metadata_agent[71054]: 2025-10-09 10:03:23.275 71059 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:19:24:81 10.100.0.8'], port_security=['fa:16:3e:19:24:81 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '21bbcca2-5cec-4324-9af4-6d2090b6b113', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7e36da7d-913d-4101-a7c2-e1698abf35be', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c69d102fb5504f48809f5fc47f1cb831', 'neutron:revision_number': '8', 'neutron:security_group_ids': '9cb722ba-1853-4a45-bd00-f5690460099e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e49a2e1f-bde0-4698-a31c-366cd4b00fe5, chassis=[<ovs.db.idl.Row object at 0x7fcc797b4850>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcc797b4850>], logical_port=52ec2db5-2e22-45a7-92ee-f0e360776c10) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  9 10:03:23 compute-1 ovn_controller[62080]: 2025-10-09T10:03:23Z|00097|binding|INFO|Releasing lport 52ec2db5-2e22-45a7-92ee-f0e360776c10 from this chassis (sb_readonly=0)
Oct  9 10:03:23 compute-1 nova_compute[162974]: 2025-10-09 10:03:23.286 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:03:23 compute-1 nova_compute[162974]: 2025-10-09 10:03:23.289 2 INFO nova.virt.libvirt.driver [-] [instance: 21bbcca2-5cec-4324-9af4-6d2090b6b113] Instance destroyed successfully.#033[00m
Oct  9 10:03:23 compute-1 nova_compute[162974]: 2025-10-09 10:03:23.289 2 DEBUG nova.objects.instance [None req-696c763a-67b4-4841-9bd5-2a4d47f44092 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Lazy-loading 'resources' on Instance uuid 21bbcca2-5cec-4324-9af4-6d2090b6b113 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  9 10:03:23 compute-1 ovn_metadata_agent[71054]: 2025-10-09 10:03:23.293 71059 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:19:24:81 10.100.0.8'], port_security=['fa:16:3e:19:24:81 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '21bbcca2-5cec-4324-9af4-6d2090b6b113', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7e36da7d-913d-4101-a7c2-e1698abf35be', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c69d102fb5504f48809f5fc47f1cb831', 'neutron:revision_number': '8', 'neutron:security_group_ids': '9cb722ba-1853-4a45-bd00-f5690460099e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e49a2e1f-bde0-4698-a31c-366cd4b00fe5, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcc797b4850>], logical_port=52ec2db5-2e22-45a7-92ee-f0e360776c10) old=Port_Binding(chassis=[<ovs.db.idl.Row object at 0x7fcc797b4850>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  9 10:03:23 compute-1 systemd[1]: libpod-conmon-1d8c119dbd0dc9dd6bf423c0fb8248e2c3d999a3cf60f90d9a00ca713357add2.scope: Deactivated successfully.
Oct  9 10:03:23 compute-1 nova_compute[162974]: 2025-10-09 10:03:23.298 2 DEBUG nova.virt.libvirt.vif [None req-696c763a-67b4-4841-9bd5-2a4d47f44092 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-09T10:02:34Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-670315443',display_name='tempest-TestNetworkBasicOps-server-670315443',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-670315443',id=11,image_ref='9546778e-959c-466e-9bef-81ace5bd1cc5',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBMC7xAI/YYK+cbn0PHRoxiiahdIQdKccwfERXZSRnLEKnS9i37SYurywRQCZNQPHgGjlY2G9Hgc0qmCz+iCo4fLyxnirlBRGL3WmP1CDMLNiBavqZTIOedAyGcrchrWbVA==',key_name='tempest-TestNetworkBasicOps-462284814',keypairs=<?>,launch_index=0,launched_at=2025-10-09T10:02:41Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='c69d102fb5504f48809f5fc47f1cb831',ramdisk_id='',reservation_id='r-wq2l0ql1',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='9546778e-959c-466e-9bef-81ace5bd1cc5',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-74406332',owner_user_name='tempest-TestNetworkBasicOps-74406332-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-09T10:02:41Z,user_data=None,user_id='2351e05157514d1995a1ea4151d12fee',uuid=21bbcca2-5cec-4324-9af4-6d2090b6b113,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "52ec2db5-2e22-45a7-92ee-f0e360776c10", "address": "fa:16:3e:19:24:81", "network": {"id": "7e36da7d-913d-4101-a7c2-e1698abf35be", "bridge": "br-int", "label": "tempest-network-smoke--21347962", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.232", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c69d102fb5504f48809f5fc47f1cb831", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap52ec2db5-2e", "ovs_interfaceid": "52ec2db5-2e22-45a7-92ee-f0e360776c10", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  9 10:03:23 compute-1 nova_compute[162974]: 2025-10-09 10:03:23.298 2 DEBUG nova.network.os_vif_util [None req-696c763a-67b4-4841-9bd5-2a4d47f44092 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Converting VIF {"id": "52ec2db5-2e22-45a7-92ee-f0e360776c10", "address": "fa:16:3e:19:24:81", "network": {"id": "7e36da7d-913d-4101-a7c2-e1698abf35be", "bridge": "br-int", "label": "tempest-network-smoke--21347962", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.232", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c69d102fb5504f48809f5fc47f1cb831", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap52ec2db5-2e", "ovs_interfaceid": "52ec2db5-2e22-45a7-92ee-f0e360776c10", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  9 10:03:23 compute-1 nova_compute[162974]: 2025-10-09 10:03:23.299 2 DEBUG nova.network.os_vif_util [None req-696c763a-67b4-4841-9bd5-2a4d47f44092 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:19:24:81,bridge_name='br-int',has_traffic_filtering=True,id=52ec2db5-2e22-45a7-92ee-f0e360776c10,network=Network(7e36da7d-913d-4101-a7c2-e1698abf35be),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap52ec2db5-2e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  9 10:03:23 compute-1 nova_compute[162974]: 2025-10-09 10:03:23.299 2 DEBUG os_vif [None req-696c763a-67b4-4841-9bd5-2a4d47f44092 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:19:24:81,bridge_name='br-int',has_traffic_filtering=True,id=52ec2db5-2e22-45a7-92ee-f0e360776c10,network=Network(7e36da7d-913d-4101-a7c2-e1698abf35be),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap52ec2db5-2e') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  9 10:03:23 compute-1 nova_compute[162974]: 2025-10-09 10:03:23.301 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:03:23 compute-1 nova_compute[162974]: 2025-10-09 10:03:23.301 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap52ec2db5-2e, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  9 10:03:23 compute-1 nova_compute[162974]: 2025-10-09 10:03:23.303 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:03:23 compute-1 nova_compute[162974]: 2025-10-09 10:03:23.304 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  9 10:03:23 compute-1 nova_compute[162974]: 2025-10-09 10:03:23.304 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:03:23 compute-1 nova_compute[162974]: 2025-10-09 10:03:23.306 2 INFO os_vif [None req-696c763a-67b4-4841-9bd5-2a4d47f44092 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:19:24:81,bridge_name='br-int',has_traffic_filtering=True,id=52ec2db5-2e22-45a7-92ee-f0e360776c10,network=Network(7e36da7d-913d-4101-a7c2-e1698abf35be),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap52ec2db5-2e')#033[00m
Oct  9 10:03:23 compute-1 podman[172096]: 2025-10-09 10:03:23.326416122 +0000 UTC m=+0.046259159 container remove 1d8c119dbd0dc9dd6bf423c0fb8248e2c3d999a3cf60f90d9a00ca713357add2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-7e36da7d-913d-4101-a7c2-e1698abf35be, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct  9 10:03:23 compute-1 ovn_metadata_agent[71054]: 2025-10-09 10:03:23.331 165637 DEBUG oslo.privsep.daemon [-] privsep: reply[bf0b7e44-d6a3-4d0b-8ad5-8ba81a9bf66b]: (4, ('Thu Oct  9 10:03:23 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-7e36da7d-913d-4101-a7c2-e1698abf35be (1d8c119dbd0dc9dd6bf423c0fb8248e2c3d999a3cf60f90d9a00ca713357add2)\n1d8c119dbd0dc9dd6bf423c0fb8248e2c3d999a3cf60f90d9a00ca713357add2\nThu Oct  9 10:03:23 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-7e36da7d-913d-4101-a7c2-e1698abf35be (1d8c119dbd0dc9dd6bf423c0fb8248e2c3d999a3cf60f90d9a00ca713357add2)\n1d8c119dbd0dc9dd6bf423c0fb8248e2c3d999a3cf60f90d9a00ca713357add2\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  9 10:03:23 compute-1 ovn_metadata_agent[71054]: 2025-10-09 10:03:23.332 165637 DEBUG oslo.privsep.daemon [-] privsep: reply[e1c02ad1-9724-41ea-8c3a-0633e72384a7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  9 10:03:23 compute-1 ovn_metadata_agent[71054]: 2025-10-09 10:03:23.333 71059 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7e36da7d-90, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  9 10:03:23 compute-1 kernel: tap7e36da7d-90: left promiscuous mode
Oct  9 10:03:23 compute-1 nova_compute[162974]: 2025-10-09 10:03:23.334 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:03:23 compute-1 nova_compute[162974]: 2025-10-09 10:03:23.348 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:03:23 compute-1 nova_compute[162974]: 2025-10-09 10:03:23.349 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:03:23 compute-1 ovn_metadata_agent[71054]: 2025-10-09 10:03:23.353 165637 DEBUG oslo.privsep.daemon [-] privsep: reply[1ede6934-2d05-46f7-a57e-b3a7dfe185a4]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  9 10:03:23 compute-1 ovn_metadata_agent[71054]: 2025-10-09 10:03:23.370 165637 DEBUG oslo.privsep.daemon [-] privsep: reply[732c4fbc-b310-4922-a618-445cffd226de]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  9 10:03:23 compute-1 ovn_metadata_agent[71054]: 2025-10-09 10:03:23.371 165637 DEBUG oslo.privsep.daemon [-] privsep: reply[d1f62697-f7f3-4c22-94c5-0da840efc786]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  9 10:03:23 compute-1 ovn_metadata_agent[71054]: 2025-10-09 10:03:23.386 165637 DEBUG oslo.privsep.daemon [-] privsep: reply[6092d29e-683b-48c6-bc34-8683767dd9e8]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 184434, 'reachable_time': 24754, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 172132, 'error': None, 'target': 'ovnmeta-7e36da7d-913d-4101-a7c2-e1698abf35be', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  9 10:03:23 compute-1 systemd[1]: run-netns-ovnmeta\x2d7e36da7d\x2d913d\x2d4101\x2da7c2\x2de1698abf35be.mount: Deactivated successfully.
Oct  9 10:03:23 compute-1 ovn_metadata_agent[71054]: 2025-10-09 10:03:23.397 71273 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-7e36da7d-913d-4101-a7c2-e1698abf35be deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct  9 10:03:23 compute-1 ovn_metadata_agent[71054]: 2025-10-09 10:03:23.398 71273 DEBUG oslo.privsep.daemon [-] privsep: reply[268f9c8a-916c-4571-b7e7-ede349737ddf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  9 10:03:23 compute-1 ovn_metadata_agent[71054]: 2025-10-09 10:03:23.399 71059 INFO neutron.agent.ovn.metadata.agent [-] Port 52ec2db5-2e22-45a7-92ee-f0e360776c10 in datapath 7e36da7d-913d-4101-a7c2-e1698abf35be unbound from our chassis#033[00m
Oct  9 10:03:23 compute-1 ovn_metadata_agent[71054]: 2025-10-09 10:03:23.400 71059 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 7e36da7d-913d-4101-a7c2-e1698abf35be, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct  9 10:03:23 compute-1 ovn_metadata_agent[71054]: 2025-10-09 10:03:23.400 165637 DEBUG oslo.privsep.daemon [-] privsep: reply[f6fe3a25-6955-4e76-9479-9aabe42e3088]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  9 10:03:23 compute-1 ovn_metadata_agent[71054]: 2025-10-09 10:03:23.401 71059 INFO neutron.agent.ovn.metadata.agent [-] Port 52ec2db5-2e22-45a7-92ee-f0e360776c10 in datapath 7e36da7d-913d-4101-a7c2-e1698abf35be unbound from our chassis#033[00m
Oct  9 10:03:23 compute-1 ovn_metadata_agent[71054]: 2025-10-09 10:03:23.402 71059 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 7e36da7d-913d-4101-a7c2-e1698abf35be, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct  9 10:03:23 compute-1 ovn_metadata_agent[71054]: 2025-10-09 10:03:23.402 165637 DEBUG oslo.privsep.daemon [-] privsep: reply[3c05aa04-ac5f-4933-8958-6730f2bfbb63]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  9 10:03:23 compute-1 nova_compute[162974]: 2025-10-09 10:03:23.459 2 DEBUG nova.compute.manager [req-15875846-1b95-44e3-94b9-b92a0795f112 req-1c286d20-c08b-4e2b-8854-7f895cf89a28 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: 21bbcca2-5cec-4324-9af4-6d2090b6b113] Received event network-vif-unplugged-52ec2db5-2e22-45a7-92ee-f0e360776c10 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  9 10:03:23 compute-1 nova_compute[162974]: 2025-10-09 10:03:23.459 2 DEBUG oslo_concurrency.lockutils [req-15875846-1b95-44e3-94b9-b92a0795f112 req-1c286d20-c08b-4e2b-8854-7f895cf89a28 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] Acquiring lock "21bbcca2-5cec-4324-9af4-6d2090b6b113-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  9 10:03:23 compute-1 nova_compute[162974]: 2025-10-09 10:03:23.459 2 DEBUG oslo_concurrency.lockutils [req-15875846-1b95-44e3-94b9-b92a0795f112 req-1c286d20-c08b-4e2b-8854-7f895cf89a28 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] Lock "21bbcca2-5cec-4324-9af4-6d2090b6b113-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  9 10:03:23 compute-1 nova_compute[162974]: 2025-10-09 10:03:23.460 2 DEBUG oslo_concurrency.lockutils [req-15875846-1b95-44e3-94b9-b92a0795f112 req-1c286d20-c08b-4e2b-8854-7f895cf89a28 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] Lock "21bbcca2-5cec-4324-9af4-6d2090b6b113-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  9 10:03:23 compute-1 nova_compute[162974]: 2025-10-09 10:03:23.460 2 DEBUG nova.compute.manager [req-15875846-1b95-44e3-94b9-b92a0795f112 req-1c286d20-c08b-4e2b-8854-7f895cf89a28 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: 21bbcca2-5cec-4324-9af4-6d2090b6b113] No waiting events found dispatching network-vif-unplugged-52ec2db5-2e22-45a7-92ee-f0e360776c10 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  9 10:03:23 compute-1 nova_compute[162974]: 2025-10-09 10:03:23.460 2 DEBUG nova.compute.manager [req-15875846-1b95-44e3-94b9-b92a0795f112 req-1c286d20-c08b-4e2b-8854-7f895cf89a28 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: 21bbcca2-5cec-4324-9af4-6d2090b6b113] Received event network-vif-unplugged-52ec2db5-2e22-45a7-92ee-f0e360776c10 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct  9 10:03:23 compute-1 nova_compute[162974]: 2025-10-09 10:03:23.491 2 INFO nova.virt.libvirt.driver [None req-696c763a-67b4-4841-9bd5-2a4d47f44092 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: 21bbcca2-5cec-4324-9af4-6d2090b6b113] Deleting instance files /var/lib/nova/instances/21bbcca2-5cec-4324-9af4-6d2090b6b113_del#033[00m
Oct  9 10:03:23 compute-1 nova_compute[162974]: 2025-10-09 10:03:23.492 2 INFO nova.virt.libvirt.driver [None req-696c763a-67b4-4841-9bd5-2a4d47f44092 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: 21bbcca2-5cec-4324-9af4-6d2090b6b113] Deletion of /var/lib/nova/instances/21bbcca2-5cec-4324-9af4-6d2090b6b113_del complete#033[00m
Oct  9 10:03:23 compute-1 nova_compute[162974]: 2025-10-09 10:03:23.526 2 INFO nova.compute.manager [None req-696c763a-67b4-4841-9bd5-2a4d47f44092 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: 21bbcca2-5cec-4324-9af4-6d2090b6b113] Took 0.48 seconds to destroy the instance on the hypervisor.#033[00m
Oct  9 10:03:23 compute-1 nova_compute[162974]: 2025-10-09 10:03:23.526 2 DEBUG oslo.service.loopingcall [None req-696c763a-67b4-4841-9bd5-2a4d47f44092 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct  9 10:03:23 compute-1 nova_compute[162974]: 2025-10-09 10:03:23.527 2 DEBUG nova.compute.manager [-] [instance: 21bbcca2-5cec-4324-9af4-6d2090b6b113] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct  9 10:03:23 compute-1 nova_compute[162974]: 2025-10-09 10:03:23.527 2 DEBUG nova.network.neutron [-] [instance: 21bbcca2-5cec-4324-9af4-6d2090b6b113] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct  9 10:03:24 compute-1 nova_compute[162974]: 2025-10-09 10:03:24.260 2 DEBUG nova.compute.manager [req-c0188d30-b879-4c83-b1c0-caa15892362a req-8a1da8cb-b8ee-4080-b94a-d162d22c3560 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: 21bbcca2-5cec-4324-9af4-6d2090b6b113] Received event network-changed-52ec2db5-2e22-45a7-92ee-f0e360776c10 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  9 10:03:24 compute-1 nova_compute[162974]: 2025-10-09 10:03:24.260 2 DEBUG nova.compute.manager [req-c0188d30-b879-4c83-b1c0-caa15892362a req-8a1da8cb-b8ee-4080-b94a-d162d22c3560 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: 21bbcca2-5cec-4324-9af4-6d2090b6b113] Refreshing instance network info cache due to event network-changed-52ec2db5-2e22-45a7-92ee-f0e360776c10. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  9 10:03:24 compute-1 nova_compute[162974]: 2025-10-09 10:03:24.260 2 DEBUG oslo_concurrency.lockutils [req-c0188d30-b879-4c83-b1c0-caa15892362a req-8a1da8cb-b8ee-4080-b94a-d162d22c3560 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] Acquiring lock "refresh_cache-21bbcca2-5cec-4324-9af4-6d2090b6b113" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  9 10:03:24 compute-1 nova_compute[162974]: 2025-10-09 10:03:24.261 2 DEBUG oslo_concurrency.lockutils [req-c0188d30-b879-4c83-b1c0-caa15892362a req-8a1da8cb-b8ee-4080-b94a-d162d22c3560 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] Acquired lock "refresh_cache-21bbcca2-5cec-4324-9af4-6d2090b6b113" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  9 10:03:24 compute-1 nova_compute[162974]: 2025-10-09 10:03:24.261 2 DEBUG nova.network.neutron [req-c0188d30-b879-4c83-b1c0-caa15892362a req-8a1da8cb-b8ee-4080-b94a-d162d22c3560 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: 21bbcca2-5cec-4324-9af4-6d2090b6b113] Refreshing network info cache for port 52ec2db5-2e22-45a7-92ee-f0e360776c10 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  9 10:03:24 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:03:24 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:03:24 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:03:24.383 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:03:24 compute-1 nova_compute[162974]: 2025-10-09 10:03:24.884 2 DEBUG nova.network.neutron [-] [instance: 21bbcca2-5cec-4324-9af4-6d2090b6b113] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  9 10:03:24 compute-1 nova_compute[162974]: 2025-10-09 10:03:24.896 2 INFO nova.compute.manager [-] [instance: 21bbcca2-5cec-4324-9af4-6d2090b6b113] Took 1.37 seconds to deallocate network for instance.#033[00m
Oct  9 10:03:24 compute-1 nova_compute[162974]: 2025-10-09 10:03:24.930 2 DEBUG oslo_concurrency.lockutils [None req-696c763a-67b4-4841-9bd5-2a4d47f44092 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  9 10:03:24 compute-1 nova_compute[162974]: 2025-10-09 10:03:24.930 2 DEBUG oslo_concurrency.lockutils [None req-696c763a-67b4-4841-9bd5-2a4d47f44092 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  9 10:03:24 compute-1 nova_compute[162974]: 2025-10-09 10:03:24.981 2 DEBUG oslo_concurrency.processutils [None req-696c763a-67b4-4841-9bd5-2a4d47f44092 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  9 10:03:25 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:03:25 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:03:25 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:03:25.200 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:03:25 compute-1 ceph-mon[9795]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct  9 10:03:25 compute-1 ceph-mon[9795]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2459753808' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  9 10:03:25 compute-1 nova_compute[162974]: 2025-10-09 10:03:25.330 2 DEBUG oslo_concurrency.processutils [None req-696c763a-67b4-4841-9bd5-2a4d47f44092 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.350s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  9 10:03:25 compute-1 nova_compute[162974]: 2025-10-09 10:03:25.335 2 DEBUG nova.compute.provider_tree [None req-696c763a-67b4-4841-9bd5-2a4d47f44092 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Inventory has not changed in ProviderTree for provider: 79aa81b0-5a5d-4643-a355-ec5461cb321a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  9 10:03:25 compute-1 nova_compute[162974]: 2025-10-09 10:03:25.346 2 DEBUG nova.scheduler.client.report [None req-696c763a-67b4-4841-9bd5-2a4d47f44092 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Inventory has not changed for provider 79aa81b0-5a5d-4643-a355-ec5461cb321a based on inventory data: {'VCPU': {'total': 4, 'reserved': 0, 'min_unit': 1, 'max_unit': 4, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  9 10:03:25 compute-1 nova_compute[162974]: 2025-10-09 10:03:25.367 2 DEBUG oslo_concurrency.lockutils [None req-696c763a-67b4-4841-9bd5-2a4d47f44092 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.436s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  9 10:03:25 compute-1 nova_compute[162974]: 2025-10-09 10:03:25.393 2 INFO nova.scheduler.client.report [None req-696c763a-67b4-4841-9bd5-2a4d47f44092 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Deleted allocations for instance 21bbcca2-5cec-4324-9af4-6d2090b6b113#033[00m
Oct  9 10:03:25 compute-1 nova_compute[162974]: 2025-10-09 10:03:25.445 2 DEBUG oslo_concurrency.lockutils [None req-696c763a-67b4-4841-9bd5-2a4d47f44092 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Lock "21bbcca2-5cec-4324-9af4-6d2090b6b113" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.403s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  9 10:03:25 compute-1 nova_compute[162974]: 2025-10-09 10:03:25.503 2 DEBUG nova.network.neutron [req-c0188d30-b879-4c83-b1c0-caa15892362a req-8a1da8cb-b8ee-4080-b94a-d162d22c3560 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: 21bbcca2-5cec-4324-9af4-6d2090b6b113] Updated VIF entry in instance network info cache for port 52ec2db5-2e22-45a7-92ee-f0e360776c10. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  9 10:03:25 compute-1 nova_compute[162974]: 2025-10-09 10:03:25.504 2 DEBUG nova.network.neutron [req-c0188d30-b879-4c83-b1c0-caa15892362a req-8a1da8cb-b8ee-4080-b94a-d162d22c3560 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: 21bbcca2-5cec-4324-9af4-6d2090b6b113] Updating instance_info_cache with network_info: [{"id": "52ec2db5-2e22-45a7-92ee-f0e360776c10", "address": "fa:16:3e:19:24:81", "network": {"id": "7e36da7d-913d-4101-a7c2-e1698abf35be", "bridge": "br-int", "label": "tempest-network-smoke--21347962", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c69d102fb5504f48809f5fc47f1cb831", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap52ec2db5-2e", "ovs_interfaceid": "52ec2db5-2e22-45a7-92ee-f0e360776c10", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  9 10:03:25 compute-1 nova_compute[162974]: 2025-10-09 10:03:25.521 2 DEBUG oslo_concurrency.lockutils [req-c0188d30-b879-4c83-b1c0-caa15892362a req-8a1da8cb-b8ee-4080-b94a-d162d22c3560 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] Releasing lock "refresh_cache-21bbcca2-5cec-4324-9af4-6d2090b6b113" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  9 10:03:25 compute-1 nova_compute[162974]: 2025-10-09 10:03:25.548 2 DEBUG nova.compute.manager [req-16dab8ea-d6b6-4e55-933a-ee07e2268f37 req-e4d27751-8415-4a9f-8bf6-45383bbd0058 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: 21bbcca2-5cec-4324-9af4-6d2090b6b113] Received event network-vif-plugged-52ec2db5-2e22-45a7-92ee-f0e360776c10 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  9 10:03:25 compute-1 nova_compute[162974]: 2025-10-09 10:03:25.548 2 DEBUG oslo_concurrency.lockutils [req-16dab8ea-d6b6-4e55-933a-ee07e2268f37 req-e4d27751-8415-4a9f-8bf6-45383bbd0058 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] Acquiring lock "21bbcca2-5cec-4324-9af4-6d2090b6b113-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  9 10:03:25 compute-1 nova_compute[162974]: 2025-10-09 10:03:25.548 2 DEBUG oslo_concurrency.lockutils [req-16dab8ea-d6b6-4e55-933a-ee07e2268f37 req-e4d27751-8415-4a9f-8bf6-45383bbd0058 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] Lock "21bbcca2-5cec-4324-9af4-6d2090b6b113-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  9 10:03:25 compute-1 nova_compute[162974]: 2025-10-09 10:03:25.549 2 DEBUG oslo_concurrency.lockutils [req-16dab8ea-d6b6-4e55-933a-ee07e2268f37 req-e4d27751-8415-4a9f-8bf6-45383bbd0058 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] Lock "21bbcca2-5cec-4324-9af4-6d2090b6b113-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  9 10:03:25 compute-1 nova_compute[162974]: 2025-10-09 10:03:25.549 2 DEBUG nova.compute.manager [req-16dab8ea-d6b6-4e55-933a-ee07e2268f37 req-e4d27751-8415-4a9f-8bf6-45383bbd0058 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: 21bbcca2-5cec-4324-9af4-6d2090b6b113] No waiting events found dispatching network-vif-plugged-52ec2db5-2e22-45a7-92ee-f0e360776c10 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  9 10:03:25 compute-1 nova_compute[162974]: 2025-10-09 10:03:25.549 2 WARNING nova.compute.manager [req-16dab8ea-d6b6-4e55-933a-ee07e2268f37 req-e4d27751-8415-4a9f-8bf6-45383bbd0058 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: 21bbcca2-5cec-4324-9af4-6d2090b6b113] Received unexpected event network-vif-plugged-52ec2db5-2e22-45a7-92ee-f0e360776c10 for instance with vm_state deleted and task_state None.#033[00m
Oct  9 10:03:25 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 10:03:26 compute-1 nova_compute[162974]: 2025-10-09 10:03:26.325 2 DEBUG nova.compute.manager [req-6afb7f98-58f2-467d-a4ed-b494b6a1bdf3 req-16ea68e0-622d-41f5-9c3f-61b1876903e0 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: 21bbcca2-5cec-4324-9af4-6d2090b6b113] Received event network-vif-deleted-52ec2db5-2e22-45a7-92ee-f0e360776c10 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  9 10:03:26 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:03:26 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000009s ======
Oct  9 10:03:26 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:03:26.385 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Oct  9 10:03:27 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:03:27 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:03:27 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:03:27.203 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:03:27 compute-1 podman[172159]: 2025-10-09 10:03:27.54224928 +0000 UTC m=+0.053141413 container health_status dd7a5da700b8ba72ceba21d69aecf00384a339e317089fce3f8212a1183599aa (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, config_id=iscsid, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  9 10:03:27 compute-1 nova_compute[162974]: 2025-10-09 10:03:27.739 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:03:27 compute-1 nova_compute[162974]: 2025-10-09 10:03:27.818 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:03:27 compute-1 nova_compute[162974]: 2025-10-09 10:03:27.823 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:03:28 compute-1 nova_compute[162974]: 2025-10-09 10:03:28.304 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:03:28 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:03:28 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:03:28 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:03:28.389 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:03:29 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:03:29 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:03:29 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:03:29.207 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:03:30 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:03:30 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:03:30 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:03:30.391 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:03:30 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 10:03:31 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:03:31 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:03:31 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:03:31.209 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:03:32 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:03:32 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:03:32 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:03:32.395 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:03:32 compute-1 nova_compute[162974]: 2025-10-09 10:03:32.825 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:03:33 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:03:33 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 10:03:33 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:03:33.212 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 10:03:33 compute-1 nova_compute[162974]: 2025-10-09 10:03:33.305 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:03:34 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:03:34 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:03:34 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:03:34.398 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:03:35 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:03:35 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000011s ======
Oct  9 10:03:35 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:03:35.214 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct  9 10:03:35 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 10:03:36 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:03:36 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:03:36 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:03:36.401 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:03:36 compute-1 podman[172208]: 2025-10-09 10:03:36.540611617 +0000 UTC m=+0.044342967 container health_status a9976f6a2312783f72012e1ec9b07f14297aa62297711f47c0d257996be3427a (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=multipathd, org.label-schema.build-date=20251001, container_name=multipathd, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true)
Oct  9 10:03:36 compute-1 podman[172207]: 2025-10-09 10:03:36.569385402 +0000 UTC m=+0.073997523 container health_status 5e1150c93314d5b38815fc8130e03b03cc91771cd442ce724d63cd33a7373f75 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2)
Oct  9 10:03:37 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:03:37 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:03:37 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:03:37.218 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:03:37 compute-1 nova_compute[162974]: 2025-10-09 10:03:37.824 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:03:38 compute-1 nova_compute[162974]: 2025-10-09 10:03:38.280 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760004203.2788818, 21bbcca2-5cec-4324-9af4-6d2090b6b113 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  9 10:03:38 compute-1 nova_compute[162974]: 2025-10-09 10:03:38.280 2 INFO nova.compute.manager [-] [instance: 21bbcca2-5cec-4324-9af4-6d2090b6b113] VM Stopped (Lifecycle Event)#033[00m
Oct  9 10:03:38 compute-1 nova_compute[162974]: 2025-10-09 10:03:38.294 2 DEBUG nova.compute.manager [None req-2bdbf53d-9a8c-4257-91b8-dc4eedd773fc - - - - - -] [instance: 21bbcca2-5cec-4324-9af4-6d2090b6b113] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  9 10:03:38 compute-1 nova_compute[162974]: 2025-10-09 10:03:38.307 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:03:38 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:03:38 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 10:03:38 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:03:38.404 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 10:03:39 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:03:39 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:03:39 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:03:39.220 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:03:39 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  9 10:03:39 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 10:03:39 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 10:03:39 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  9 10:03:40 compute-1 nova_compute[162974]: 2025-10-09 10:03:40.166 2 DEBUG oslo_concurrency.lockutils [None req-41164a7a-99bb-4324-8ad5-28627246f8c8 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Acquiring lock "29f00e1c-dcdd-4a28-b141-a900eb34b836" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  9 10:03:40 compute-1 nova_compute[162974]: 2025-10-09 10:03:40.166 2 DEBUG oslo_concurrency.lockutils [None req-41164a7a-99bb-4324-8ad5-28627246f8c8 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Lock "29f00e1c-dcdd-4a28-b141-a900eb34b836" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  9 10:03:40 compute-1 nova_compute[162974]: 2025-10-09 10:03:40.177 2 DEBUG nova.compute.manager [None req-41164a7a-99bb-4324-8ad5-28627246f8c8 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: 29f00e1c-dcdd-4a28-b141-a900eb34b836] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct  9 10:03:40 compute-1 nova_compute[162974]: 2025-10-09 10:03:40.228 2 DEBUG oslo_concurrency.lockutils [None req-41164a7a-99bb-4324-8ad5-28627246f8c8 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  9 10:03:40 compute-1 nova_compute[162974]: 2025-10-09 10:03:40.228 2 DEBUG oslo_concurrency.lockutils [None req-41164a7a-99bb-4324-8ad5-28627246f8c8 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  9 10:03:40 compute-1 nova_compute[162974]: 2025-10-09 10:03:40.232 2 DEBUG nova.virt.hardware [None req-41164a7a-99bb-4324-8ad5-28627246f8c8 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct  9 10:03:40 compute-1 nova_compute[162974]: 2025-10-09 10:03:40.232 2 INFO nova.compute.claims [None req-41164a7a-99bb-4324-8ad5-28627246f8c8 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: 29f00e1c-dcdd-4a28-b141-a900eb34b836] Claim successful on node compute-1.ctlplane.example.com#033[00m
Oct  9 10:03:40 compute-1 nova_compute[162974]: 2025-10-09 10:03:40.297 2 DEBUG oslo_concurrency.processutils [None req-41164a7a-99bb-4324-8ad5-28627246f8c8 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  9 10:03:40 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:03:40 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:03:40 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:03:40.406 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:03:40 compute-1 ceph-mon[9795]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct  9 10:03:40 compute-1 ceph-mon[9795]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/4048734863' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  9 10:03:40 compute-1 nova_compute[162974]: 2025-10-09 10:03:40.670 2 DEBUG oslo_concurrency.processutils [None req-41164a7a-99bb-4324-8ad5-28627246f8c8 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.373s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  9 10:03:40 compute-1 nova_compute[162974]: 2025-10-09 10:03:40.674 2 DEBUG nova.compute.provider_tree [None req-41164a7a-99bb-4324-8ad5-28627246f8c8 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Inventory has not changed in ProviderTree for provider: 79aa81b0-5a5d-4643-a355-ec5461cb321a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  9 10:03:40 compute-1 nova_compute[162974]: 2025-10-09 10:03:40.684 2 DEBUG nova.scheduler.client.report [None req-41164a7a-99bb-4324-8ad5-28627246f8c8 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Inventory has not changed for provider 79aa81b0-5a5d-4643-a355-ec5461cb321a based on inventory data: {'VCPU': {'total': 4, 'reserved': 0, 'min_unit': 1, 'max_unit': 4, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  9 10:03:40 compute-1 nova_compute[162974]: 2025-10-09 10:03:40.700 2 DEBUG oslo_concurrency.lockutils [None req-41164a7a-99bb-4324-8ad5-28627246f8c8 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.472s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  9 10:03:40 compute-1 nova_compute[162974]: 2025-10-09 10:03:40.700 2 DEBUG nova.compute.manager [None req-41164a7a-99bb-4324-8ad5-28627246f8c8 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: 29f00e1c-dcdd-4a28-b141-a900eb34b836] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct  9 10:03:40 compute-1 nova_compute[162974]: 2025-10-09 10:03:40.729 2 DEBUG nova.compute.manager [None req-41164a7a-99bb-4324-8ad5-28627246f8c8 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: 29f00e1c-dcdd-4a28-b141-a900eb34b836] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct  9 10:03:40 compute-1 nova_compute[162974]: 2025-10-09 10:03:40.729 2 DEBUG nova.network.neutron [None req-41164a7a-99bb-4324-8ad5-28627246f8c8 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: 29f00e1c-dcdd-4a28-b141-a900eb34b836] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct  9 10:03:40 compute-1 nova_compute[162974]: 2025-10-09 10:03:40.742 2 INFO nova.virt.libvirt.driver [None req-41164a7a-99bb-4324-8ad5-28627246f8c8 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: 29f00e1c-dcdd-4a28-b141-a900eb34b836] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct  9 10:03:40 compute-1 nova_compute[162974]: 2025-10-09 10:03:40.756 2 DEBUG nova.compute.manager [None req-41164a7a-99bb-4324-8ad5-28627246f8c8 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: 29f00e1c-dcdd-4a28-b141-a900eb34b836] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct  9 10:03:40 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 10:03:40 compute-1 nova_compute[162974]: 2025-10-09 10:03:40.908 2 DEBUG nova.policy [None req-41164a7a-99bb-4324-8ad5-28627246f8c8 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '2351e05157514d1995a1ea4151d12fee', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'c69d102fb5504f48809f5fc47f1cb831', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct  9 10:03:40 compute-1 nova_compute[162974]: 2025-10-09 10:03:40.912 2 DEBUG nova.compute.manager [None req-41164a7a-99bb-4324-8ad5-28627246f8c8 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: 29f00e1c-dcdd-4a28-b141-a900eb34b836] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct  9 10:03:40 compute-1 nova_compute[162974]: 2025-10-09 10:03:40.913 2 DEBUG nova.virt.libvirt.driver [None req-41164a7a-99bb-4324-8ad5-28627246f8c8 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: 29f00e1c-dcdd-4a28-b141-a900eb34b836] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct  9 10:03:40 compute-1 nova_compute[162974]: 2025-10-09 10:03:40.913 2 INFO nova.virt.libvirt.driver [None req-41164a7a-99bb-4324-8ad5-28627246f8c8 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: 29f00e1c-dcdd-4a28-b141-a900eb34b836] Creating image(s)#033[00m
Oct  9 10:03:40 compute-1 nova_compute[162974]: 2025-10-09 10:03:40.939 2 DEBUG nova.storage.rbd_utils [None req-41164a7a-99bb-4324-8ad5-28627246f8c8 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] rbd image 29f00e1c-dcdd-4a28-b141-a900eb34b836_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  9 10:03:40 compute-1 nova_compute[162974]: 2025-10-09 10:03:40.958 2 DEBUG nova.storage.rbd_utils [None req-41164a7a-99bb-4324-8ad5-28627246f8c8 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] rbd image 29f00e1c-dcdd-4a28-b141-a900eb34b836_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  9 10:03:40 compute-1 nova_compute[162974]: 2025-10-09 10:03:40.976 2 DEBUG nova.storage.rbd_utils [None req-41164a7a-99bb-4324-8ad5-28627246f8c8 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] rbd image 29f00e1c-dcdd-4a28-b141-a900eb34b836_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  9 10:03:40 compute-1 nova_compute[162974]: 2025-10-09 10:03:40.979 2 DEBUG oslo_concurrency.processutils [None req-41164a7a-99bb-4324-8ad5-28627246f8c8 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/5c8d02c7691a8289e33d8b283b22550ff081dadb --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  9 10:03:41 compute-1 nova_compute[162974]: 2025-10-09 10:03:41.024 2 DEBUG oslo_concurrency.processutils [None req-41164a7a-99bb-4324-8ad5-28627246f8c8 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/5c8d02c7691a8289e33d8b283b22550ff081dadb --force-share --output=json" returned: 0 in 0.045s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  9 10:03:41 compute-1 nova_compute[162974]: 2025-10-09 10:03:41.025 2 DEBUG oslo_concurrency.lockutils [None req-41164a7a-99bb-4324-8ad5-28627246f8c8 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Acquiring lock "5c8d02c7691a8289e33d8b283b22550ff081dadb" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  9 10:03:41 compute-1 nova_compute[162974]: 2025-10-09 10:03:41.026 2 DEBUG oslo_concurrency.lockutils [None req-41164a7a-99bb-4324-8ad5-28627246f8c8 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Lock "5c8d02c7691a8289e33d8b283b22550ff081dadb" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  9 10:03:41 compute-1 nova_compute[162974]: 2025-10-09 10:03:41.026 2 DEBUG oslo_concurrency.lockutils [None req-41164a7a-99bb-4324-8ad5-28627246f8c8 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Lock "5c8d02c7691a8289e33d8b283b22550ff081dadb" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  9 10:03:41 compute-1 nova_compute[162974]: 2025-10-09 10:03:41.045 2 DEBUG nova.storage.rbd_utils [None req-41164a7a-99bb-4324-8ad5-28627246f8c8 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] rbd image 29f00e1c-dcdd-4a28-b141-a900eb34b836_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  9 10:03:41 compute-1 nova_compute[162974]: 2025-10-09 10:03:41.047 2 DEBUG oslo_concurrency.processutils [None req-41164a7a-99bb-4324-8ad5-28627246f8c8 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/5c8d02c7691a8289e33d8b283b22550ff081dadb 29f00e1c-dcdd-4a28-b141-a900eb34b836_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  9 10:03:41 compute-1 nova_compute[162974]: 2025-10-09 10:03:41.195 2 DEBUG oslo_concurrency.processutils [None req-41164a7a-99bb-4324-8ad5-28627246f8c8 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/5c8d02c7691a8289e33d8b283b22550ff081dadb 29f00e1c-dcdd-4a28-b141-a900eb34b836_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.147s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  9 10:03:41 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:03:41 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:03:41 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:03:41.223 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:03:41 compute-1 nova_compute[162974]: 2025-10-09 10:03:41.254 2 DEBUG nova.storage.rbd_utils [None req-41164a7a-99bb-4324-8ad5-28627246f8c8 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] resizing rbd image 29f00e1c-dcdd-4a28-b141-a900eb34b836_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct  9 10:03:41 compute-1 nova_compute[162974]: 2025-10-09 10:03:41.329 2 DEBUG nova.objects.instance [None req-41164a7a-99bb-4324-8ad5-28627246f8c8 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Lazy-loading 'migration_context' on Instance uuid 29f00e1c-dcdd-4a28-b141-a900eb34b836 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  9 10:03:41 compute-1 nova_compute[162974]: 2025-10-09 10:03:41.339 2 DEBUG nova.virt.libvirt.driver [None req-41164a7a-99bb-4324-8ad5-28627246f8c8 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: 29f00e1c-dcdd-4a28-b141-a900eb34b836] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct  9 10:03:41 compute-1 nova_compute[162974]: 2025-10-09 10:03:41.340 2 DEBUG nova.virt.libvirt.driver [None req-41164a7a-99bb-4324-8ad5-28627246f8c8 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: 29f00e1c-dcdd-4a28-b141-a900eb34b836] Ensure instance console log exists: /var/lib/nova/instances/29f00e1c-dcdd-4a28-b141-a900eb34b836/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct  9 10:03:41 compute-1 nova_compute[162974]: 2025-10-09 10:03:41.340 2 DEBUG oslo_concurrency.lockutils [None req-41164a7a-99bb-4324-8ad5-28627246f8c8 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  9 10:03:41 compute-1 nova_compute[162974]: 2025-10-09 10:03:41.340 2 DEBUG oslo_concurrency.lockutils [None req-41164a7a-99bb-4324-8ad5-28627246f8c8 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  9 10:03:41 compute-1 nova_compute[162974]: 2025-10-09 10:03:41.340 2 DEBUG oslo_concurrency.lockutils [None req-41164a7a-99bb-4324-8ad5-28627246f8c8 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  9 10:03:41 compute-1 nova_compute[162974]: 2025-10-09 10:03:41.351 2 DEBUG nova.network.neutron [None req-41164a7a-99bb-4324-8ad5-28627246f8c8 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: 29f00e1c-dcdd-4a28-b141-a900eb34b836] Successfully created port: a450260b-c4da-4f56-bf08-713a5ccc3d0e _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct  9 10:03:42 compute-1 nova_compute[162974]: 2025-10-09 10:03:42.192 2 DEBUG nova.network.neutron [None req-41164a7a-99bb-4324-8ad5-28627246f8c8 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: 29f00e1c-dcdd-4a28-b141-a900eb34b836] Successfully updated port: a450260b-c4da-4f56-bf08-713a5ccc3d0e _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct  9 10:03:42 compute-1 nova_compute[162974]: 2025-10-09 10:03:42.212 2 DEBUG oslo_concurrency.lockutils [None req-41164a7a-99bb-4324-8ad5-28627246f8c8 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Acquiring lock "refresh_cache-29f00e1c-dcdd-4a28-b141-a900eb34b836" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  9 10:03:42 compute-1 nova_compute[162974]: 2025-10-09 10:03:42.212 2 DEBUG oslo_concurrency.lockutils [None req-41164a7a-99bb-4324-8ad5-28627246f8c8 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Acquired lock "refresh_cache-29f00e1c-dcdd-4a28-b141-a900eb34b836" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  9 10:03:42 compute-1 nova_compute[162974]: 2025-10-09 10:03:42.212 2 DEBUG nova.network.neutron [None req-41164a7a-99bb-4324-8ad5-28627246f8c8 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: 29f00e1c-dcdd-4a28-b141-a900eb34b836] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  9 10:03:42 compute-1 nova_compute[162974]: 2025-10-09 10:03:42.264 2 DEBUG nova.compute.manager [req-61f4d1c2-0f77-4efb-855d-9beed646f556 req-377c49bf-6163-49b3-8a94-d9187450ee86 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: 29f00e1c-dcdd-4a28-b141-a900eb34b836] Received event network-changed-a450260b-c4da-4f56-bf08-713a5ccc3d0e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  9 10:03:42 compute-1 nova_compute[162974]: 2025-10-09 10:03:42.264 2 DEBUG nova.compute.manager [req-61f4d1c2-0f77-4efb-855d-9beed646f556 req-377c49bf-6163-49b3-8a94-d9187450ee86 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: 29f00e1c-dcdd-4a28-b141-a900eb34b836] Refreshing instance network info cache due to event network-changed-a450260b-c4da-4f56-bf08-713a5ccc3d0e. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  9 10:03:42 compute-1 nova_compute[162974]: 2025-10-09 10:03:42.264 2 DEBUG oslo_concurrency.lockutils [req-61f4d1c2-0f77-4efb-855d-9beed646f556 req-377c49bf-6163-49b3-8a94-d9187450ee86 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] Acquiring lock "refresh_cache-29f00e1c-dcdd-4a28-b141-a900eb34b836" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  9 10:03:42 compute-1 nova_compute[162974]: 2025-10-09 10:03:42.328 2 DEBUG nova.network.neutron [None req-41164a7a-99bb-4324-8ad5-28627246f8c8 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: 29f00e1c-dcdd-4a28-b141-a900eb34b836] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct  9 10:03:42 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:03:42 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:03:42 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:03:42.409 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:03:42 compute-1 nova_compute[162974]: 2025-10-09 10:03:42.725 2 DEBUG nova.network.neutron [None req-41164a7a-99bb-4324-8ad5-28627246f8c8 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: 29f00e1c-dcdd-4a28-b141-a900eb34b836] Updating instance_info_cache with network_info: [{"id": "a450260b-c4da-4f56-bf08-713a5ccc3d0e", "address": "fa:16:3e:ac:02:fe", "network": {"id": "a733533a-76c7-46e6-89e3-803597fe93b6", "bridge": "br-int", "label": "tempest-network-smoke--384651779", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c69d102fb5504f48809f5fc47f1cb831", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa450260b-c4", "ovs_interfaceid": "a450260b-c4da-4f56-bf08-713a5ccc3d0e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  9 10:03:42 compute-1 nova_compute[162974]: 2025-10-09 10:03:42.741 2 DEBUG oslo_concurrency.lockutils [None req-41164a7a-99bb-4324-8ad5-28627246f8c8 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Releasing lock "refresh_cache-29f00e1c-dcdd-4a28-b141-a900eb34b836" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  9 10:03:42 compute-1 nova_compute[162974]: 2025-10-09 10:03:42.741 2 DEBUG nova.compute.manager [None req-41164a7a-99bb-4324-8ad5-28627246f8c8 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: 29f00e1c-dcdd-4a28-b141-a900eb34b836] Instance network_info: |[{"id": "a450260b-c4da-4f56-bf08-713a5ccc3d0e", "address": "fa:16:3e:ac:02:fe", "network": {"id": "a733533a-76c7-46e6-89e3-803597fe93b6", "bridge": "br-int", "label": "tempest-network-smoke--384651779", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c69d102fb5504f48809f5fc47f1cb831", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa450260b-c4", "ovs_interfaceid": "a450260b-c4da-4f56-bf08-713a5ccc3d0e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct  9 10:03:42 compute-1 nova_compute[162974]: 2025-10-09 10:03:42.742 2 DEBUG oslo_concurrency.lockutils [req-61f4d1c2-0f77-4efb-855d-9beed646f556 req-377c49bf-6163-49b3-8a94-d9187450ee86 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] Acquired lock "refresh_cache-29f00e1c-dcdd-4a28-b141-a900eb34b836" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  9 10:03:42 compute-1 nova_compute[162974]: 2025-10-09 10:03:42.742 2 DEBUG nova.network.neutron [req-61f4d1c2-0f77-4efb-855d-9beed646f556 req-377c49bf-6163-49b3-8a94-d9187450ee86 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: 29f00e1c-dcdd-4a28-b141-a900eb34b836] Refreshing network info cache for port a450260b-c4da-4f56-bf08-713a5ccc3d0e _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  9 10:03:42 compute-1 nova_compute[162974]: 2025-10-09 10:03:42.744 2 DEBUG nova.virt.libvirt.driver [None req-41164a7a-99bb-4324-8ad5-28627246f8c8 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: 29f00e1c-dcdd-4a28-b141-a900eb34b836] Start _get_guest_xml network_info=[{"id": "a450260b-c4da-4f56-bf08-713a5ccc3d0e", "address": "fa:16:3e:ac:02:fe", "network": {"id": "a733533a-76c7-46e6-89e3-803597fe93b6", "bridge": "br-int", "label": "tempest-network-smoke--384651779", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c69d102fb5504f48809f5fc47f1cb831", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa450260b-c4", "ovs_interfaceid": "a450260b-c4da-4f56-bf08-713a5ccc3d0e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-09T09:54:31Z,direct_url=<?>,disk_format='qcow2',id=9546778e-959c-466e-9bef-81ace5bd1cc5,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='a53d5690b6a54109990182326650a2b8',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-09T09:54:34Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_options': None, 'size': 0, 'device_type': 'disk', 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'guest_format': None, 'boot_index': 0, 'device_name': '/dev/vda', 'encrypted': False, 'encryption_format': None, 'image_id': '9546778e-959c-466e-9bef-81ace5bd1cc5'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct  9 10:03:42 compute-1 nova_compute[162974]: 2025-10-09 10:03:42.747 2 WARNING nova.virt.libvirt.driver [None req-41164a7a-99bb-4324-8ad5-28627246f8c8 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  9 10:03:42 compute-1 nova_compute[162974]: 2025-10-09 10:03:42.753 2 DEBUG nova.virt.libvirt.host [None req-41164a7a-99bb-4324-8ad5-28627246f8c8 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct  9 10:03:42 compute-1 nova_compute[162974]: 2025-10-09 10:03:42.753 2 DEBUG nova.virt.libvirt.host [None req-41164a7a-99bb-4324-8ad5-28627246f8c8 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct  9 10:03:42 compute-1 nova_compute[162974]: 2025-10-09 10:03:42.756 2 DEBUG nova.virt.libvirt.host [None req-41164a7a-99bb-4324-8ad5-28627246f8c8 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct  9 10:03:42 compute-1 nova_compute[162974]: 2025-10-09 10:03:42.756 2 DEBUG nova.virt.libvirt.host [None req-41164a7a-99bb-4324-8ad5-28627246f8c8 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct  9 10:03:42 compute-1 nova_compute[162974]: 2025-10-09 10:03:42.757 2 DEBUG nova.virt.libvirt.driver [None req-41164a7a-99bb-4324-8ad5-28627246f8c8 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct  9 10:03:42 compute-1 nova_compute[162974]: 2025-10-09 10:03:42.757 2 DEBUG nova.virt.hardware [None req-41164a7a-99bb-4324-8ad5-28627246f8c8 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-09T09:54:30Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='6c4b2ce4-c9d2-467c-bac4-dc6a1184a891',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-09T09:54:31Z,direct_url=<?>,disk_format='qcow2',id=9546778e-959c-466e-9bef-81ace5bd1cc5,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='a53d5690b6a54109990182326650a2b8',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-09T09:54:34Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct  9 10:03:42 compute-1 nova_compute[162974]: 2025-10-09 10:03:42.757 2 DEBUG nova.virt.hardware [None req-41164a7a-99bb-4324-8ad5-28627246f8c8 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct  9 10:03:42 compute-1 nova_compute[162974]: 2025-10-09 10:03:42.758 2 DEBUG nova.virt.hardware [None req-41164a7a-99bb-4324-8ad5-28627246f8c8 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct  9 10:03:42 compute-1 nova_compute[162974]: 2025-10-09 10:03:42.758 2 DEBUG nova.virt.hardware [None req-41164a7a-99bb-4324-8ad5-28627246f8c8 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct  9 10:03:42 compute-1 nova_compute[162974]: 2025-10-09 10:03:42.758 2 DEBUG nova.virt.hardware [None req-41164a7a-99bb-4324-8ad5-28627246f8c8 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct  9 10:03:42 compute-1 nova_compute[162974]: 2025-10-09 10:03:42.758 2 DEBUG nova.virt.hardware [None req-41164a7a-99bb-4324-8ad5-28627246f8c8 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct  9 10:03:42 compute-1 nova_compute[162974]: 2025-10-09 10:03:42.758 2 DEBUG nova.virt.hardware [None req-41164a7a-99bb-4324-8ad5-28627246f8c8 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct  9 10:03:42 compute-1 nova_compute[162974]: 2025-10-09 10:03:42.759 2 DEBUG nova.virt.hardware [None req-41164a7a-99bb-4324-8ad5-28627246f8c8 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct  9 10:03:42 compute-1 nova_compute[162974]: 2025-10-09 10:03:42.759 2 DEBUG nova.virt.hardware [None req-41164a7a-99bb-4324-8ad5-28627246f8c8 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct  9 10:03:42 compute-1 nova_compute[162974]: 2025-10-09 10:03:42.759 2 DEBUG nova.virt.hardware [None req-41164a7a-99bb-4324-8ad5-28627246f8c8 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct  9 10:03:42 compute-1 nova_compute[162974]: 2025-10-09 10:03:42.759 2 DEBUG nova.virt.hardware [None req-41164a7a-99bb-4324-8ad5-28627246f8c8 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct  9 10:03:42 compute-1 nova_compute[162974]: 2025-10-09 10:03:42.761 2 DEBUG oslo_concurrency.processutils [None req-41164a7a-99bb-4324-8ad5-28627246f8c8 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  9 10:03:42 compute-1 nova_compute[162974]: 2025-10-09 10:03:42.826 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:03:43 compute-1 ceph-mon[9795]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Oct  9 10:03:43 compute-1 ceph-mon[9795]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/4018809995' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  9 10:03:43 compute-1 nova_compute[162974]: 2025-10-09 10:03:43.108 2 DEBUG oslo_concurrency.processutils [None req-41164a7a-99bb-4324-8ad5-28627246f8c8 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.347s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  9 10:03:43 compute-1 nova_compute[162974]: 2025-10-09 10:03:43.126 2 DEBUG nova.storage.rbd_utils [None req-41164a7a-99bb-4324-8ad5-28627246f8c8 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] rbd image 29f00e1c-dcdd-4a28-b141-a900eb34b836_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  9 10:03:43 compute-1 nova_compute[162974]: 2025-10-09 10:03:43.129 2 DEBUG oslo_concurrency.processutils [None req-41164a7a-99bb-4324-8ad5-28627246f8c8 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  9 10:03:43 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:03:43 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:03:43 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:03:43.225 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:03:43 compute-1 nova_compute[162974]: 2025-10-09 10:03:43.309 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:03:43 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 10:03:43 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 10:03:43 compute-1 nova_compute[162974]: 2025-10-09 10:03:43.459 2 DEBUG nova.network.neutron [req-61f4d1c2-0f77-4efb-855d-9beed646f556 req-377c49bf-6163-49b3-8a94-d9187450ee86 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: 29f00e1c-dcdd-4a28-b141-a900eb34b836] Updated VIF entry in instance network info cache for port a450260b-c4da-4f56-bf08-713a5ccc3d0e. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  9 10:03:43 compute-1 nova_compute[162974]: 2025-10-09 10:03:43.460 2 DEBUG nova.network.neutron [req-61f4d1c2-0f77-4efb-855d-9beed646f556 req-377c49bf-6163-49b3-8a94-d9187450ee86 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: 29f00e1c-dcdd-4a28-b141-a900eb34b836] Updating instance_info_cache with network_info: [{"id": "a450260b-c4da-4f56-bf08-713a5ccc3d0e", "address": "fa:16:3e:ac:02:fe", "network": {"id": "a733533a-76c7-46e6-89e3-803597fe93b6", "bridge": "br-int", "label": "tempest-network-smoke--384651779", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c69d102fb5504f48809f5fc47f1cb831", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa450260b-c4", "ovs_interfaceid": "a450260b-c4da-4f56-bf08-713a5ccc3d0e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  9 10:03:43 compute-1 ceph-mon[9795]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Oct  9 10:03:43 compute-1 ceph-mon[9795]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/4110228176' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  9 10:03:43 compute-1 nova_compute[162974]: 2025-10-09 10:03:43.470 2 DEBUG oslo_concurrency.lockutils [req-61f4d1c2-0f77-4efb-855d-9beed646f556 req-377c49bf-6163-49b3-8a94-d9187450ee86 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] Releasing lock "refresh_cache-29f00e1c-dcdd-4a28-b141-a900eb34b836" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  9 10:03:43 compute-1 nova_compute[162974]: 2025-10-09 10:03:43.477 2 DEBUG oslo_concurrency.processutils [None req-41164a7a-99bb-4324-8ad5-28627246f8c8 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.349s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  9 10:03:43 compute-1 nova_compute[162974]: 2025-10-09 10:03:43.478 2 DEBUG nova.virt.libvirt.vif [None req-41164a7a-99bb-4324-8ad5-28627246f8c8 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-09T10:03:39Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-697662314',display_name='tempest-TestNetworkBasicOps-server-697662314',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-697662314',id=13,image_ref='9546778e-959c-466e-9bef-81ace5bd1cc5',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJtKLJ6IG9u4a8nuHneFynw1vBGpmAOOthC0luN75md/pSNPLJ1OiBs1QaWTfRgLBRYBcOf7wBzJd4+LCaHfI9OClhJh7S3mGctEWrkgZF/O/aOkt4rBN7LklD620tBk2Q==',key_name='tempest-TestNetworkBasicOps-110254677',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='c69d102fb5504f48809f5fc47f1cb831',ramdisk_id='',reservation_id='r-4alywc2l',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='9546778e-959c-466e-9bef-81ace5bd1cc5',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-74406332',owner_user_name='tempest-TestNetworkBasicOps-74406332-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-09T10:03:40Z,user_data=None,user_id='2351e05157514d1995a1ea4151d12fee',uuid=29f00e1c-dcdd-4a28-b141-a900eb34b836,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "a450260b-c4da-4f56-bf08-713a5ccc3d0e", "address": "fa:16:3e:ac:02:fe", "network": {"id": "a733533a-76c7-46e6-89e3-803597fe93b6", "bridge": "br-int", "label": "tempest-network-smoke--384651779", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": 
false, "tenant_id": "c69d102fb5504f48809f5fc47f1cb831", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa450260b-c4", "ovs_interfaceid": "a450260b-c4da-4f56-bf08-713a5ccc3d0e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  9 10:03:43 compute-1 nova_compute[162974]: 2025-10-09 10:03:43.478 2 DEBUG nova.network.os_vif_util [None req-41164a7a-99bb-4324-8ad5-28627246f8c8 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Converting VIF {"id": "a450260b-c4da-4f56-bf08-713a5ccc3d0e", "address": "fa:16:3e:ac:02:fe", "network": {"id": "a733533a-76c7-46e6-89e3-803597fe93b6", "bridge": "br-int", "label": "tempest-network-smoke--384651779", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c69d102fb5504f48809f5fc47f1cb831", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa450260b-c4", "ovs_interfaceid": "a450260b-c4da-4f56-bf08-713a5ccc3d0e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  9 10:03:43 compute-1 nova_compute[162974]: 2025-10-09 10:03:43.479 2 DEBUG nova.network.os_vif_util [None req-41164a7a-99bb-4324-8ad5-28627246f8c8 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ac:02:fe,bridge_name='br-int',has_traffic_filtering=True,id=a450260b-c4da-4f56-bf08-713a5ccc3d0e,network=Network(a733533a-76c7-46e6-89e3-803597fe93b6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa450260b-c4') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  9 10:03:43 compute-1 nova_compute[162974]: 2025-10-09 10:03:43.480 2 DEBUG nova.objects.instance [None req-41164a7a-99bb-4324-8ad5-28627246f8c8 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Lazy-loading 'pci_devices' on Instance uuid 29f00e1c-dcdd-4a28-b141-a900eb34b836 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  9 10:03:43 compute-1 nova_compute[162974]: 2025-10-09 10:03:43.489 2 DEBUG nova.virt.libvirt.driver [None req-41164a7a-99bb-4324-8ad5-28627246f8c8 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: 29f00e1c-dcdd-4a28-b141-a900eb34b836] End _get_guest_xml xml=<domain type="kvm">
Oct  9 10:03:43 compute-1 nova_compute[162974]:  <uuid>29f00e1c-dcdd-4a28-b141-a900eb34b836</uuid>
Oct  9 10:03:43 compute-1 nova_compute[162974]:  <name>instance-0000000d</name>
Oct  9 10:03:43 compute-1 nova_compute[162974]:  <memory>131072</memory>
Oct  9 10:03:43 compute-1 nova_compute[162974]:  <vcpu>1</vcpu>
Oct  9 10:03:43 compute-1 nova_compute[162974]:  <metadata>
Oct  9 10:03:43 compute-1 nova_compute[162974]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  9 10:03:43 compute-1 nova_compute[162974]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  9 10:03:43 compute-1 nova_compute[162974]:      <nova:name>tempest-TestNetworkBasicOps-server-697662314</nova:name>
Oct  9 10:03:43 compute-1 nova_compute[162974]:      <nova:creationTime>2025-10-09 10:03:42</nova:creationTime>
Oct  9 10:03:43 compute-1 nova_compute[162974]:      <nova:flavor name="m1.nano">
Oct  9 10:03:43 compute-1 nova_compute[162974]:        <nova:memory>128</nova:memory>
Oct  9 10:03:43 compute-1 nova_compute[162974]:        <nova:disk>1</nova:disk>
Oct  9 10:03:43 compute-1 nova_compute[162974]:        <nova:swap>0</nova:swap>
Oct  9 10:03:43 compute-1 nova_compute[162974]:        <nova:ephemeral>0</nova:ephemeral>
Oct  9 10:03:43 compute-1 nova_compute[162974]:        <nova:vcpus>1</nova:vcpus>
Oct  9 10:03:43 compute-1 nova_compute[162974]:      </nova:flavor>
Oct  9 10:03:43 compute-1 nova_compute[162974]:      <nova:owner>
Oct  9 10:03:43 compute-1 nova_compute[162974]:        <nova:user uuid="2351e05157514d1995a1ea4151d12fee">tempest-TestNetworkBasicOps-74406332-project-member</nova:user>
Oct  9 10:03:43 compute-1 nova_compute[162974]:        <nova:project uuid="c69d102fb5504f48809f5fc47f1cb831">tempest-TestNetworkBasicOps-74406332</nova:project>
Oct  9 10:03:43 compute-1 nova_compute[162974]:      </nova:owner>
Oct  9 10:03:43 compute-1 nova_compute[162974]:      <nova:root type="image" uuid="9546778e-959c-466e-9bef-81ace5bd1cc5"/>
Oct  9 10:03:43 compute-1 nova_compute[162974]:      <nova:ports>
Oct  9 10:03:43 compute-1 nova_compute[162974]:        <nova:port uuid="a450260b-c4da-4f56-bf08-713a5ccc3d0e">
Oct  9 10:03:43 compute-1 nova_compute[162974]:          <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Oct  9 10:03:43 compute-1 nova_compute[162974]:        </nova:port>
Oct  9 10:03:43 compute-1 nova_compute[162974]:      </nova:ports>
Oct  9 10:03:43 compute-1 nova_compute[162974]:    </nova:instance>
Oct  9 10:03:43 compute-1 nova_compute[162974]:  </metadata>
Oct  9 10:03:43 compute-1 nova_compute[162974]:  <sysinfo type="smbios">
Oct  9 10:03:43 compute-1 nova_compute[162974]:    <system>
Oct  9 10:03:43 compute-1 nova_compute[162974]:      <entry name="manufacturer">RDO</entry>
Oct  9 10:03:43 compute-1 nova_compute[162974]:      <entry name="product">OpenStack Compute</entry>
Oct  9 10:03:43 compute-1 nova_compute[162974]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  9 10:03:43 compute-1 nova_compute[162974]:      <entry name="serial">29f00e1c-dcdd-4a28-b141-a900eb34b836</entry>
Oct  9 10:03:43 compute-1 nova_compute[162974]:      <entry name="uuid">29f00e1c-dcdd-4a28-b141-a900eb34b836</entry>
Oct  9 10:03:43 compute-1 nova_compute[162974]:      <entry name="family">Virtual Machine</entry>
Oct  9 10:03:43 compute-1 nova_compute[162974]:    </system>
Oct  9 10:03:43 compute-1 nova_compute[162974]:  </sysinfo>
Oct  9 10:03:43 compute-1 nova_compute[162974]:  <os>
Oct  9 10:03:43 compute-1 nova_compute[162974]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  9 10:03:43 compute-1 nova_compute[162974]:    <boot dev="hd"/>
Oct  9 10:03:43 compute-1 nova_compute[162974]:    <smbios mode="sysinfo"/>
Oct  9 10:03:43 compute-1 nova_compute[162974]:  </os>
Oct  9 10:03:43 compute-1 nova_compute[162974]:  <features>
Oct  9 10:03:43 compute-1 nova_compute[162974]:    <acpi/>
Oct  9 10:03:43 compute-1 nova_compute[162974]:    <apic/>
Oct  9 10:03:43 compute-1 nova_compute[162974]:    <vmcoreinfo/>
Oct  9 10:03:43 compute-1 nova_compute[162974]:  </features>
Oct  9 10:03:43 compute-1 nova_compute[162974]:  <clock offset="utc">
Oct  9 10:03:43 compute-1 nova_compute[162974]:    <timer name="pit" tickpolicy="delay"/>
Oct  9 10:03:43 compute-1 nova_compute[162974]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  9 10:03:43 compute-1 nova_compute[162974]:    <timer name="hpet" present="no"/>
Oct  9 10:03:43 compute-1 nova_compute[162974]:  </clock>
Oct  9 10:03:43 compute-1 nova_compute[162974]:  <cpu mode="host-model" match="exact">
Oct  9 10:03:43 compute-1 nova_compute[162974]:    <topology sockets="1" cores="1" threads="1"/>
Oct  9 10:03:43 compute-1 nova_compute[162974]:  </cpu>
Oct  9 10:03:43 compute-1 nova_compute[162974]:  <devices>
Oct  9 10:03:43 compute-1 nova_compute[162974]:    <disk type="network" device="disk">
Oct  9 10:03:43 compute-1 nova_compute[162974]:      <driver type="raw" cache="none"/>
Oct  9 10:03:43 compute-1 nova_compute[162974]:      <source protocol="rbd" name="vms/29f00e1c-dcdd-4a28-b141-a900eb34b836_disk">
Oct  9 10:03:43 compute-1 nova_compute[162974]:        <host name="192.168.122.100" port="6789"/>
Oct  9 10:03:43 compute-1 nova_compute[162974]:        <host name="192.168.122.102" port="6789"/>
Oct  9 10:03:43 compute-1 nova_compute[162974]:        <host name="192.168.122.101" port="6789"/>
Oct  9 10:03:43 compute-1 nova_compute[162974]:      </source>
Oct  9 10:03:43 compute-1 nova_compute[162974]:      <auth username="openstack">
Oct  9 10:03:43 compute-1 nova_compute[162974]:        <secret type="ceph" uuid="286f8bf0-da72-5823-9a4e-ac4457d9e609"/>
Oct  9 10:03:43 compute-1 nova_compute[162974]:      </auth>
Oct  9 10:03:43 compute-1 nova_compute[162974]:      <target dev="vda" bus="virtio"/>
Oct  9 10:03:43 compute-1 nova_compute[162974]:    </disk>
Oct  9 10:03:43 compute-1 nova_compute[162974]:    <disk type="network" device="cdrom">
Oct  9 10:03:43 compute-1 nova_compute[162974]:      <driver type="raw" cache="none"/>
Oct  9 10:03:43 compute-1 nova_compute[162974]:      <source protocol="rbd" name="vms/29f00e1c-dcdd-4a28-b141-a900eb34b836_disk.config">
Oct  9 10:03:43 compute-1 nova_compute[162974]:        <host name="192.168.122.100" port="6789"/>
Oct  9 10:03:43 compute-1 nova_compute[162974]:        <host name="192.168.122.102" port="6789"/>
Oct  9 10:03:43 compute-1 nova_compute[162974]:        <host name="192.168.122.101" port="6789"/>
Oct  9 10:03:43 compute-1 nova_compute[162974]:      </source>
Oct  9 10:03:43 compute-1 nova_compute[162974]:      <auth username="openstack">
Oct  9 10:03:43 compute-1 nova_compute[162974]:        <secret type="ceph" uuid="286f8bf0-da72-5823-9a4e-ac4457d9e609"/>
Oct  9 10:03:43 compute-1 nova_compute[162974]:      </auth>
Oct  9 10:03:43 compute-1 nova_compute[162974]:      <target dev="sda" bus="sata"/>
Oct  9 10:03:43 compute-1 nova_compute[162974]:    </disk>
Oct  9 10:03:43 compute-1 nova_compute[162974]:    <interface type="ethernet">
Oct  9 10:03:43 compute-1 nova_compute[162974]:      <mac address="fa:16:3e:ac:02:fe"/>
Oct  9 10:03:43 compute-1 nova_compute[162974]:      <model type="virtio"/>
Oct  9 10:03:43 compute-1 nova_compute[162974]:      <driver name="vhost" rx_queue_size="512"/>
Oct  9 10:03:43 compute-1 nova_compute[162974]:      <mtu size="1442"/>
Oct  9 10:03:43 compute-1 nova_compute[162974]:      <target dev="tapa450260b-c4"/>
Oct  9 10:03:43 compute-1 nova_compute[162974]:    </interface>
Oct  9 10:03:43 compute-1 nova_compute[162974]:    <serial type="pty">
Oct  9 10:03:43 compute-1 nova_compute[162974]:      <log file="/var/lib/nova/instances/29f00e1c-dcdd-4a28-b141-a900eb34b836/console.log" append="off"/>
Oct  9 10:03:43 compute-1 nova_compute[162974]:    </serial>
Oct  9 10:03:43 compute-1 nova_compute[162974]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  9 10:03:43 compute-1 nova_compute[162974]:    <video>
Oct  9 10:03:43 compute-1 nova_compute[162974]:      <model type="virtio"/>
Oct  9 10:03:43 compute-1 nova_compute[162974]:    </video>
Oct  9 10:03:43 compute-1 nova_compute[162974]:    <input type="tablet" bus="usb"/>
Oct  9 10:03:43 compute-1 nova_compute[162974]:    <rng model="virtio">
Oct  9 10:03:43 compute-1 nova_compute[162974]:      <backend model="random">/dev/urandom</backend>
Oct  9 10:03:43 compute-1 nova_compute[162974]:    </rng>
Oct  9 10:03:43 compute-1 nova_compute[162974]:    <controller type="pci" model="pcie-root"/>
Oct  9 10:03:43 compute-1 nova_compute[162974]:    <controller type="pci" model="pcie-root-port"/>
Oct  9 10:03:43 compute-1 nova_compute[162974]:    <controller type="pci" model="pcie-root-port"/>
Oct  9 10:03:43 compute-1 nova_compute[162974]:    <controller type="pci" model="pcie-root-port"/>
Oct  9 10:03:43 compute-1 nova_compute[162974]:    <controller type="pci" model="pcie-root-port"/>
Oct  9 10:03:43 compute-1 nova_compute[162974]:    <controller type="pci" model="pcie-root-port"/>
Oct  9 10:03:43 compute-1 nova_compute[162974]:    <controller type="pci" model="pcie-root-port"/>
Oct  9 10:03:43 compute-1 nova_compute[162974]:    <controller type="pci" model="pcie-root-port"/>
Oct  9 10:03:43 compute-1 nova_compute[162974]:    <controller type="pci" model="pcie-root-port"/>
Oct  9 10:03:43 compute-1 nova_compute[162974]:    <controller type="pci" model="pcie-root-port"/>
Oct  9 10:03:43 compute-1 nova_compute[162974]:    <controller type="pci" model="pcie-root-port"/>
Oct  9 10:03:43 compute-1 nova_compute[162974]:    <controller type="pci" model="pcie-root-port"/>
Oct  9 10:03:43 compute-1 nova_compute[162974]:    <controller type="pci" model="pcie-root-port"/>
Oct  9 10:03:43 compute-1 nova_compute[162974]:    <controller type="pci" model="pcie-root-port"/>
Oct  9 10:03:43 compute-1 nova_compute[162974]:    <controller type="pci" model="pcie-root-port"/>
Oct  9 10:03:43 compute-1 nova_compute[162974]:    <controller type="pci" model="pcie-root-port"/>
Oct  9 10:03:43 compute-1 nova_compute[162974]:    <controller type="pci" model="pcie-root-port"/>
Oct  9 10:03:43 compute-1 nova_compute[162974]:    <controller type="pci" model="pcie-root-port"/>
Oct  9 10:03:43 compute-1 nova_compute[162974]:    <controller type="pci" model="pcie-root-port"/>
Oct  9 10:03:43 compute-1 nova_compute[162974]:    <controller type="pci" model="pcie-root-port"/>
Oct  9 10:03:43 compute-1 nova_compute[162974]:    <controller type="pci" model="pcie-root-port"/>
Oct  9 10:03:43 compute-1 nova_compute[162974]:    <controller type="pci" model="pcie-root-port"/>
Oct  9 10:03:43 compute-1 nova_compute[162974]:    <controller type="pci" model="pcie-root-port"/>
Oct  9 10:03:43 compute-1 nova_compute[162974]:    <controller type="pci" model="pcie-root-port"/>
Oct  9 10:03:43 compute-1 nova_compute[162974]:    <controller type="pci" model="pcie-root-port"/>
Oct  9 10:03:43 compute-1 nova_compute[162974]:    <controller type="usb" index="0"/>
Oct  9 10:03:43 compute-1 nova_compute[162974]:    <memballoon model="virtio">
Oct  9 10:03:43 compute-1 nova_compute[162974]:      <stats period="10"/>
Oct  9 10:03:43 compute-1 nova_compute[162974]:    </memballoon>
Oct  9 10:03:43 compute-1 nova_compute[162974]:  </devices>
Oct  9 10:03:43 compute-1 nova_compute[162974]: </domain>
Oct  9 10:03:43 compute-1 nova_compute[162974]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  9 10:03:43 compute-1 nova_compute[162974]: 2025-10-09 10:03:43.490 2 DEBUG nova.compute.manager [None req-41164a7a-99bb-4324-8ad5-28627246f8c8 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: 29f00e1c-dcdd-4a28-b141-a900eb34b836] Preparing to wait for external event network-vif-plugged-a450260b-c4da-4f56-bf08-713a5ccc3d0e prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct  9 10:03:43 compute-1 nova_compute[162974]: 2025-10-09 10:03:43.491 2 DEBUG oslo_concurrency.lockutils [None req-41164a7a-99bb-4324-8ad5-28627246f8c8 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Acquiring lock "29f00e1c-dcdd-4a28-b141-a900eb34b836-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  9 10:03:43 compute-1 nova_compute[162974]: 2025-10-09 10:03:43.491 2 DEBUG oslo_concurrency.lockutils [None req-41164a7a-99bb-4324-8ad5-28627246f8c8 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Lock "29f00e1c-dcdd-4a28-b141-a900eb34b836-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  9 10:03:43 compute-1 nova_compute[162974]: 2025-10-09 10:03:43.491 2 DEBUG oslo_concurrency.lockutils [None req-41164a7a-99bb-4324-8ad5-28627246f8c8 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Lock "29f00e1c-dcdd-4a28-b141-a900eb34b836-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  9 10:03:43 compute-1 nova_compute[162974]: 2025-10-09 10:03:43.492 2 DEBUG nova.virt.libvirt.vif [None req-41164a7a-99bb-4324-8ad5-28627246f8c8 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-09T10:03:39Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-697662314',display_name='tempest-TestNetworkBasicOps-server-697662314',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-697662314',id=13,image_ref='9546778e-959c-466e-9bef-81ace5bd1cc5',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJtKLJ6IG9u4a8nuHneFynw1vBGpmAOOthC0luN75md/pSNPLJ1OiBs1QaWTfRgLBRYBcOf7wBzJd4+LCaHfI9OClhJh7S3mGctEWrkgZF/O/aOkt4rBN7LklD620tBk2Q==',key_name='tempest-TestNetworkBasicOps-110254677',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='c69d102fb5504f48809f5fc47f1cb831',ramdisk_id='',reservation_id='r-4alywc2l',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='9546778e-959c-466e-9bef-81ace5bd1cc5',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-74406332',owner_user_name='tempest-TestNetworkBasicOps-74406332-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-09T10:03:40Z,user_data=None,user_id='2351e05157514d1995a1ea4151d12fee',uuid=29f00e1c-dcdd-4a28-b141-a900eb34b836,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "a450260b-c4da-4f56-bf08-713a5ccc3d0e", "address": "fa:16:3e:ac:02:fe", "network": {"id": "a733533a-76c7-46e6-89e3-803597fe93b6", "bridge": "br-int", "label": "tempest-network-smoke--384651779", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "c69d102fb5504f48809f5fc47f1cb831", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa450260b-c4", "ovs_interfaceid": "a450260b-c4da-4f56-bf08-713a5ccc3d0e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  9 10:03:43 compute-1 nova_compute[162974]: 2025-10-09 10:03:43.492 2 DEBUG nova.network.os_vif_util [None req-41164a7a-99bb-4324-8ad5-28627246f8c8 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Converting VIF {"id": "a450260b-c4da-4f56-bf08-713a5ccc3d0e", "address": "fa:16:3e:ac:02:fe", "network": {"id": "a733533a-76c7-46e6-89e3-803597fe93b6", "bridge": "br-int", "label": "tempest-network-smoke--384651779", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c69d102fb5504f48809f5fc47f1cb831", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa450260b-c4", "ovs_interfaceid": "a450260b-c4da-4f56-bf08-713a5ccc3d0e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  9 10:03:43 compute-1 nova_compute[162974]: 2025-10-09 10:03:43.492 2 DEBUG nova.network.os_vif_util [None req-41164a7a-99bb-4324-8ad5-28627246f8c8 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ac:02:fe,bridge_name='br-int',has_traffic_filtering=True,id=a450260b-c4da-4f56-bf08-713a5ccc3d0e,network=Network(a733533a-76c7-46e6-89e3-803597fe93b6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa450260b-c4') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  9 10:03:43 compute-1 nova_compute[162974]: 2025-10-09 10:03:43.493 2 DEBUG os_vif [None req-41164a7a-99bb-4324-8ad5-28627246f8c8 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:ac:02:fe,bridge_name='br-int',has_traffic_filtering=True,id=a450260b-c4da-4f56-bf08-713a5ccc3d0e,network=Network(a733533a-76c7-46e6-89e3-803597fe93b6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa450260b-c4') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  9 10:03:43 compute-1 nova_compute[162974]: 2025-10-09 10:03:43.493 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:03:43 compute-1 nova_compute[162974]: 2025-10-09 10:03:43.493 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  9 10:03:43 compute-1 nova_compute[162974]: 2025-10-09 10:03:43.494 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  9 10:03:43 compute-1 nova_compute[162974]: 2025-10-09 10:03:43.495 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:03:43 compute-1 nova_compute[162974]: 2025-10-09 10:03:43.496 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapa450260b-c4, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  9 10:03:43 compute-1 nova_compute[162974]: 2025-10-09 10:03:43.496 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapa450260b-c4, col_values=(('external_ids', {'iface-id': 'a450260b-c4da-4f56-bf08-713a5ccc3d0e', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:ac:02:fe', 'vm-uuid': '29f00e1c-dcdd-4a28-b141-a900eb34b836'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  9 10:03:43 compute-1 nova_compute[162974]: 2025-10-09 10:03:43.497 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:03:43 compute-1 NetworkManager[982]: <info>  [1760004223.4980] manager: (tapa450260b-c4): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/68)
Oct  9 10:03:43 compute-1 nova_compute[162974]: 2025-10-09 10:03:43.500 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  9 10:03:43 compute-1 nova_compute[162974]: 2025-10-09 10:03:43.501 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:03:43 compute-1 nova_compute[162974]: 2025-10-09 10:03:43.503 2 INFO os_vif [None req-41164a7a-99bb-4324-8ad5-28627246f8c8 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:ac:02:fe,bridge_name='br-int',has_traffic_filtering=True,id=a450260b-c4da-4f56-bf08-713a5ccc3d0e,network=Network(a733533a-76c7-46e6-89e3-803597fe93b6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa450260b-c4')#033[00m
Oct  9 10:03:43 compute-1 nova_compute[162974]: 2025-10-09 10:03:43.536 2 DEBUG nova.virt.libvirt.driver [None req-41164a7a-99bb-4324-8ad5-28627246f8c8 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  9 10:03:43 compute-1 nova_compute[162974]: 2025-10-09 10:03:43.537 2 DEBUG nova.virt.libvirt.driver [None req-41164a7a-99bb-4324-8ad5-28627246f8c8 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  9 10:03:43 compute-1 nova_compute[162974]: 2025-10-09 10:03:43.537 2 DEBUG nova.virt.libvirt.driver [None req-41164a7a-99bb-4324-8ad5-28627246f8c8 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] No VIF found with MAC fa:16:3e:ac:02:fe, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  9 10:03:43 compute-1 nova_compute[162974]: 2025-10-09 10:03:43.537 2 INFO nova.virt.libvirt.driver [None req-41164a7a-99bb-4324-8ad5-28627246f8c8 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: 29f00e1c-dcdd-4a28-b141-a900eb34b836] Using config drive#033[00m
Oct  9 10:03:43 compute-1 nova_compute[162974]: 2025-10-09 10:03:43.554 2 DEBUG nova.storage.rbd_utils [None req-41164a7a-99bb-4324-8ad5-28627246f8c8 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] rbd image 29f00e1c-dcdd-4a28-b141-a900eb34b836_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  9 10:03:43 compute-1 nova_compute[162974]: 2025-10-09 10:03:43.748 2 INFO nova.virt.libvirt.driver [None req-41164a7a-99bb-4324-8ad5-28627246f8c8 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: 29f00e1c-dcdd-4a28-b141-a900eb34b836] Creating config drive at /var/lib/nova/instances/29f00e1c-dcdd-4a28-b141-a900eb34b836/disk.config#033[00m
Oct  9 10:03:43 compute-1 nova_compute[162974]: 2025-10-09 10:03:43.752 2 DEBUG oslo_concurrency.processutils [None req-41164a7a-99bb-4324-8ad5-28627246f8c8 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/29f00e1c-dcdd-4a28-b141-a900eb34b836/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp9d6qlqge execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  9 10:03:43 compute-1 nova_compute[162974]: 2025-10-09 10:03:43.871 2 DEBUG oslo_concurrency.processutils [None req-41164a7a-99bb-4324-8ad5-28627246f8c8 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/29f00e1c-dcdd-4a28-b141-a900eb34b836/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp9d6qlqge" returned: 0 in 0.120s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  9 10:03:43 compute-1 nova_compute[162974]: 2025-10-09 10:03:43.892 2 DEBUG nova.storage.rbd_utils [None req-41164a7a-99bb-4324-8ad5-28627246f8c8 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] rbd image 29f00e1c-dcdd-4a28-b141-a900eb34b836_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  9 10:03:43 compute-1 nova_compute[162974]: 2025-10-09 10:03:43.895 2 DEBUG oslo_concurrency.processutils [None req-41164a7a-99bb-4324-8ad5-28627246f8c8 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/29f00e1c-dcdd-4a28-b141-a900eb34b836/disk.config 29f00e1c-dcdd-4a28-b141-a900eb34b836_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  9 10:03:43 compute-1 nova_compute[162974]: 2025-10-09 10:03:43.975 2 DEBUG oslo_concurrency.processutils [None req-41164a7a-99bb-4324-8ad5-28627246f8c8 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/29f00e1c-dcdd-4a28-b141-a900eb34b836/disk.config 29f00e1c-dcdd-4a28-b141-a900eb34b836_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.080s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  9 10:03:43 compute-1 nova_compute[162974]: 2025-10-09 10:03:43.976 2 INFO nova.virt.libvirt.driver [None req-41164a7a-99bb-4324-8ad5-28627246f8c8 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: 29f00e1c-dcdd-4a28-b141-a900eb34b836] Deleting local config drive /var/lib/nova/instances/29f00e1c-dcdd-4a28-b141-a900eb34b836/disk.config because it was imported into RBD.#033[00m
Oct  9 10:03:44 compute-1 kernel: tapa450260b-c4: entered promiscuous mode
Oct  9 10:03:44 compute-1 NetworkManager[982]: <info>  [1760004224.0136] manager: (tapa450260b-c4): new Tun device (/org/freedesktop/NetworkManager/Devices/69)
Oct  9 10:03:44 compute-1 nova_compute[162974]: 2025-10-09 10:03:44.013 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:03:44 compute-1 ovn_controller[62080]: 2025-10-09T10:03:44Z|00098|binding|INFO|Claiming lport a450260b-c4da-4f56-bf08-713a5ccc3d0e for this chassis.
Oct  9 10:03:44 compute-1 ovn_controller[62080]: 2025-10-09T10:03:44Z|00099|binding|INFO|a450260b-c4da-4f56-bf08-713a5ccc3d0e: Claiming fa:16:3e:ac:02:fe 10.100.0.4
Oct  9 10:03:44 compute-1 nova_compute[162974]: 2025-10-09 10:03:44.017 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:03:44 compute-1 nova_compute[162974]: 2025-10-09 10:03:44.021 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:03:44 compute-1 ovn_metadata_agent[71054]: 2025-10-09 10:03:44.024 71059 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ac:02:fe 10.100.0.4'], port_security=['fa:16:3e:ac:02:fe 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '29f00e1c-dcdd-4a28-b141-a900eb34b836', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a733533a-76c7-46e6-89e3-803597fe93b6', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c69d102fb5504f48809f5fc47f1cb831', 'neutron:revision_number': '2', 'neutron:security_group_ids': '68293693-d770-49bf-b0b3-d26af71ce606', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b0fc8e9a-af61-4397-9551-67e71824e91c, chassis=[<ovs.db.idl.Row object at 0x7fcc797b4850>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcc797b4850>], logical_port=a450260b-c4da-4f56-bf08-713a5ccc3d0e) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  9 10:03:44 compute-1 ovn_metadata_agent[71054]: 2025-10-09 10:03:44.025 71059 INFO neutron.agent.ovn.metadata.agent [-] Port a450260b-c4da-4f56-bf08-713a5ccc3d0e in datapath a733533a-76c7-46e6-89e3-803597fe93b6 bound to our chassis#033[00m
Oct  9 10:03:44 compute-1 ovn_metadata_agent[71054]: 2025-10-09 10:03:44.026 71059 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network a733533a-76c7-46e6-89e3-803597fe93b6#033[00m
Oct  9 10:03:44 compute-1 ovn_metadata_agent[71054]: 2025-10-09 10:03:44.035 165637 DEBUG oslo.privsep.daemon [-] privsep: reply[74362613-bbf8-4ba2-af4d-a1d9d16eadc3]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  9 10:03:44 compute-1 ovn_metadata_agent[71054]: 2025-10-09 10:03:44.035 71059 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapa733533a-71 in ovnmeta-a733533a-76c7-46e6-89e3-803597fe93b6 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct  9 10:03:44 compute-1 ovn_metadata_agent[71054]: 2025-10-09 10:03:44.036 165637 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapa733533a-70 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct  9 10:03:44 compute-1 ovn_metadata_agent[71054]: 2025-10-09 10:03:44.036 165637 DEBUG oslo.privsep.daemon [-] privsep: reply[6495e9b4-d44d-441e-ab1f-5048118fe367]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  9 10:03:44 compute-1 ovn_metadata_agent[71054]: 2025-10-09 10:03:44.037 165637 DEBUG oslo.privsep.daemon [-] privsep: reply[c65b6dd1-cbea-4a6b-9f8e-8a5190c0a6bb]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  9 10:03:44 compute-1 systemd-udevd[172671]: Network interface NamePolicy= disabled on kernel command line.
Oct  9 10:03:44 compute-1 ovn_metadata_agent[71054]: 2025-10-09 10:03:44.047 71273 DEBUG oslo.privsep.daemon [-] privsep: reply[7b0b4023-7743-4e47-a50c-fb8bfcd3365f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  9 10:03:44 compute-1 NetworkManager[982]: <info>  [1760004224.0497] device (tapa450260b-c4): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  9 10:03:44 compute-1 NetworkManager[982]: <info>  [1760004224.0502] device (tapa450260b-c4): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  9 10:03:44 compute-1 systemd-machined[120683]: New machine qemu-7-instance-0000000d.
Oct  9 10:03:44 compute-1 systemd[1]: Started Virtual Machine qemu-7-instance-0000000d.
Oct  9 10:03:44 compute-1 ovn_metadata_agent[71054]: 2025-10-09 10:03:44.068 165637 DEBUG oslo.privsep.daemon [-] privsep: reply[524c918e-dad2-4d9f-8923-be7a0efc85a1]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  9 10:03:44 compute-1 nova_compute[162974]: 2025-10-09 10:03:44.090 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:03:44 compute-1 ovn_metadata_agent[71054]: 2025-10-09 10:03:44.091 165694 DEBUG oslo.privsep.daemon [-] privsep: reply[5f1b2d95-dc86-496a-abea-86c256a791a1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  9 10:03:44 compute-1 ovn_controller[62080]: 2025-10-09T10:03:44Z|00100|binding|INFO|Setting lport a450260b-c4da-4f56-bf08-713a5ccc3d0e ovn-installed in OVS
Oct  9 10:03:44 compute-1 ovn_controller[62080]: 2025-10-09T10:03:44Z|00101|binding|INFO|Setting lport a450260b-c4da-4f56-bf08-713a5ccc3d0e up in Southbound
Oct  9 10:03:44 compute-1 nova_compute[162974]: 2025-10-09 10:03:44.094 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:03:44 compute-1 NetworkManager[982]: <info>  [1760004224.0984] manager: (tapa733533a-70): new Veth device (/org/freedesktop/NetworkManager/Devices/70)
Oct  9 10:03:44 compute-1 ovn_metadata_agent[71054]: 2025-10-09 10:03:44.099 165637 DEBUG oslo.privsep.daemon [-] privsep: reply[2bd59227-c5d4-4241-a0a4-a4e2b6cb577a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  9 10:03:44 compute-1 ovn_metadata_agent[71054]: 2025-10-09 10:03:44.125 165694 DEBUG oslo.privsep.daemon [-] privsep: reply[5567ecb0-fc23-47ec-87c7-b0ebe85c5142]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  9 10:03:44 compute-1 ovn_metadata_agent[71054]: 2025-10-09 10:03:44.127 165694 DEBUG oslo.privsep.daemon [-] privsep: reply[ced6e277-696b-47c1-bdf2-889cda6fb2fb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  9 10:03:44 compute-1 NetworkManager[982]: <info>  [1760004224.1457] device (tapa733533a-70): carrier: link connected
Oct  9 10:03:44 compute-1 ovn_metadata_agent[71054]: 2025-10-09 10:03:44.149 165694 DEBUG oslo.privsep.daemon [-] privsep: reply[c52d8601-c391-405a-a010-404347fb3a02]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  9 10:03:44 compute-1 ovn_metadata_agent[71054]: 2025-10-09 10:03:44.160 165637 DEBUG oslo.privsep.daemon [-] privsep: reply[ee2daa3b-69a0-4c8e-9b87-c8513128c4eb]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapa733533a-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 4], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 4], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:a8:b0:4f'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 40], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 190771, 'reachable_time': 22880, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 172696, 'error': None, 'target': 'ovnmeta-a733533a-76c7-46e6-89e3-803597fe93b6', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  9 10:03:44 compute-1 ovn_metadata_agent[71054]: 2025-10-09 10:03:44.170 165637 DEBUG oslo.privsep.daemon [-] privsep: reply[1d5e0d80-5d46-405e-967e-ef808b652956]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fea8:b04f'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 190771, 'tstamp': 190771}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 172697, 'error': None, 'target': 'ovnmeta-a733533a-76c7-46e6-89e3-803597fe93b6', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  9 10:03:44 compute-1 ovn_metadata_agent[71054]: 2025-10-09 10:03:44.182 165637 DEBUG oslo.privsep.daemon [-] privsep: reply[b8ef2715-61b4-4200-8d98-87ac8c5a02a0]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapa733533a-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 4], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 4], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:a8:b0:4f'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 40], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 190771, 'reachable_time': 22880, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 172698, 'error': None, 'target': 'ovnmeta-a733533a-76c7-46e6-89e3-803597fe93b6', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  9 10:03:44 compute-1 ovn_metadata_agent[71054]: 2025-10-09 10:03:44.203 165637 DEBUG oslo.privsep.daemon [-] privsep: reply[183a32e5-9c7c-4504-b5d5-cc16a4f187bf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  9 10:03:44 compute-1 ovn_metadata_agent[71054]: 2025-10-09 10:03:44.241 165637 DEBUG oslo.privsep.daemon [-] privsep: reply[db3f07b2-1a71-47c8-8959-712cf555b9ed]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  9 10:03:44 compute-1 ovn_metadata_agent[71054]: 2025-10-09 10:03:44.242 71059 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa733533a-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  9 10:03:44 compute-1 ovn_metadata_agent[71054]: 2025-10-09 10:03:44.242 71059 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  9 10:03:44 compute-1 ovn_metadata_agent[71054]: 2025-10-09 10:03:44.243 71059 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapa733533a-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  9 10:03:44 compute-1 nova_compute[162974]: 2025-10-09 10:03:44.244 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:03:44 compute-1 kernel: tapa733533a-70: entered promiscuous mode
Oct  9 10:03:44 compute-1 NetworkManager[982]: <info>  [1760004224.2451] manager: (tapa733533a-70): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/71)
Oct  9 10:03:44 compute-1 ovn_metadata_agent[71054]: 2025-10-09 10:03:44.249 71059 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapa733533a-70, col_values=(('external_ids', {'iface-id': '56190dc5-983f-4623-a0ae-120f81d9f7de'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  9 10:03:44 compute-1 nova_compute[162974]: 2025-10-09 10:03:44.250 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:03:44 compute-1 ovn_controller[62080]: 2025-10-09T10:03:44Z|00102|binding|INFO|Releasing lport 56190dc5-983f-4623-a0ae-120f81d9f7de from this chassis (sb_readonly=0)
Oct  9 10:03:44 compute-1 nova_compute[162974]: 2025-10-09 10:03:44.251 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:03:44 compute-1 ovn_metadata_agent[71054]: 2025-10-09 10:03:44.253 71059 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/a733533a-76c7-46e6-89e3-803597fe93b6.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/a733533a-76c7-46e6-89e3-803597fe93b6.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct  9 10:03:44 compute-1 ovn_metadata_agent[71054]: 2025-10-09 10:03:44.254 165637 DEBUG oslo.privsep.daemon [-] privsep: reply[fdfb086b-61cf-4f4a-97da-dc1549d143d3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  9 10:03:44 compute-1 ovn_metadata_agent[71054]: 2025-10-09 10:03:44.254 71059 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct  9 10:03:44 compute-1 ovn_metadata_agent[71054]: global
Oct  9 10:03:44 compute-1 ovn_metadata_agent[71054]:    log         /dev/log local0 debug
Oct  9 10:03:44 compute-1 ovn_metadata_agent[71054]:    log-tag     haproxy-metadata-proxy-a733533a-76c7-46e6-89e3-803597fe93b6
Oct  9 10:03:44 compute-1 ovn_metadata_agent[71054]:    user        root
Oct  9 10:03:44 compute-1 ovn_metadata_agent[71054]:    group       root
Oct  9 10:03:44 compute-1 ovn_metadata_agent[71054]:    maxconn     1024
Oct  9 10:03:44 compute-1 ovn_metadata_agent[71054]:    pidfile     /var/lib/neutron/external/pids/a733533a-76c7-46e6-89e3-803597fe93b6.pid.haproxy
Oct  9 10:03:44 compute-1 ovn_metadata_agent[71054]:    daemon
Oct  9 10:03:44 compute-1 ovn_metadata_agent[71054]: 
Oct  9 10:03:44 compute-1 ovn_metadata_agent[71054]: defaults
Oct  9 10:03:44 compute-1 ovn_metadata_agent[71054]:    log global
Oct  9 10:03:44 compute-1 ovn_metadata_agent[71054]:    mode http
Oct  9 10:03:44 compute-1 ovn_metadata_agent[71054]:    option httplog
Oct  9 10:03:44 compute-1 ovn_metadata_agent[71054]:    option dontlognull
Oct  9 10:03:44 compute-1 ovn_metadata_agent[71054]:    option http-server-close
Oct  9 10:03:44 compute-1 ovn_metadata_agent[71054]:    option forwardfor
Oct  9 10:03:44 compute-1 ovn_metadata_agent[71054]:    retries                 3
Oct  9 10:03:44 compute-1 ovn_metadata_agent[71054]:    timeout http-request    30s
Oct  9 10:03:44 compute-1 ovn_metadata_agent[71054]:    timeout connect         30s
Oct  9 10:03:44 compute-1 ovn_metadata_agent[71054]:    timeout client          32s
Oct  9 10:03:44 compute-1 ovn_metadata_agent[71054]:    timeout server          32s
Oct  9 10:03:44 compute-1 ovn_metadata_agent[71054]:    timeout http-keep-alive 30s
Oct  9 10:03:44 compute-1 ovn_metadata_agent[71054]: 
Oct  9 10:03:44 compute-1 ovn_metadata_agent[71054]: 
Oct  9 10:03:44 compute-1 ovn_metadata_agent[71054]: listen listener
Oct  9 10:03:44 compute-1 ovn_metadata_agent[71054]:    bind 169.254.169.254:80
Oct  9 10:03:44 compute-1 ovn_metadata_agent[71054]:    server metadata /var/lib/neutron/metadata_proxy
Oct  9 10:03:44 compute-1 ovn_metadata_agent[71054]:    http-request add-header X-OVN-Network-ID a733533a-76c7-46e6-89e3-803597fe93b6
Oct  9 10:03:44 compute-1 ovn_metadata_agent[71054]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct  9 10:03:44 compute-1 ovn_metadata_agent[71054]: 2025-10-09 10:03:44.255 71059 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-a733533a-76c7-46e6-89e3-803597fe93b6', 'env', 'PROCESS_TAG=haproxy-a733533a-76c7-46e6-89e3-803597fe93b6', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/a733533a-76c7-46e6-89e3-803597fe93b6.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct  9 10:03:44 compute-1 nova_compute[162974]: 2025-10-09 10:03:44.264 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:03:44 compute-1 nova_compute[162974]: 2025-10-09 10:03:44.322 2 DEBUG nova.compute.manager [req-1c5add77-d5aa-41d5-a445-79623590e2ad req-79664bef-13da-4291-9b42-6b78d42f11d3 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: 29f00e1c-dcdd-4a28-b141-a900eb34b836] Received event network-vif-plugged-a450260b-c4da-4f56-bf08-713a5ccc3d0e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  9 10:03:44 compute-1 nova_compute[162974]: 2025-10-09 10:03:44.324 2 DEBUG oslo_concurrency.lockutils [req-1c5add77-d5aa-41d5-a445-79623590e2ad req-79664bef-13da-4291-9b42-6b78d42f11d3 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] Acquiring lock "29f00e1c-dcdd-4a28-b141-a900eb34b836-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  9 10:03:44 compute-1 nova_compute[162974]: 2025-10-09 10:03:44.324 2 DEBUG oslo_concurrency.lockutils [req-1c5add77-d5aa-41d5-a445-79623590e2ad req-79664bef-13da-4291-9b42-6b78d42f11d3 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] Lock "29f00e1c-dcdd-4a28-b141-a900eb34b836-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  9 10:03:44 compute-1 nova_compute[162974]: 2025-10-09 10:03:44.324 2 DEBUG oslo_concurrency.lockutils [req-1c5add77-d5aa-41d5-a445-79623590e2ad req-79664bef-13da-4291-9b42-6b78d42f11d3 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] Lock "29f00e1c-dcdd-4a28-b141-a900eb34b836-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  9 10:03:44 compute-1 nova_compute[162974]: 2025-10-09 10:03:44.325 2 DEBUG nova.compute.manager [req-1c5add77-d5aa-41d5-a445-79623590e2ad req-79664bef-13da-4291-9b42-6b78d42f11d3 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: 29f00e1c-dcdd-4a28-b141-a900eb34b836] Processing event network-vif-plugged-a450260b-c4da-4f56-bf08-713a5ccc3d0e _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct  9 10:03:44 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:03:44 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:03:44 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:03:44.411 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:03:44 compute-1 podman[172768]: 2025-10-09 10:03:44.549138215 +0000 UTC m=+0.032537350 container create 440589cb5a0cc730319e8d32eaa82acc9bc21e03ecfcd61daeee39ce7d4698b0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-a733533a-76c7-46e6-89e3-803597fe93b6, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Oct  9 10:03:44 compute-1 systemd[1]: Started libpod-conmon-440589cb5a0cc730319e8d32eaa82acc9bc21e03ecfcd61daeee39ce7d4698b0.scope.
Oct  9 10:03:44 compute-1 systemd[1]: Started libcrun container.
Oct  9 10:03:44 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e5124fff80fb25973ea1610265ddf668756c8cb9c257f51808250cf95045e143/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct  9 10:03:44 compute-1 podman[172768]: 2025-10-09 10:03:44.599981656 +0000 UTC m=+0.083380790 container init 440589cb5a0cc730319e8d32eaa82acc9bc21e03ecfcd61daeee39ce7d4698b0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-a733533a-76c7-46e6-89e3-803597fe93b6, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct  9 10:03:44 compute-1 podman[172768]: 2025-10-09 10:03:44.604885902 +0000 UTC m=+0.088285036 container start 440589cb5a0cc730319e8d32eaa82acc9bc21e03ecfcd61daeee39ce7d4698b0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-a733533a-76c7-46e6-89e3-803597fe93b6, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3)
Oct  9 10:03:44 compute-1 podman[172768]: 2025-10-09 10:03:44.534810064 +0000 UTC m=+0.018209208 image pull 26280da617d52ac64ac1fa9a18a315d65ac237c1373028f8064008a821dbfd8d quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7
Oct  9 10:03:44 compute-1 neutron-haproxy-ovnmeta-a733533a-76c7-46e6-89e3-803597fe93b6[172779]: [NOTICE]   (172784) : New worker (172786) forked
Oct  9 10:03:44 compute-1 neutron-haproxy-ovnmeta-a733533a-76c7-46e6-89e3-803597fe93b6[172779]: [NOTICE]   (172784) : Loading success.
Oct  9 10:03:44 compute-1 nova_compute[162974]: 2025-10-09 10:03:44.726 2 DEBUG nova.compute.manager [None req-41164a7a-99bb-4324-8ad5-28627246f8c8 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: 29f00e1c-dcdd-4a28-b141-a900eb34b836] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct  9 10:03:44 compute-1 nova_compute[162974]: 2025-10-09 10:03:44.727 2 DEBUG nova.virt.driver [None req-433ad811-a058-4508-a429-c5f40f506b5b - - - - - -] Emitting event <LifecycleEvent: 1760004224.7257159, 29f00e1c-dcdd-4a28-b141-a900eb34b836 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  9 10:03:44 compute-1 nova_compute[162974]: 2025-10-09 10:03:44.727 2 INFO nova.compute.manager [None req-433ad811-a058-4508-a429-c5f40f506b5b - - - - - -] [instance: 29f00e1c-dcdd-4a28-b141-a900eb34b836] VM Started (Lifecycle Event)#033[00m
Oct  9 10:03:44 compute-1 nova_compute[162974]: 2025-10-09 10:03:44.730 2 DEBUG nova.virt.libvirt.driver [None req-41164a7a-99bb-4324-8ad5-28627246f8c8 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: 29f00e1c-dcdd-4a28-b141-a900eb34b836] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct  9 10:03:44 compute-1 nova_compute[162974]: 2025-10-09 10:03:44.732 2 INFO nova.virt.libvirt.driver [-] [instance: 29f00e1c-dcdd-4a28-b141-a900eb34b836] Instance spawned successfully.#033[00m
Oct  9 10:03:44 compute-1 nova_compute[162974]: 2025-10-09 10:03:44.732 2 DEBUG nova.virt.libvirt.driver [None req-41164a7a-99bb-4324-8ad5-28627246f8c8 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: 29f00e1c-dcdd-4a28-b141-a900eb34b836] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct  9 10:03:44 compute-1 nova_compute[162974]: 2025-10-09 10:03:44.742 2 DEBUG nova.compute.manager [None req-433ad811-a058-4508-a429-c5f40f506b5b - - - - - -] [instance: 29f00e1c-dcdd-4a28-b141-a900eb34b836] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  9 10:03:44 compute-1 nova_compute[162974]: 2025-10-09 10:03:44.746 2 DEBUG nova.compute.manager [None req-433ad811-a058-4508-a429-c5f40f506b5b - - - - - -] [instance: 29f00e1c-dcdd-4a28-b141-a900eb34b836] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  9 10:03:44 compute-1 nova_compute[162974]: 2025-10-09 10:03:44.750 2 DEBUG nova.virt.libvirt.driver [None req-41164a7a-99bb-4324-8ad5-28627246f8c8 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: 29f00e1c-dcdd-4a28-b141-a900eb34b836] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  9 10:03:44 compute-1 nova_compute[162974]: 2025-10-09 10:03:44.750 2 DEBUG nova.virt.libvirt.driver [None req-41164a7a-99bb-4324-8ad5-28627246f8c8 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: 29f00e1c-dcdd-4a28-b141-a900eb34b836] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  9 10:03:44 compute-1 nova_compute[162974]: 2025-10-09 10:03:44.750 2 DEBUG nova.virt.libvirt.driver [None req-41164a7a-99bb-4324-8ad5-28627246f8c8 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: 29f00e1c-dcdd-4a28-b141-a900eb34b836] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  9 10:03:44 compute-1 nova_compute[162974]: 2025-10-09 10:03:44.751 2 DEBUG nova.virt.libvirt.driver [None req-41164a7a-99bb-4324-8ad5-28627246f8c8 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: 29f00e1c-dcdd-4a28-b141-a900eb34b836] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  9 10:03:44 compute-1 nova_compute[162974]: 2025-10-09 10:03:44.751 2 DEBUG nova.virt.libvirt.driver [None req-41164a7a-99bb-4324-8ad5-28627246f8c8 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: 29f00e1c-dcdd-4a28-b141-a900eb34b836] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  9 10:03:44 compute-1 nova_compute[162974]: 2025-10-09 10:03:44.752 2 DEBUG nova.virt.libvirt.driver [None req-41164a7a-99bb-4324-8ad5-28627246f8c8 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: 29f00e1c-dcdd-4a28-b141-a900eb34b836] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  9 10:03:44 compute-1 nova_compute[162974]: 2025-10-09 10:03:44.766 2 INFO nova.compute.manager [None req-433ad811-a058-4508-a429-c5f40f506b5b - - - - - -] [instance: 29f00e1c-dcdd-4a28-b141-a900eb34b836] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  9 10:03:44 compute-1 nova_compute[162974]: 2025-10-09 10:03:44.766 2 DEBUG nova.virt.driver [None req-433ad811-a058-4508-a429-c5f40f506b5b - - - - - -] Emitting event <LifecycleEvent: 1760004224.725822, 29f00e1c-dcdd-4a28-b141-a900eb34b836 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  9 10:03:44 compute-1 nova_compute[162974]: 2025-10-09 10:03:44.766 2 INFO nova.compute.manager [None req-433ad811-a058-4508-a429-c5f40f506b5b - - - - - -] [instance: 29f00e1c-dcdd-4a28-b141-a900eb34b836] VM Paused (Lifecycle Event)#033[00m
Oct  9 10:03:44 compute-1 nova_compute[162974]: 2025-10-09 10:03:44.784 2 DEBUG nova.compute.manager [None req-433ad811-a058-4508-a429-c5f40f506b5b - - - - - -] [instance: 29f00e1c-dcdd-4a28-b141-a900eb34b836] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  9 10:03:44 compute-1 nova_compute[162974]: 2025-10-09 10:03:44.786 2 DEBUG nova.virt.driver [None req-433ad811-a058-4508-a429-c5f40f506b5b - - - - - -] Emitting event <LifecycleEvent: 1760004224.7295604, 29f00e1c-dcdd-4a28-b141-a900eb34b836 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  9 10:03:44 compute-1 nova_compute[162974]: 2025-10-09 10:03:44.787 2 INFO nova.compute.manager [None req-433ad811-a058-4508-a429-c5f40f506b5b - - - - - -] [instance: 29f00e1c-dcdd-4a28-b141-a900eb34b836] VM Resumed (Lifecycle Event)#033[00m
Oct  9 10:03:44 compute-1 nova_compute[162974]: 2025-10-09 10:03:44.799 2 INFO nova.compute.manager [None req-41164a7a-99bb-4324-8ad5-28627246f8c8 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: 29f00e1c-dcdd-4a28-b141-a900eb34b836] Took 3.89 seconds to spawn the instance on the hypervisor.#033[00m
Oct  9 10:03:44 compute-1 nova_compute[162974]: 2025-10-09 10:03:44.799 2 DEBUG nova.compute.manager [None req-41164a7a-99bb-4324-8ad5-28627246f8c8 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: 29f00e1c-dcdd-4a28-b141-a900eb34b836] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  9 10:03:44 compute-1 nova_compute[162974]: 2025-10-09 10:03:44.800 2 DEBUG nova.compute.manager [None req-433ad811-a058-4508-a429-c5f40f506b5b - - - - - -] [instance: 29f00e1c-dcdd-4a28-b141-a900eb34b836] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  9 10:03:44 compute-1 nova_compute[162974]: 2025-10-09 10:03:44.806 2 DEBUG nova.compute.manager [None req-433ad811-a058-4508-a429-c5f40f506b5b - - - - - -] [instance: 29f00e1c-dcdd-4a28-b141-a900eb34b836] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  9 10:03:44 compute-1 nova_compute[162974]: 2025-10-09 10:03:44.833 2 INFO nova.compute.manager [None req-433ad811-a058-4508-a429-c5f40f506b5b - - - - - -] [instance: 29f00e1c-dcdd-4a28-b141-a900eb34b836] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  9 10:03:44 compute-1 nova_compute[162974]: 2025-10-09 10:03:44.849 2 INFO nova.compute.manager [None req-41164a7a-99bb-4324-8ad5-28627246f8c8 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: 29f00e1c-dcdd-4a28-b141-a900eb34b836] Took 4.64 seconds to build instance.#033[00m
Oct  9 10:03:44 compute-1 nova_compute[162974]: 2025-10-09 10:03:44.858 2 DEBUG oslo_concurrency.lockutils [None req-41164a7a-99bb-4324-8ad5-28627246f8c8 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Lock "29f00e1c-dcdd-4a28-b141-a900eb34b836" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 4.692s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  9 10:03:45 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:03:45 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 10:03:45 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:03:45.227 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 10:03:45 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 10:03:46 compute-1 nova_compute[162974]: 2025-10-09 10:03:46.395 2 DEBUG nova.compute.manager [req-a6df0fa2-43e1-4da7-9f46-51e3443e0a82 req-4c0728df-fe47-4eb6-a139-184b5c99491a b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: 29f00e1c-dcdd-4a28-b141-a900eb34b836] Received event network-vif-plugged-a450260b-c4da-4f56-bf08-713a5ccc3d0e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  9 10:03:46 compute-1 nova_compute[162974]: 2025-10-09 10:03:46.396 2 DEBUG oslo_concurrency.lockutils [req-a6df0fa2-43e1-4da7-9f46-51e3443e0a82 req-4c0728df-fe47-4eb6-a139-184b5c99491a b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] Acquiring lock "29f00e1c-dcdd-4a28-b141-a900eb34b836-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  9 10:03:46 compute-1 nova_compute[162974]: 2025-10-09 10:03:46.396 2 DEBUG oslo_concurrency.lockutils [req-a6df0fa2-43e1-4da7-9f46-51e3443e0a82 req-4c0728df-fe47-4eb6-a139-184b5c99491a b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] Lock "29f00e1c-dcdd-4a28-b141-a900eb34b836-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  9 10:03:46 compute-1 nova_compute[162974]: 2025-10-09 10:03:46.396 2 DEBUG oslo_concurrency.lockutils [req-a6df0fa2-43e1-4da7-9f46-51e3443e0a82 req-4c0728df-fe47-4eb6-a139-184b5c99491a b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] Lock "29f00e1c-dcdd-4a28-b141-a900eb34b836-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  9 10:03:46 compute-1 nova_compute[162974]: 2025-10-09 10:03:46.396 2 DEBUG nova.compute.manager [req-a6df0fa2-43e1-4da7-9f46-51e3443e0a82 req-4c0728df-fe47-4eb6-a139-184b5c99491a b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: 29f00e1c-dcdd-4a28-b141-a900eb34b836] No waiting events found dispatching network-vif-plugged-a450260b-c4da-4f56-bf08-713a5ccc3d0e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  9 10:03:46 compute-1 nova_compute[162974]: 2025-10-09 10:03:46.397 2 WARNING nova.compute.manager [req-a6df0fa2-43e1-4da7-9f46-51e3443e0a82 req-4c0728df-fe47-4eb6-a139-184b5c99491a b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: 29f00e1c-dcdd-4a28-b141-a900eb34b836] Received unexpected event network-vif-plugged-a450260b-c4da-4f56-bf08-713a5ccc3d0e for instance with vm_state active and task_state None.#033[00m
Oct  9 10:03:46 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:03:46 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 10:03:46 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:03:46.413 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 10:03:47 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:03:47 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:03:47 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:03:47.230 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:03:47 compute-1 ovn_controller[62080]: 2025-10-09T10:03:47Z|00103|binding|INFO|Releasing lport 56190dc5-983f-4623-a0ae-120f81d9f7de from this chassis (sb_readonly=0)
Oct  9 10:03:47 compute-1 NetworkManager[982]: <info>  [1760004227.5131] manager: (patch-br-int-to-provnet-ceb5df48-9471-46cc-b494-923d3260d7ae): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/72)
Oct  9 10:03:47 compute-1 NetworkManager[982]: <info>  [1760004227.5140] manager: (patch-provnet-ceb5df48-9471-46cc-b494-923d3260d7ae-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/73)
Oct  9 10:03:47 compute-1 nova_compute[162974]: 2025-10-09 10:03:47.521 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:03:47 compute-1 ovn_controller[62080]: 2025-10-09T10:03:47Z|00104|binding|INFO|Releasing lport 56190dc5-983f-4623-a0ae-120f81d9f7de from this chassis (sb_readonly=0)
Oct  9 10:03:47 compute-1 nova_compute[162974]: 2025-10-09 10:03:47.553 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:03:47 compute-1 nova_compute[162974]: 2025-10-09 10:03:47.556 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:03:47 compute-1 nova_compute[162974]: 2025-10-09 10:03:47.828 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:03:48 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:03:48 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 10:03:48 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:03:48.416 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 10:03:48 compute-1 nova_compute[162974]: 2025-10-09 10:03:48.446 2 DEBUG nova.compute.manager [req-fe97a5e4-40c0-4661-be33-04e24acb1a43 req-23cf0ba0-ee02-4cb6-905a-31dc5e902e74 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: 29f00e1c-dcdd-4a28-b141-a900eb34b836] Received event network-changed-a450260b-c4da-4f56-bf08-713a5ccc3d0e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  9 10:03:48 compute-1 nova_compute[162974]: 2025-10-09 10:03:48.447 2 DEBUG nova.compute.manager [req-fe97a5e4-40c0-4661-be33-04e24acb1a43 req-23cf0ba0-ee02-4cb6-905a-31dc5e902e74 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: 29f00e1c-dcdd-4a28-b141-a900eb34b836] Refreshing instance network info cache due to event network-changed-a450260b-c4da-4f56-bf08-713a5ccc3d0e. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  9 10:03:48 compute-1 nova_compute[162974]: 2025-10-09 10:03:48.447 2 DEBUG oslo_concurrency.lockutils [req-fe97a5e4-40c0-4661-be33-04e24acb1a43 req-23cf0ba0-ee02-4cb6-905a-31dc5e902e74 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] Acquiring lock "refresh_cache-29f00e1c-dcdd-4a28-b141-a900eb34b836" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  9 10:03:48 compute-1 nova_compute[162974]: 2025-10-09 10:03:48.447 2 DEBUG oslo_concurrency.lockutils [req-fe97a5e4-40c0-4661-be33-04e24acb1a43 req-23cf0ba0-ee02-4cb6-905a-31dc5e902e74 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] Acquired lock "refresh_cache-29f00e1c-dcdd-4a28-b141-a900eb34b836" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  9 10:03:48 compute-1 nova_compute[162974]: 2025-10-09 10:03:48.447 2 DEBUG nova.network.neutron [req-fe97a5e4-40c0-4661-be33-04e24acb1a43 req-23cf0ba0-ee02-4cb6-905a-31dc5e902e74 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: 29f00e1c-dcdd-4a28-b141-a900eb34b836] Refreshing network info cache for port a450260b-c4da-4f56-bf08-713a5ccc3d0e _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  9 10:03:48 compute-1 nova_compute[162974]: 2025-10-09 10:03:48.497 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:03:48 compute-1 podman[172794]: 2025-10-09 10:03:48.553329074 +0000 UTC m=+0.064537409 container health_status 36bf385830b85e085dc3ff9729118ef141f0f1a0fdc2513f7947ab6ab795421e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct  9 10:03:49 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:03:49 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 10:03:49 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:03:49.232 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 10:03:49 compute-1 nova_compute[162974]: 2025-10-09 10:03:49.713 2 DEBUG nova.network.neutron [req-fe97a5e4-40c0-4661-be33-04e24acb1a43 req-23cf0ba0-ee02-4cb6-905a-31dc5e902e74 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: 29f00e1c-dcdd-4a28-b141-a900eb34b836] Updated VIF entry in instance network info cache for port a450260b-c4da-4f56-bf08-713a5ccc3d0e. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  9 10:03:49 compute-1 nova_compute[162974]: 2025-10-09 10:03:49.714 2 DEBUG nova.network.neutron [req-fe97a5e4-40c0-4661-be33-04e24acb1a43 req-23cf0ba0-ee02-4cb6-905a-31dc5e902e74 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: 29f00e1c-dcdd-4a28-b141-a900eb34b836] Updating instance_info_cache with network_info: [{"id": "a450260b-c4da-4f56-bf08-713a5ccc3d0e", "address": "fa:16:3e:ac:02:fe", "network": {"id": "a733533a-76c7-46e6-89e3-803597fe93b6", "bridge": "br-int", "label": "tempest-network-smoke--384651779", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.199", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c69d102fb5504f48809f5fc47f1cb831", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa450260b-c4", "ovs_interfaceid": "a450260b-c4da-4f56-bf08-713a5ccc3d0e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  9 10:03:49 compute-1 nova_compute[162974]: 2025-10-09 10:03:49.728 2 DEBUG oslo_concurrency.lockutils [req-fe97a5e4-40c0-4661-be33-04e24acb1a43 req-23cf0ba0-ee02-4cb6-905a-31dc5e902e74 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] Releasing lock "refresh_cache-29f00e1c-dcdd-4a28-b141-a900eb34b836" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  9 10:03:50 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:03:50 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 10:03:50 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:03:50.418 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 10:03:50 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 10:03:51 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:03:51 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:03:51 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:03:51.235 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:03:52 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:03:52 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:03:52 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:03:52.422 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:03:52 compute-1 nova_compute[162974]: 2025-10-09 10:03:52.829 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:03:53 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:03:53 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:03:53 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:03:53.237 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:03:53 compute-1 nova_compute[162974]: 2025-10-09 10:03:53.498 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:03:54 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:03:54 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:03:54 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:03:54.424 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:03:55 compute-1 nova_compute[162974]: 2025-10-09 10:03:55.114 2 DEBUG oslo_service.periodic_task [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  9 10:03:55 compute-1 nova_compute[162974]: 2025-10-09 10:03:55.115 2 DEBUG oslo_service.periodic_task [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  9 10:03:55 compute-1 nova_compute[162974]: 2025-10-09 10:03:55.131 2 DEBUG oslo_concurrency.lockutils [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  9 10:03:55 compute-1 nova_compute[162974]: 2025-10-09 10:03:55.131 2 DEBUG oslo_concurrency.lockutils [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  9 10:03:55 compute-1 nova_compute[162974]: 2025-10-09 10:03:55.132 2 DEBUG oslo_concurrency.lockutils [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  9 10:03:55 compute-1 nova_compute[162974]: 2025-10-09 10:03:55.132 2 DEBUG nova.compute.resource_tracker [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  9 10:03:55 compute-1 nova_compute[162974]: 2025-10-09 10:03:55.132 2 DEBUG oslo_concurrency.processutils [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  9 10:03:55 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:03:55 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:03:55 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:03:55.238 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:03:55 compute-1 ceph-mon[9795]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct  9 10:03:55 compute-1 ceph-mon[9795]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/4083760399' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  9 10:03:55 compute-1 nova_compute[162974]: 2025-10-09 10:03:55.480 2 DEBUG oslo_concurrency.processutils [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.347s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  9 10:03:55 compute-1 nova_compute[162974]: 2025-10-09 10:03:55.522 2 DEBUG nova.virt.libvirt.driver [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] skipping disk for instance-0000000d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  9 10:03:55 compute-1 nova_compute[162974]: 2025-10-09 10:03:55.522 2 DEBUG nova.virt.libvirt.driver [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] skipping disk for instance-0000000d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  9 10:03:55 compute-1 nova_compute[162974]: 2025-10-09 10:03:55.724 2 WARNING nova.virt.libvirt.driver [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  9 10:03:55 compute-1 nova_compute[162974]: 2025-10-09 10:03:55.725 2 DEBUG nova.compute.resource_tracker [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4841MB free_disk=59.96738052368164GB free_vcpus=3 pci_devices=[{"dev_id": "pci_0000_00_03_3", "address": "0000:00:03.3", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_0", "address": "0000:00:1f.0", "product_id": "2918", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2918", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_4", "address": "0000:00:03.4", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_7", "address": "0000:00:03.7", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_3", "address": "0000:00:02.3", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_7", "address": "0000:00:02.7", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_05_00_0", "address": "0000:05:00.0", "product_id": "1045", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1045", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_2", "address": "0000:00:03.2", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_6", "address": "0000:00:02.6", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_3", "address": "0000:00:1f.3", "product_id": "2930", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2930", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_2", "address": "0000:00:1f.2", "product_id": "2922", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2922", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_03_00_0", "address": "0000:03:00.0", "product_id": "1041", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1041", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_2", "address": "0000:00:02.2", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_5", "address": "0000:00:02.5", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_04_00_0", "address": "0000:04:00.0", "product_id": "1042", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1042", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_5", "address": "0000:00:03.5", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_07_00_0", "address": "0000:07:00.0", "product_id": "1041", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1041", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_1", "address": "0000:00:03.1", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "29c0", "vendor_id": "8086", "numa_node": null, "label": "label_8086_29c0", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_06_00_0", "address": "0000:06:00.0", "product_id": "1044", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1044", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_02_01_0", "address": "0000:02:01.0", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_6", "address": "0000:00:03.6", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_1", "address": "0000:00:02.1", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_4", "address": "0000:00:02.4", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_01_00_0", "address": "0000:01:00.0", "product_id": "000e", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000e", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  9 10:03:55 compute-1 nova_compute[162974]: 2025-10-09 10:03:55.725 2 DEBUG oslo_concurrency.lockutils [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  9 10:03:55 compute-1 nova_compute[162974]: 2025-10-09 10:03:55.725 2 DEBUG oslo_concurrency.lockutils [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  9 10:03:55 compute-1 ovn_controller[62080]: 2025-10-09T10:03:55Z|00016|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:ac:02:fe 10.100.0.4
Oct  9 10:03:55 compute-1 ovn_controller[62080]: 2025-10-09T10:03:55Z|00017|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:ac:02:fe 10.100.0.4
Oct  9 10:03:55 compute-1 nova_compute[162974]: 2025-10-09 10:03:55.773 2 DEBUG nova.compute.resource_tracker [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Instance 29f00e1c-dcdd-4a28-b141-a900eb34b836 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct  9 10:03:55 compute-1 nova_compute[162974]: 2025-10-09 10:03:55.773 2 DEBUG nova.compute.resource_tracker [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Total usable vcpus: 4, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  9 10:03:55 compute-1 nova_compute[162974]: 2025-10-09 10:03:55.773 2 DEBUG nova.compute.resource_tracker [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7680MB used_ram=640MB phys_disk=59GB used_disk=1GB total_vcpus=4 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  9 10:03:55 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 10:03:55 compute-1 nova_compute[162974]: 2025-10-09 10:03:55.797 2 DEBUG oslo_concurrency.processutils [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  9 10:03:56 compute-1 ceph-mon[9795]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct  9 10:03:56 compute-1 ceph-mon[9795]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/139525667' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  9 10:03:56 compute-1 nova_compute[162974]: 2025-10-09 10:03:56.136 2 DEBUG oslo_concurrency.processutils [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.339s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  9 10:03:56 compute-1 nova_compute[162974]: 2025-10-09 10:03:56.140 2 DEBUG nova.compute.provider_tree [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Inventory has not changed in ProviderTree for provider: 79aa81b0-5a5d-4643-a355-ec5461cb321a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  9 10:03:56 compute-1 nova_compute[162974]: 2025-10-09 10:03:56.153 2 DEBUG nova.scheduler.client.report [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Inventory has not changed for provider 79aa81b0-5a5d-4643-a355-ec5461cb321a based on inventory data: {'VCPU': {'total': 4, 'reserved': 0, 'min_unit': 1, 'max_unit': 4, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  9 10:03:56 compute-1 nova_compute[162974]: 2025-10-09 10:03:56.165 2 DEBUG nova.compute.resource_tracker [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  9 10:03:56 compute-1 nova_compute[162974]: 2025-10-09 10:03:56.166 2 DEBUG oslo_concurrency.lockutils [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.440s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  9 10:03:56 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:03:56 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:03:56 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:03:56.426 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:03:57 compute-1 nova_compute[162974]: 2025-10-09 10:03:57.165 2 DEBUG oslo_service.periodic_task [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  9 10:03:57 compute-1 nova_compute[162974]: 2025-10-09 10:03:57.166 2 DEBUG oslo_service.periodic_task [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  9 10:03:57 compute-1 nova_compute[162974]: 2025-10-09 10:03:57.166 2 DEBUG oslo_service.periodic_task [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  9 10:03:57 compute-1 nova_compute[162974]: 2025-10-09 10:03:57.166 2 DEBUG nova.compute.manager [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  9 10:03:57 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:03:57 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:03:57 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:03:57.241 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:03:57 compute-1 nova_compute[162974]: 2025-10-09 10:03:57.831 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:03:58 compute-1 nova_compute[162974]: 2025-10-09 10:03:58.114 2 DEBUG oslo_service.periodic_task [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  9 10:03:58 compute-1 nova_compute[162974]: 2025-10-09 10:03:58.115 2 DEBUG nova.compute.manager [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  9 10:03:58 compute-1 nova_compute[162974]: 2025-10-09 10:03:58.115 2 DEBUG nova.compute.manager [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct  9 10:03:58 compute-1 nova_compute[162974]: 2025-10-09 10:03:58.225 2 DEBUG oslo_concurrency.lockutils [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Acquiring lock "refresh_cache-29f00e1c-dcdd-4a28-b141-a900eb34b836" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  9 10:03:58 compute-1 nova_compute[162974]: 2025-10-09 10:03:58.226 2 DEBUG oslo_concurrency.lockutils [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Acquired lock "refresh_cache-29f00e1c-dcdd-4a28-b141-a900eb34b836" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  9 10:03:58 compute-1 nova_compute[162974]: 2025-10-09 10:03:58.226 2 DEBUG nova.network.neutron [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] [instance: 29f00e1c-dcdd-4a28-b141-a900eb34b836] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Oct  9 10:03:58 compute-1 nova_compute[162974]: 2025-10-09 10:03:58.226 2 DEBUG nova.objects.instance [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 29f00e1c-dcdd-4a28-b141-a900eb34b836 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  9 10:03:58 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:03:58 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:03:58 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:03:58.429 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:03:58 compute-1 nova_compute[162974]: 2025-10-09 10:03:58.501 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:03:58 compute-1 podman[172892]: 2025-10-09 10:03:58.53218059 +0000 UTC m=+0.042938940 container health_status dd7a5da700b8ba72ceba21d69aecf00384a339e317089fce3f8212a1183599aa (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_id=iscsid)
Oct  9 10:03:59 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:03:59 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:03:59 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:03:59.242 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:03:59 compute-1 nova_compute[162974]: 2025-10-09 10:03:59.354 2 DEBUG nova.network.neutron [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] [instance: 29f00e1c-dcdd-4a28-b141-a900eb34b836] Updating instance_info_cache with network_info: [{"id": "a450260b-c4da-4f56-bf08-713a5ccc3d0e", "address": "fa:16:3e:ac:02:fe", "network": {"id": "a733533a-76c7-46e6-89e3-803597fe93b6", "bridge": "br-int", "label": "tempest-network-smoke--384651779", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.199", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c69d102fb5504f48809f5fc47f1cb831", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa450260b-c4", "ovs_interfaceid": "a450260b-c4da-4f56-bf08-713a5ccc3d0e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  9 10:03:59 compute-1 nova_compute[162974]: 2025-10-09 10:03:59.365 2 DEBUG oslo_concurrency.lockutils [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Releasing lock "refresh_cache-29f00e1c-dcdd-4a28-b141-a900eb34b836" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  9 10:03:59 compute-1 nova_compute[162974]: 2025-10-09 10:03:59.366 2 DEBUG nova.compute.manager [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] [instance: 29f00e1c-dcdd-4a28-b141-a900eb34b836] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Oct  9 10:03:59 compute-1 nova_compute[162974]: 2025-10-09 10:03:59.366 2 DEBUG oslo_service.periodic_task [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  9 10:03:59 compute-1 nova_compute[162974]: 2025-10-09 10:03:59.366 2 DEBUG oslo_service.periodic_task [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  9 10:03:59 compute-1 nova_compute[162974]: 2025-10-09 10:03:59.366 2 DEBUG oslo_service.periodic_task [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  9 10:04:00 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:04:00 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:04:00 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:04:00.431 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:04:00 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 10:04:01 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:04:01 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 10:04:01 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:04:01.244 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 10:04:02 compute-1 nova_compute[162974]: 2025-10-09 10:04:02.029 2 INFO nova.compute.manager [None req-6b62f797-84dc-40aa-9241-278bd198f44a 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: 29f00e1c-dcdd-4a28-b141-a900eb34b836] Get console output#033[00m
Oct  9 10:04:02 compute-1 nova_compute[162974]: 2025-10-09 10:04:02.033 1023 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes#033[00m
Oct  9 10:04:02 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:04:02 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:04:02 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:04:02.434 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:04:02 compute-1 ovn_controller[62080]: 2025-10-09T10:04:02Z|00105|binding|INFO|Releasing lport 56190dc5-983f-4623-a0ae-120f81d9f7de from this chassis (sb_readonly=0)
Oct  9 10:04:02 compute-1 nova_compute[162974]: 2025-10-09 10:04:02.453 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:04:02 compute-1 ovn_controller[62080]: 2025-10-09T10:04:02Z|00106|binding|INFO|Releasing lport 56190dc5-983f-4623-a0ae-120f81d9f7de from this chassis (sb_readonly=0)
Oct  9 10:04:02 compute-1 nova_compute[162974]: 2025-10-09 10:04:02.514 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:04:02 compute-1 nova_compute[162974]: 2025-10-09 10:04:02.832 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:04:03 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:04:03 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:04:03 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:04:03.246 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:04:03 compute-1 nova_compute[162974]: 2025-10-09 10:04:03.503 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:04:03 compute-1 nova_compute[162974]: 2025-10-09 10:04:03.594 2 INFO nova.compute.manager [None req-1c13b45b-b245-474c-97cb-d987f2d83afc 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: 29f00e1c-dcdd-4a28-b141-a900eb34b836] Get console output#033[00m
Oct  9 10:04:03 compute-1 nova_compute[162974]: 2025-10-09 10:04:03.598 1023 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes#033[00m
Oct  9 10:04:04 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:04:04 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:04:04 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:04:04.437 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:04:04 compute-1 NetworkManager[982]: <info>  [1760004244.4412] manager: (patch-provnet-ceb5df48-9471-46cc-b494-923d3260d7ae-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/74)
Oct  9 10:04:04 compute-1 nova_compute[162974]: 2025-10-09 10:04:04.441 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:04:04 compute-1 NetworkManager[982]: <info>  [1760004244.4418] manager: (patch-br-int-to-provnet-ceb5df48-9471-46cc-b494-923d3260d7ae): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/75)
Oct  9 10:04:04 compute-1 nova_compute[162974]: 2025-10-09 10:04:04.497 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:04:04 compute-1 ovn_controller[62080]: 2025-10-09T10:04:04Z|00107|binding|INFO|Releasing lport 56190dc5-983f-4623-a0ae-120f81d9f7de from this chassis (sb_readonly=0)
Oct  9 10:04:04 compute-1 nova_compute[162974]: 2025-10-09 10:04:04.501 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:04:04 compute-1 nova_compute[162974]: 2025-10-09 10:04:04.617 2 INFO nova.compute.manager [None req-64d42f4c-0be6-4a6b-845d-545a6eabdd53 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: 29f00e1c-dcdd-4a28-b141-a900eb34b836] Get console output#033[00m
Oct  9 10:04:04 compute-1 nova_compute[162974]: 2025-10-09 10:04:04.619 1023 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes#033[00m
Oct  9 10:04:05 compute-1 ovn_metadata_agent[71054]: 2025-10-09 10:04:05.002 71059 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=11, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '86:53:6e', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '26:2f:47:35:f4:09'}, ipsec=False) old=SB_Global(nb_cfg=10) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  9 10:04:05 compute-1 nova_compute[162974]: 2025-10-09 10:04:05.002 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:04:05 compute-1 ovn_metadata_agent[71054]: 2025-10-09 10:04:05.003 71059 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 1 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct  9 10:04:05 compute-1 nova_compute[162974]: 2025-10-09 10:04:05.173 2 DEBUG nova.compute.manager [req-e7caa62b-5947-4da4-877b-e3d270d8e91a req-936bf328-3034-4246-b881-85aff15d9526 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: 29f00e1c-dcdd-4a28-b141-a900eb34b836] Received event network-changed-a450260b-c4da-4f56-bf08-713a5ccc3d0e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  9 10:04:05 compute-1 nova_compute[162974]: 2025-10-09 10:04:05.173 2 DEBUG nova.compute.manager [req-e7caa62b-5947-4da4-877b-e3d270d8e91a req-936bf328-3034-4246-b881-85aff15d9526 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: 29f00e1c-dcdd-4a28-b141-a900eb34b836] Refreshing instance network info cache due to event network-changed-a450260b-c4da-4f56-bf08-713a5ccc3d0e. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  9 10:04:05 compute-1 nova_compute[162974]: 2025-10-09 10:04:05.173 2 DEBUG oslo_concurrency.lockutils [req-e7caa62b-5947-4da4-877b-e3d270d8e91a req-936bf328-3034-4246-b881-85aff15d9526 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] Acquiring lock "refresh_cache-29f00e1c-dcdd-4a28-b141-a900eb34b836" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  9 10:04:05 compute-1 nova_compute[162974]: 2025-10-09 10:04:05.173 2 DEBUG oslo_concurrency.lockutils [req-e7caa62b-5947-4da4-877b-e3d270d8e91a req-936bf328-3034-4246-b881-85aff15d9526 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] Acquired lock "refresh_cache-29f00e1c-dcdd-4a28-b141-a900eb34b836" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  9 10:04:05 compute-1 nova_compute[162974]: 2025-10-09 10:04:05.174 2 DEBUG nova.network.neutron [req-e7caa62b-5947-4da4-877b-e3d270d8e91a req-936bf328-3034-4246-b881-85aff15d9526 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: 29f00e1c-dcdd-4a28-b141-a900eb34b836] Refreshing network info cache for port a450260b-c4da-4f56-bf08-713a5ccc3d0e _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  9 10:04:05 compute-1 nova_compute[162974]: 2025-10-09 10:04:05.216 2 DEBUG oslo_concurrency.lockutils [None req-eb719e6f-abfd-4b0e-9dfe-8c8e91b8f214 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Acquiring lock "29f00e1c-dcdd-4a28-b141-a900eb34b836" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  9 10:04:05 compute-1 nova_compute[162974]: 2025-10-09 10:04:05.216 2 DEBUG oslo_concurrency.lockutils [None req-eb719e6f-abfd-4b0e-9dfe-8c8e91b8f214 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Lock "29f00e1c-dcdd-4a28-b141-a900eb34b836" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  9 10:04:05 compute-1 nova_compute[162974]: 2025-10-09 10:04:05.216 2 DEBUG oslo_concurrency.lockutils [None req-eb719e6f-abfd-4b0e-9dfe-8c8e91b8f214 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Acquiring lock "29f00e1c-dcdd-4a28-b141-a900eb34b836-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  9 10:04:05 compute-1 nova_compute[162974]: 2025-10-09 10:04:05.217 2 DEBUG oslo_concurrency.lockutils [None req-eb719e6f-abfd-4b0e-9dfe-8c8e91b8f214 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Lock "29f00e1c-dcdd-4a28-b141-a900eb34b836-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  9 10:04:05 compute-1 nova_compute[162974]: 2025-10-09 10:04:05.217 2 DEBUG oslo_concurrency.lockutils [None req-eb719e6f-abfd-4b0e-9dfe-8c8e91b8f214 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Lock "29f00e1c-dcdd-4a28-b141-a900eb34b836-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  9 10:04:05 compute-1 nova_compute[162974]: 2025-10-09 10:04:05.218 2 INFO nova.compute.manager [None req-eb719e6f-abfd-4b0e-9dfe-8c8e91b8f214 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: 29f00e1c-dcdd-4a28-b141-a900eb34b836] Terminating instance#033[00m
Oct  9 10:04:05 compute-1 nova_compute[162974]: 2025-10-09 10:04:05.219 2 DEBUG nova.compute.manager [None req-eb719e6f-abfd-4b0e-9dfe-8c8e91b8f214 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: 29f00e1c-dcdd-4a28-b141-a900eb34b836] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct  9 10:04:05 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:04:05 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:04:05 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:04:05.248 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:04:05 compute-1 kernel: tapa450260b-c4 (unregistering): left promiscuous mode
Oct  9 10:04:05 compute-1 NetworkManager[982]: <info>  [1760004245.2574] device (tapa450260b-c4): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  9 10:04:05 compute-1 ovn_controller[62080]: 2025-10-09T10:04:05Z|00108|binding|INFO|Releasing lport a450260b-c4da-4f56-bf08-713a5ccc3d0e from this chassis (sb_readonly=0)
Oct  9 10:04:05 compute-1 ovn_controller[62080]: 2025-10-09T10:04:05Z|00109|binding|INFO|Setting lport a450260b-c4da-4f56-bf08-713a5ccc3d0e down in Southbound
Oct  9 10:04:05 compute-1 ovn_controller[62080]: 2025-10-09T10:04:05Z|00110|binding|INFO|Removing iface tapa450260b-c4 ovn-installed in OVS
Oct  9 10:04:05 compute-1 nova_compute[162974]: 2025-10-09 10:04:05.267 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:04:05 compute-1 ovn_metadata_agent[71054]: 2025-10-09 10:04:05.270 71059 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ac:02:fe 10.100.0.4'], port_security=['fa:16:3e:ac:02:fe 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '29f00e1c-dcdd-4a28-b141-a900eb34b836', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a733533a-76c7-46e6-89e3-803597fe93b6', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c69d102fb5504f48809f5fc47f1cb831', 'neutron:revision_number': '4', 'neutron:security_group_ids': '68293693-d770-49bf-b0b3-d26af71ce606', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b0fc8e9a-af61-4397-9551-67e71824e91c, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcc797b4850>], logical_port=a450260b-c4da-4f56-bf08-713a5ccc3d0e) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fcc797b4850>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  9 10:04:05 compute-1 ovn_metadata_agent[71054]: 2025-10-09 10:04:05.271 71059 INFO neutron.agent.ovn.metadata.agent [-] Port a450260b-c4da-4f56-bf08-713a5ccc3d0e in datapath a733533a-76c7-46e6-89e3-803597fe93b6 unbound from our chassis#033[00m
Oct  9 10:04:05 compute-1 ovn_metadata_agent[71054]: 2025-10-09 10:04:05.272 71059 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network a733533a-76c7-46e6-89e3-803597fe93b6, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct  9 10:04:05 compute-1 ovn_metadata_agent[71054]: 2025-10-09 10:04:05.273 165637 DEBUG oslo.privsep.daemon [-] privsep: reply[0458a028-735b-424a-80bd-6eaa46eba675]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  9 10:04:05 compute-1 ovn_metadata_agent[71054]: 2025-10-09 10:04:05.274 71059 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-a733533a-76c7-46e6-89e3-803597fe93b6 namespace which is not needed anymore#033[00m
Oct  9 10:04:05 compute-1 nova_compute[162974]: 2025-10-09 10:04:05.285 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:04:05 compute-1 systemd[1]: machine-qemu\x2d7\x2dinstance\x2d0000000d.scope: Deactivated successfully.
Oct  9 10:04:05 compute-1 systemd[1]: machine-qemu\x2d7\x2dinstance\x2d0000000d.scope: Consumed 10.866s CPU time.
Oct  9 10:04:05 compute-1 systemd-machined[120683]: Machine qemu-7-instance-0000000d terminated.
Oct  9 10:04:05 compute-1 neutron-haproxy-ovnmeta-a733533a-76c7-46e6-89e3-803597fe93b6[172779]: [NOTICE]   (172784) : haproxy version is 2.8.14-c23fe91
Oct  9 10:04:05 compute-1 neutron-haproxy-ovnmeta-a733533a-76c7-46e6-89e3-803597fe93b6[172779]: [NOTICE]   (172784) : path to executable is /usr/sbin/haproxy
Oct  9 10:04:05 compute-1 neutron-haproxy-ovnmeta-a733533a-76c7-46e6-89e3-803597fe93b6[172779]: [WARNING]  (172784) : Exiting Master process...
Oct  9 10:04:05 compute-1 neutron-haproxy-ovnmeta-a733533a-76c7-46e6-89e3-803597fe93b6[172779]: [WARNING]  (172784) : Exiting Master process...
Oct  9 10:04:05 compute-1 neutron-haproxy-ovnmeta-a733533a-76c7-46e6-89e3-803597fe93b6[172779]: [ALERT]    (172784) : Current worker (172786) exited with code 143 (Terminated)
Oct  9 10:04:05 compute-1 neutron-haproxy-ovnmeta-a733533a-76c7-46e6-89e3-803597fe93b6[172779]: [WARNING]  (172784) : All workers exited. Exiting... (0)
Oct  9 10:04:05 compute-1 systemd[1]: libpod-440589cb5a0cc730319e8d32eaa82acc9bc21e03ecfcd61daeee39ce7d4698b0.scope: Deactivated successfully.
Oct  9 10:04:05 compute-1 podman[172935]: 2025-10-09 10:04:05.382092196 +0000 UTC m=+0.033962658 container died 440589cb5a0cc730319e8d32eaa82acc9bc21e03ecfcd61daeee39ce7d4698b0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-a733533a-76c7-46e6-89e3-803597fe93b6, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct  9 10:04:05 compute-1 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-440589cb5a0cc730319e8d32eaa82acc9bc21e03ecfcd61daeee39ce7d4698b0-userdata-shm.mount: Deactivated successfully.
Oct  9 10:04:05 compute-1 systemd[1]: var-lib-containers-storage-overlay-e5124fff80fb25973ea1610265ddf668756c8cb9c257f51808250cf95045e143-merged.mount: Deactivated successfully.
Oct  9 10:04:05 compute-1 podman[172935]: 2025-10-09 10:04:05.410651838 +0000 UTC m=+0.062522299 container cleanup 440589cb5a0cc730319e8d32eaa82acc9bc21e03ecfcd61daeee39ce7d4698b0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-a733533a-76c7-46e6-89e3-803597fe93b6, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  9 10:04:05 compute-1 systemd[1]: libpod-conmon-440589cb5a0cc730319e8d32eaa82acc9bc21e03ecfcd61daeee39ce7d4698b0.scope: Deactivated successfully.
Oct  9 10:04:05 compute-1 kernel: tapa450260b-c4: entered promiscuous mode
Oct  9 10:04:05 compute-1 NetworkManager[982]: <info>  [1760004245.4311] manager: (tapa450260b-c4): new Tun device (/org/freedesktop/NetworkManager/Devices/76)
Oct  9 10:04:05 compute-1 nova_compute[162974]: 2025-10-09 10:04:05.433 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:04:05 compute-1 ovn_controller[62080]: 2025-10-09T10:04:05Z|00111|binding|INFO|Claiming lport a450260b-c4da-4f56-bf08-713a5ccc3d0e for this chassis.
Oct  9 10:04:05 compute-1 ovn_controller[62080]: 2025-10-09T10:04:05Z|00112|binding|INFO|a450260b-c4da-4f56-bf08-713a5ccc3d0e: Claiming fa:16:3e:ac:02:fe 10.100.0.4
Oct  9 10:04:05 compute-1 kernel: tapa450260b-c4 (unregistering): left promiscuous mode
Oct  9 10:04:05 compute-1 ovn_metadata_agent[71054]: 2025-10-09 10:04:05.439 71059 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ac:02:fe 10.100.0.4'], port_security=['fa:16:3e:ac:02:fe 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '29f00e1c-dcdd-4a28-b141-a900eb34b836', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a733533a-76c7-46e6-89e3-803597fe93b6', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c69d102fb5504f48809f5fc47f1cb831', 'neutron:revision_number': '4', 'neutron:security_group_ids': '68293693-d770-49bf-b0b3-d26af71ce606', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b0fc8e9a-af61-4397-9551-67e71824e91c, chassis=[<ovs.db.idl.Row object at 0x7fcc797b4850>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcc797b4850>], logical_port=a450260b-c4da-4f56-bf08-713a5ccc3d0e) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  9 10:04:05 compute-1 nova_compute[162974]: 2025-10-09 10:04:05.451 2 INFO nova.virt.libvirt.driver [-] [instance: 29f00e1c-dcdd-4a28-b141-a900eb34b836] Instance destroyed successfully.#033[00m
Oct  9 10:04:05 compute-1 nova_compute[162974]: 2025-10-09 10:04:05.451 2 DEBUG nova.objects.instance [None req-eb719e6f-abfd-4b0e-9dfe-8c8e91b8f214 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Lazy-loading 'resources' on Instance uuid 29f00e1c-dcdd-4a28-b141-a900eb34b836 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  9 10:04:05 compute-1 ovn_controller[62080]: 2025-10-09T10:04:05Z|00113|binding|INFO|Setting lport a450260b-c4da-4f56-bf08-713a5ccc3d0e ovn-installed in OVS
Oct  9 10:04:05 compute-1 ovn_controller[62080]: 2025-10-09T10:04:05Z|00114|binding|INFO|Setting lport a450260b-c4da-4f56-bf08-713a5ccc3d0e up in Southbound
Oct  9 10:04:05 compute-1 ovn_controller[62080]: 2025-10-09T10:04:05Z|00115|binding|INFO|Releasing lport a450260b-c4da-4f56-bf08-713a5ccc3d0e from this chassis (sb_readonly=1)
Oct  9 10:04:05 compute-1 ovn_controller[62080]: 2025-10-09T10:04:05Z|00116|if_status|INFO|Not setting lport a450260b-c4da-4f56-bf08-713a5ccc3d0e down as sb is readonly
Oct  9 10:04:05 compute-1 ovn_controller[62080]: 2025-10-09T10:04:05Z|00117|binding|INFO|Removing iface tapa450260b-c4 ovn-installed in OVS
Oct  9 10:04:05 compute-1 nova_compute[162974]: 2025-10-09 10:04:05.456 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:04:05 compute-1 ovn_controller[62080]: 2025-10-09T10:04:05Z|00118|binding|INFO|Releasing lport a450260b-c4da-4f56-bf08-713a5ccc3d0e from this chassis (sb_readonly=0)
Oct  9 10:04:05 compute-1 ovn_controller[62080]: 2025-10-09T10:04:05Z|00119|binding|INFO|Setting lport a450260b-c4da-4f56-bf08-713a5ccc3d0e down in Southbound
Oct  9 10:04:05 compute-1 nova_compute[162974]: 2025-10-09 10:04:05.466 2 DEBUG nova.virt.libvirt.vif [None req-eb719e6f-abfd-4b0e-9dfe-8c8e91b8f214 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-09T10:03:39Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-697662314',display_name='tempest-TestNetworkBasicOps-server-697662314',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-697662314',id=13,image_ref='9546778e-959c-466e-9bef-81ace5bd1cc5',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJtKLJ6IG9u4a8nuHneFynw1vBGpmAOOthC0luN75md/pSNPLJ1OiBs1QaWTfRgLBRYBcOf7wBzJd4+LCaHfI9OClhJh7S3mGctEWrkgZF/O/aOkt4rBN7LklD620tBk2Q==',key_name='tempest-TestNetworkBasicOps-110254677',keypairs=<?>,launch_index=0,launched_at=2025-10-09T10:03:44Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='c69d102fb5504f48809f5fc47f1cb831',ramdisk_id='',reservation_id='r-4alywc2l',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='9546778e-959c-466e-9bef-81ace5bd1cc5',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-74406332',owner_user_name='tempest-TestNetworkBasicOps-74406332-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-09T10:03:44Z,user_data=None,user_id='2351e05157514d1995a1ea4151d12fee',uuid=29f00e1c-dcdd-4a28-b141-a900eb34b836,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "a450260b-c4da-4f56-bf08-713a5ccc3d0e", "address": "fa:16:3e:ac:02:fe", "network": {"id": "a733533a-76c7-46e6-89e3-803597fe93b6", "bridge": "br-int", "label": "tempest-network-smoke--384651779", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 
4, "meta": {}, "floating_ips": [{"address": "192.168.122.199", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c69d102fb5504f48809f5fc47f1cb831", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa450260b-c4", "ovs_interfaceid": "a450260b-c4da-4f56-bf08-713a5ccc3d0e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  9 10:04:05 compute-1 nova_compute[162974]: 2025-10-09 10:04:05.466 2 DEBUG nova.network.os_vif_util [None req-eb719e6f-abfd-4b0e-9dfe-8c8e91b8f214 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Converting VIF {"id": "a450260b-c4da-4f56-bf08-713a5ccc3d0e", "address": "fa:16:3e:ac:02:fe", "network": {"id": "a733533a-76c7-46e6-89e3-803597fe93b6", "bridge": "br-int", "label": "tempest-network-smoke--384651779", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.199", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c69d102fb5504f48809f5fc47f1cb831", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa450260b-c4", "ovs_interfaceid": "a450260b-c4da-4f56-bf08-713a5ccc3d0e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  9 10:04:05 compute-1 nova_compute[162974]: 2025-10-09 10:04:05.467 2 DEBUG nova.network.os_vif_util [None req-eb719e6f-abfd-4b0e-9dfe-8c8e91b8f214 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:ac:02:fe,bridge_name='br-int',has_traffic_filtering=True,id=a450260b-c4da-4f56-bf08-713a5ccc3d0e,network=Network(a733533a-76c7-46e6-89e3-803597fe93b6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa450260b-c4') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  9 10:04:05 compute-1 nova_compute[162974]: 2025-10-09 10:04:05.467 2 DEBUG os_vif [None req-eb719e6f-abfd-4b0e-9dfe-8c8e91b8f214 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:ac:02:fe,bridge_name='br-int',has_traffic_filtering=True,id=a450260b-c4da-4f56-bf08-713a5ccc3d0e,network=Network(a733533a-76c7-46e6-89e3-803597fe93b6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa450260b-c4') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  9 10:04:05 compute-1 nova_compute[162974]: 2025-10-09 10:04:05.468 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:04:05 compute-1 nova_compute[162974]: 2025-10-09 10:04:05.470 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa450260b-c4, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  9 10:04:05 compute-1 nova_compute[162974]: 2025-10-09 10:04:05.471 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:04:05 compute-1 ovn_metadata_agent[71054]: 2025-10-09 10:04:05.470 71059 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ac:02:fe 10.100.0.4'], port_security=['fa:16:3e:ac:02:fe 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '29f00e1c-dcdd-4a28-b141-a900eb34b836', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a733533a-76c7-46e6-89e3-803597fe93b6', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c69d102fb5504f48809f5fc47f1cb831', 'neutron:revision_number': '4', 'neutron:security_group_ids': '68293693-d770-49bf-b0b3-d26af71ce606', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b0fc8e9a-af61-4397-9551-67e71824e91c, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcc797b4850>], logical_port=a450260b-c4da-4f56-bf08-713a5ccc3d0e) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fcc797b4850>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  9 10:04:05 compute-1 nova_compute[162974]: 2025-10-09 10:04:05.473 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  9 10:04:05 compute-1 podman[172960]: 2025-10-09 10:04:05.473445195 +0000 UTC m=+0.047782140 container remove 440589cb5a0cc730319e8d32eaa82acc9bc21e03ecfcd61daeee39ce7d4698b0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-a733533a-76c7-46e6-89e3-803597fe93b6, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001)
Oct  9 10:04:05 compute-1 nova_compute[162974]: 2025-10-09 10:04:05.475 2 INFO os_vif [None req-eb719e6f-abfd-4b0e-9dfe-8c8e91b8f214 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:ac:02:fe,bridge_name='br-int',has_traffic_filtering=True,id=a450260b-c4da-4f56-bf08-713a5ccc3d0e,network=Network(a733533a-76c7-46e6-89e3-803597fe93b6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa450260b-c4')#033[00m
Oct  9 10:04:05 compute-1 ovn_metadata_agent[71054]: 2025-10-09 10:04:05.482 165637 DEBUG oslo.privsep.daemon [-] privsep: reply[82ed49b1-d13d-4012-becc-71a21eff88aa]: (4, ('Thu Oct  9 10:04:05 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-a733533a-76c7-46e6-89e3-803597fe93b6 (440589cb5a0cc730319e8d32eaa82acc9bc21e03ecfcd61daeee39ce7d4698b0)\n440589cb5a0cc730319e8d32eaa82acc9bc21e03ecfcd61daeee39ce7d4698b0\nThu Oct  9 10:04:05 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-a733533a-76c7-46e6-89e3-803597fe93b6 (440589cb5a0cc730319e8d32eaa82acc9bc21e03ecfcd61daeee39ce7d4698b0)\n440589cb5a0cc730319e8d32eaa82acc9bc21e03ecfcd61daeee39ce7d4698b0\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  9 10:04:05 compute-1 ovn_metadata_agent[71054]: 2025-10-09 10:04:05.483 165637 DEBUG oslo.privsep.daemon [-] privsep: reply[6dec85df-2eeb-413c-8baf-e93272c8978c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  9 10:04:05 compute-1 ovn_metadata_agent[71054]: 2025-10-09 10:04:05.484 71059 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa733533a-70, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  9 10:04:05 compute-1 kernel: tapa733533a-70: left promiscuous mode
Oct  9 10:04:05 compute-1 nova_compute[162974]: 2025-10-09 10:04:05.492 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:04:05 compute-1 nova_compute[162974]: 2025-10-09 10:04:05.500 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:04:05 compute-1 ovn_metadata_agent[71054]: 2025-10-09 10:04:05.502 165637 DEBUG oslo.privsep.daemon [-] privsep: reply[de9fa4d1-09f5-44b7-a93d-13b48ef9f74a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  9 10:04:05 compute-1 ovn_metadata_agent[71054]: 2025-10-09 10:04:05.517 165637 DEBUG oslo.privsep.daemon [-] privsep: reply[e502466b-c8c8-410b-ac93-d2199c15c99c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  9 10:04:05 compute-1 ovn_metadata_agent[71054]: 2025-10-09 10:04:05.519 165637 DEBUG oslo.privsep.daemon [-] privsep: reply[9e835a0c-9e76-4a80-a268-466216f59254]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  9 10:04:05 compute-1 ovn_metadata_agent[71054]: 2025-10-09 10:04:05.532 165637 DEBUG oslo.privsep.daemon [-] privsep: reply[f689ef73-c3e8-43c6-a22e-3da92bc68e4e]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 190765, 'reachable_time': 21811, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 172993, 'error': None, 'target': 'ovnmeta-a733533a-76c7-46e6-89e3-803597fe93b6', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  9 10:04:05 compute-1 ovn_metadata_agent[71054]: 2025-10-09 10:04:05.534 71273 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-a733533a-76c7-46e6-89e3-803597fe93b6 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct  9 10:04:05 compute-1 ovn_metadata_agent[71054]: 2025-10-09 10:04:05.534 71273 DEBUG oslo.privsep.daemon [-] privsep: reply[57191f81-4250-4832-b432-f6907a0887a0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  9 10:04:05 compute-1 systemd[1]: run-netns-ovnmeta\x2da733533a\x2d76c7\x2d46e6\x2d89e3\x2d803597fe93b6.mount: Deactivated successfully.
Oct  9 10:04:05 compute-1 ovn_metadata_agent[71054]: 2025-10-09 10:04:05.535 71059 INFO neutron.agent.ovn.metadata.agent [-] Port a450260b-c4da-4f56-bf08-713a5ccc3d0e in datapath a733533a-76c7-46e6-89e3-803597fe93b6 unbound from our chassis#033[00m
Oct  9 10:04:05 compute-1 ovn_metadata_agent[71054]: 2025-10-09 10:04:05.536 71059 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network a733533a-76c7-46e6-89e3-803597fe93b6, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct  9 10:04:05 compute-1 ovn_metadata_agent[71054]: 2025-10-09 10:04:05.536 165637 DEBUG oslo.privsep.daemon [-] privsep: reply[a4d9192a-1600-471e-a27b-5643cb1d9600]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  9 10:04:05 compute-1 ovn_metadata_agent[71054]: 2025-10-09 10:04:05.537 71059 INFO neutron.agent.ovn.metadata.agent [-] Port a450260b-c4da-4f56-bf08-713a5ccc3d0e in datapath a733533a-76c7-46e6-89e3-803597fe93b6 unbound from our chassis#033[00m
Oct  9 10:04:05 compute-1 ovn_metadata_agent[71054]: 2025-10-09 10:04:05.538 71059 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network a733533a-76c7-46e6-89e3-803597fe93b6, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct  9 10:04:05 compute-1 ovn_metadata_agent[71054]: 2025-10-09 10:04:05.538 165637 DEBUG oslo.privsep.daemon [-] privsep: reply[d907335e-c4f7-4ede-86b0-025c627aa913]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
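The ovn_metadata_agent lines above record port a450260b-c4da-4f56-bf08-713a5ccc3d0e being unbound from this chassis and its ovnmeta- namespace being torn down. When auditing many such teardowns, the events can be pulled out mechanically; a minimal sketch in Python (the regex is an assumption derived from the message text in this excerpt, not a documented Neutron log format):

```python
import re

# Message text as it appears in this journal excerpt (assumed stable enough
# to match on; Neutron does not guarantee log message wording).
UNBOUND_RE = re.compile(
    r"Port (?P<port>[0-9a-f-]{36}) in datapath (?P<datapath>[0-9a-f-]{36}) "
    r"unbound from our chassis"
)

def find_unbound_ports(lines):
    """Return (port_uuid, datapath_uuid) for each 'unbound' event."""
    return [
        (m.group("port"), m.group("datapath"))
        for line in lines
        if (m := UNBOUND_RE.search(line))
    ]

sample = [
    "2025-10-09 10:04:05.535 71059 INFO neutron.agent.ovn.metadata.agent [-] "
    "Port a450260b-c4da-4f56-bf08-713a5ccc3d0e in datapath "
    "a733533a-76c7-46e6-89e3-803597fe93b6 unbound from our chassis",
]
print(find_unbound_ports(sample))
```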
Oct  9 10:04:05 compute-1 nova_compute[162974]: 2025-10-09 10:04:05.646 2 INFO nova.virt.libvirt.driver [None req-eb719e6f-abfd-4b0e-9dfe-8c8e91b8f214 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: 29f00e1c-dcdd-4a28-b141-a900eb34b836] Deleting instance files /var/lib/nova/instances/29f00e1c-dcdd-4a28-b141-a900eb34b836_del#033[00m
Oct  9 10:04:05 compute-1 nova_compute[162974]: 2025-10-09 10:04:05.647 2 INFO nova.virt.libvirt.driver [None req-eb719e6f-abfd-4b0e-9dfe-8c8e91b8f214 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: 29f00e1c-dcdd-4a28-b141-a900eb34b836] Deletion of /var/lib/nova/instances/29f00e1c-dcdd-4a28-b141-a900eb34b836_del complete#033[00m
Oct  9 10:04:05 compute-1 nova_compute[162974]: 2025-10-09 10:04:05.688 2 INFO nova.compute.manager [None req-eb719e6f-abfd-4b0e-9dfe-8c8e91b8f214 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: 29f00e1c-dcdd-4a28-b141-a900eb34b836] Took 0.47 seconds to destroy the instance on the hypervisor.#033[00m
Oct  9 10:04:05 compute-1 nova_compute[162974]: 2025-10-09 10:04:05.688 2 DEBUG oslo.service.loopingcall [None req-eb719e6f-abfd-4b0e-9dfe-8c8e91b8f214 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct  9 10:04:05 compute-1 nova_compute[162974]: 2025-10-09 10:04:05.689 2 DEBUG nova.compute.manager [-] [instance: 29f00e1c-dcdd-4a28-b141-a900eb34b836] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct  9 10:04:05 compute-1 nova_compute[162974]: 2025-10-09 10:04:05.689 2 DEBUG nova.network.neutron [-] [instance: 29f00e1c-dcdd-4a28-b141-a900eb34b836] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct  9 10:04:05 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 10:04:06 compute-1 ovn_metadata_agent[71054]: 2025-10-09 10:04:06.005 71059 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=1479fb1d-afaa-427a-bdce-40294d3573d2, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '11'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  9 10:04:06 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:04:06 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:04:06 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:04:06.439 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:04:06 compute-1 nova_compute[162974]: 2025-10-09 10:04:06.504 2 DEBUG nova.network.neutron [-] [instance: 29f00e1c-dcdd-4a28-b141-a900eb34b836] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  9 10:04:06 compute-1 nova_compute[162974]: 2025-10-09 10:04:06.515 2 INFO nova.compute.manager [-] [instance: 29f00e1c-dcdd-4a28-b141-a900eb34b836] Took 0.83 seconds to deallocate network for instance.#033[00m
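nova-compute reports per-phase timings for the delete above (0.47 s to destroy the instance on the hypervisor, 0.83 s to deallocate its network). A small sketch that scrapes those "Took N seconds to …" figures from journal lines, useful for spotting slow phases across many deletes (pattern assumed from this excerpt):

```python
import re

# Matches e.g. "Took 0.47 seconds to destroy the instance on the hypervisor."
TOOK_RE = re.compile(r"Took (?P<secs>[\d.]+) seconds to (?P<what>[^.#]+)")

def phase_timings(lines):
    """Map each reported phase description to its duration in seconds."""
    return {
        m.group("what").strip(): float(m.group("secs"))
        for line in lines
        if (m := TOOK_RE.search(line))
    }

sample = [
    "Took 0.47 seconds to destroy the instance on the hypervisor.",
    "Took 0.83 seconds to deallocate network for instance.",
]
print(phase_timings(sample))
```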
Oct  9 10:04:06 compute-1 nova_compute[162974]: 2025-10-09 10:04:06.546 2 DEBUG oslo_concurrency.lockutils [None req-eb719e6f-abfd-4b0e-9dfe-8c8e91b8f214 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  9 10:04:06 compute-1 nova_compute[162974]: 2025-10-09 10:04:06.547 2 DEBUG oslo_concurrency.lockutils [None req-eb719e6f-abfd-4b0e-9dfe-8c8e91b8f214 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  9 10:04:06 compute-1 nova_compute[162974]: 2025-10-09 10:04:06.584 2 DEBUG nova.compute.manager [req-ea3375da-de79-4837-bcc6-19319315c01d req-c967bb49-92a5-47b0-93a4-b0574e425cfe b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: 29f00e1c-dcdd-4a28-b141-a900eb34b836] Received event network-vif-deleted-a450260b-c4da-4f56-bf08-713a5ccc3d0e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  9 10:04:06 compute-1 nova_compute[162974]: 2025-10-09 10:04:06.598 2 DEBUG oslo_concurrency.processutils [None req-eb719e6f-abfd-4b0e-9dfe-8c8e91b8f214 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  9 10:04:06 compute-1 ceph-mon[9795]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct  9 10:04:06 compute-1 ceph-mon[9795]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2428919325' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  9 10:04:06 compute-1 nova_compute[162974]: 2025-10-09 10:04:06.945 2 DEBUG oslo_concurrency.processutils [None req-eb719e6f-abfd-4b0e-9dfe-8c8e91b8f214 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.346s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  9 10:04:06 compute-1 nova_compute[162974]: 2025-10-09 10:04:06.949 2 DEBUG nova.compute.provider_tree [None req-eb719e6f-abfd-4b0e-9dfe-8c8e91b8f214 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Inventory has not changed in ProviderTree for provider: 79aa81b0-5a5d-4643-a355-ec5461cb321a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  9 10:04:06 compute-1 nova_compute[162974]: 2025-10-09 10:04:06.964 2 DEBUG nova.scheduler.client.report [None req-eb719e6f-abfd-4b0e-9dfe-8c8e91b8f214 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Inventory has not changed for provider 79aa81b0-5a5d-4643-a355-ec5461cb321a based on inventory data: {'VCPU': {'total': 4, 'reserved': 0, 'min_unit': 1, 'max_unit': 4, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
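The inventory dict logged above fixes the host's schedulable capacity: Placement computes usable capacity per resource class as (total - reserved) × allocation_ratio. Applying that formula to the figures from the line above (values copied verbatim):

```python
# Inventory as reported by the resource tracker in the line above.
inventory = {
    "VCPU":      {"total": 4,    "reserved": 0,   "allocation_ratio": 4.0},
    "MEMORY_MB": {"total": 7680, "reserved": 512, "allocation_ratio": 1.0},
    "DISK_GB":   {"total": 59,   "reserved": 1,   "allocation_ratio": 0.9},
}

def effective_capacity(inv):
    """Placement's capacity formula: (total - reserved) * allocation_ratio."""
    return {
        rc: (v["total"] - v["reserved"]) * v["allocation_ratio"]
        for rc, v in inv.items()
    }

# VCPU -> 16.0 schedulable vCPUs, MEMORY_MB -> 7168.0, DISK_GB -> ~52.2
print(effective_capacity(inventory))
```

So this 4-vCPU host advertises 16 schedulable vCPUs under the 4.0 CPU overcommit ratio, while disk is deliberately under-committed at 0.9.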
Oct  9 10:04:06 compute-1 nova_compute[162974]: 2025-10-09 10:04:06.982 2 DEBUG oslo_concurrency.lockutils [None req-eb719e6f-abfd-4b0e-9dfe-8c8e91b8f214 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.435s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  9 10:04:06 compute-1 nova_compute[162974]: 2025-10-09 10:04:06.998 2 INFO nova.scheduler.client.report [None req-eb719e6f-abfd-4b0e-9dfe-8c8e91b8f214 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Deleted allocations for instance 29f00e1c-dcdd-4a28-b141-a900eb34b836#033[00m
Oct  9 10:04:07 compute-1 nova_compute[162974]: 2025-10-09 10:04:07.040 2 DEBUG nova.network.neutron [req-e7caa62b-5947-4da4-877b-e3d270d8e91a req-936bf328-3034-4246-b881-85aff15d9526 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: 29f00e1c-dcdd-4a28-b141-a900eb34b836] Updated VIF entry in instance network info cache for port a450260b-c4da-4f56-bf08-713a5ccc3d0e. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  9 10:04:07 compute-1 nova_compute[162974]: 2025-10-09 10:04:07.041 2 DEBUG nova.network.neutron [req-e7caa62b-5947-4da4-877b-e3d270d8e91a req-936bf328-3034-4246-b881-85aff15d9526 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: 29f00e1c-dcdd-4a28-b141-a900eb34b836] Updating instance_info_cache with network_info: [{"id": "a450260b-c4da-4f56-bf08-713a5ccc3d0e", "address": "fa:16:3e:ac:02:fe", "network": {"id": "a733533a-76c7-46e6-89e3-803597fe93b6", "bridge": "br-int", "label": "tempest-network-smoke--384651779", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c69d102fb5504f48809f5fc47f1cb831", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa450260b-c4", "ovs_interfaceid": "a450260b-c4da-4f56-bf08-713a5ccc3d0e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  9 10:04:07 compute-1 nova_compute[162974]: 2025-10-09 10:04:07.051 2 DEBUG oslo_concurrency.lockutils [None req-eb719e6f-abfd-4b0e-9dfe-8c8e91b8f214 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Lock "29f00e1c-dcdd-4a28-b141-a900eb34b836" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 1.835s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  9 10:04:07 compute-1 nova_compute[162974]: 2025-10-09 10:04:07.058 2 DEBUG oslo_concurrency.lockutils [req-e7caa62b-5947-4da4-877b-e3d270d8e91a req-936bf328-3034-4246-b881-85aff15d9526 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] Releasing lock "refresh_cache-29f00e1c-dcdd-4a28-b141-a900eb34b836" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  9 10:04:07 compute-1 nova_compute[162974]: 2025-10-09 10:04:07.250 2 DEBUG nova.compute.manager [req-a099e704-0e3e-4077-9237-0b5fd1699d47 req-515f2d28-0082-42fa-9f6d-b2406e3d980e b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: 29f00e1c-dcdd-4a28-b141-a900eb34b836] Received event network-vif-unplugged-a450260b-c4da-4f56-bf08-713a5ccc3d0e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  9 10:04:07 compute-1 nova_compute[162974]: 2025-10-09 10:04:07.250 2 DEBUG oslo_concurrency.lockutils [req-a099e704-0e3e-4077-9237-0b5fd1699d47 req-515f2d28-0082-42fa-9f6d-b2406e3d980e b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] Acquiring lock "29f00e1c-dcdd-4a28-b141-a900eb34b836-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  9 10:04:07 compute-1 nova_compute[162974]: 2025-10-09 10:04:07.251 2 DEBUG oslo_concurrency.lockutils [req-a099e704-0e3e-4077-9237-0b5fd1699d47 req-515f2d28-0082-42fa-9f6d-b2406e3d980e b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] Lock "29f00e1c-dcdd-4a28-b141-a900eb34b836-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  9 10:04:07 compute-1 nova_compute[162974]: 2025-10-09 10:04:07.251 2 DEBUG oslo_concurrency.lockutils [req-a099e704-0e3e-4077-9237-0b5fd1699d47 req-515f2d28-0082-42fa-9f6d-b2406e3d980e b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] Lock "29f00e1c-dcdd-4a28-b141-a900eb34b836-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  9 10:04:07 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:04:07 compute-1 nova_compute[162974]: 2025-10-09 10:04:07.251 2 DEBUG nova.compute.manager [req-a099e704-0e3e-4077-9237-0b5fd1699d47 req-515f2d28-0082-42fa-9f6d-b2406e3d980e b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: 29f00e1c-dcdd-4a28-b141-a900eb34b836] No waiting events found dispatching network-vif-unplugged-a450260b-c4da-4f56-bf08-713a5ccc3d0e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  9 10:04:07 compute-1 nova_compute[162974]: 2025-10-09 10:04:07.251 2 WARNING nova.compute.manager [req-a099e704-0e3e-4077-9237-0b5fd1699d47 req-515f2d28-0082-42fa-9f6d-b2406e3d980e b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: 29f00e1c-dcdd-4a28-b141-a900eb34b836] Received unexpected event network-vif-unplugged-a450260b-c4da-4f56-bf08-713a5ccc3d0e for instance with vm_state deleted and task_state None.#033[00m
Oct  9 10:04:07 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:04:07 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:04:07.250 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:04:07 compute-1 nova_compute[162974]: 2025-10-09 10:04:07.252 2 DEBUG nova.compute.manager [req-a099e704-0e3e-4077-9237-0b5fd1699d47 req-515f2d28-0082-42fa-9f6d-b2406e3d980e b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: 29f00e1c-dcdd-4a28-b141-a900eb34b836] Received event network-vif-plugged-a450260b-c4da-4f56-bf08-713a5ccc3d0e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  9 10:04:07 compute-1 nova_compute[162974]: 2025-10-09 10:04:07.252 2 DEBUG oslo_concurrency.lockutils [req-a099e704-0e3e-4077-9237-0b5fd1699d47 req-515f2d28-0082-42fa-9f6d-b2406e3d980e b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] Acquiring lock "29f00e1c-dcdd-4a28-b141-a900eb34b836-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  9 10:04:07 compute-1 nova_compute[162974]: 2025-10-09 10:04:07.252 2 DEBUG oslo_concurrency.lockutils [req-a099e704-0e3e-4077-9237-0b5fd1699d47 req-515f2d28-0082-42fa-9f6d-b2406e3d980e b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] Lock "29f00e1c-dcdd-4a28-b141-a900eb34b836-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  9 10:04:07 compute-1 nova_compute[162974]: 2025-10-09 10:04:07.253 2 DEBUG oslo_concurrency.lockutils [req-a099e704-0e3e-4077-9237-0b5fd1699d47 req-515f2d28-0082-42fa-9f6d-b2406e3d980e b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] Lock "29f00e1c-dcdd-4a28-b141-a900eb34b836-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  9 10:04:07 compute-1 nova_compute[162974]: 2025-10-09 10:04:07.253 2 DEBUG nova.compute.manager [req-a099e704-0e3e-4077-9237-0b5fd1699d47 req-515f2d28-0082-42fa-9f6d-b2406e3d980e b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: 29f00e1c-dcdd-4a28-b141-a900eb34b836] No waiting events found dispatching network-vif-plugged-a450260b-c4da-4f56-bf08-713a5ccc3d0e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  9 10:04:07 compute-1 nova_compute[162974]: 2025-10-09 10:04:07.253 2 WARNING nova.compute.manager [req-a099e704-0e3e-4077-9237-0b5fd1699d47 req-515f2d28-0082-42fa-9f6d-b2406e3d980e b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: 29f00e1c-dcdd-4a28-b141-a900eb34b836] Received unexpected event network-vif-plugged-a450260b-c4da-4f56-bf08-713a5ccc3d0e for instance with vm_state deleted and task_state None.#033[00m
Oct  9 10:04:07 compute-1 nova_compute[162974]: 2025-10-09 10:04:07.253 2 DEBUG nova.compute.manager [req-a099e704-0e3e-4077-9237-0b5fd1699d47 req-515f2d28-0082-42fa-9f6d-b2406e3d980e b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: 29f00e1c-dcdd-4a28-b141-a900eb34b836] Received event network-vif-plugged-a450260b-c4da-4f56-bf08-713a5ccc3d0e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  9 10:04:07 compute-1 nova_compute[162974]: 2025-10-09 10:04:07.254 2 DEBUG oslo_concurrency.lockutils [req-a099e704-0e3e-4077-9237-0b5fd1699d47 req-515f2d28-0082-42fa-9f6d-b2406e3d980e b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] Acquiring lock "29f00e1c-dcdd-4a28-b141-a900eb34b836-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  9 10:04:07 compute-1 nova_compute[162974]: 2025-10-09 10:04:07.254 2 DEBUG oslo_concurrency.lockutils [req-a099e704-0e3e-4077-9237-0b5fd1699d47 req-515f2d28-0082-42fa-9f6d-b2406e3d980e b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] Lock "29f00e1c-dcdd-4a28-b141-a900eb34b836-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  9 10:04:07 compute-1 nova_compute[162974]: 2025-10-09 10:04:07.254 2 DEBUG oslo_concurrency.lockutils [req-a099e704-0e3e-4077-9237-0b5fd1699d47 req-515f2d28-0082-42fa-9f6d-b2406e3d980e b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] Lock "29f00e1c-dcdd-4a28-b141-a900eb34b836-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  9 10:04:07 compute-1 nova_compute[162974]: 2025-10-09 10:04:07.254 2 DEBUG nova.compute.manager [req-a099e704-0e3e-4077-9237-0b5fd1699d47 req-515f2d28-0082-42fa-9f6d-b2406e3d980e b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: 29f00e1c-dcdd-4a28-b141-a900eb34b836] No waiting events found dispatching network-vif-plugged-a450260b-c4da-4f56-bf08-713a5ccc3d0e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  9 10:04:07 compute-1 nova_compute[162974]: 2025-10-09 10:04:07.255 2 WARNING nova.compute.manager [req-a099e704-0e3e-4077-9237-0b5fd1699d47 req-515f2d28-0082-42fa-9f6d-b2406e3d980e b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: 29f00e1c-dcdd-4a28-b141-a900eb34b836] Received unexpected event network-vif-plugged-a450260b-c4da-4f56-bf08-713a5ccc3d0e for instance with vm_state deleted and task_state None.#033[00m
Oct  9 10:04:07 compute-1 nova_compute[162974]: 2025-10-09 10:04:07.255 2 DEBUG nova.compute.manager [req-a099e704-0e3e-4077-9237-0b5fd1699d47 req-515f2d28-0082-42fa-9f6d-b2406e3d980e b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: 29f00e1c-dcdd-4a28-b141-a900eb34b836] Received event network-vif-plugged-a450260b-c4da-4f56-bf08-713a5ccc3d0e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  9 10:04:07 compute-1 nova_compute[162974]: 2025-10-09 10:04:07.255 2 DEBUG oslo_concurrency.lockutils [req-a099e704-0e3e-4077-9237-0b5fd1699d47 req-515f2d28-0082-42fa-9f6d-b2406e3d980e b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] Acquiring lock "29f00e1c-dcdd-4a28-b141-a900eb34b836-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  9 10:04:07 compute-1 nova_compute[162974]: 2025-10-09 10:04:07.256 2 DEBUG oslo_concurrency.lockutils [req-a099e704-0e3e-4077-9237-0b5fd1699d47 req-515f2d28-0082-42fa-9f6d-b2406e3d980e b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] Lock "29f00e1c-dcdd-4a28-b141-a900eb34b836-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  9 10:04:07 compute-1 nova_compute[162974]: 2025-10-09 10:04:07.256 2 DEBUG oslo_concurrency.lockutils [req-a099e704-0e3e-4077-9237-0b5fd1699d47 req-515f2d28-0082-42fa-9f6d-b2406e3d980e b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] Lock "29f00e1c-dcdd-4a28-b141-a900eb34b836-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  9 10:04:07 compute-1 nova_compute[162974]: 2025-10-09 10:04:07.256 2 DEBUG nova.compute.manager [req-a099e704-0e3e-4077-9237-0b5fd1699d47 req-515f2d28-0082-42fa-9f6d-b2406e3d980e b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: 29f00e1c-dcdd-4a28-b141-a900eb34b836] No waiting events found dispatching network-vif-plugged-a450260b-c4da-4f56-bf08-713a5ccc3d0e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  9 10:04:07 compute-1 nova_compute[162974]: 2025-10-09 10:04:07.256 2 WARNING nova.compute.manager [req-a099e704-0e3e-4077-9237-0b5fd1699d47 req-515f2d28-0082-42fa-9f6d-b2406e3d980e b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: 29f00e1c-dcdd-4a28-b141-a900eb34b836] Received unexpected event network-vif-plugged-a450260b-c4da-4f56-bf08-713a5ccc3d0e for instance with vm_state deleted and task_state None.#033[00m
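The WARNING lines above are benign stragglers: Neutron delivers network-vif-unplugged/plugged events for port a450260b-c4da-4f56-bf08-713a5ccc3d0e after the instance has already reached vm_state deleted, so nova-compute has no waiter to dispatch them to. Tallying these warnings helps confirm they are late events from a completed delete rather than a recurring fault; a sketch (regex assumed from the message text above):

```python
import re
from collections import Counter

EVENT_RE = re.compile(
    r"Received unexpected event (?P<event>network-vif-\w+)-[0-9a-f-]{36} "
    r"for instance with vm_state (?P<vm_state>\w+)"
)

def tally_unexpected(lines):
    """Count unexpected VIF events per (event type, vm_state)."""
    counts = Counter()
    for line in lines:
        if m := EVENT_RE.search(line):
            counts[(m.group("event"), m.group("vm_state"))] += 1
    return counts

sample = [  # trimmed copies of the WARNING payloads above
    "Received unexpected event network-vif-unplugged-a450260b-c4da-4f56-bf08-713a5ccc3d0e for instance with vm_state deleted and task_state None.",
    "Received unexpected event network-vif-plugged-a450260b-c4da-4f56-bf08-713a5ccc3d0e for instance with vm_state deleted and task_state None.",
    "Received unexpected event network-vif-plugged-a450260b-c4da-4f56-bf08-713a5ccc3d0e for instance with vm_state deleted and task_state None.",
    "Received unexpected event network-vif-plugged-a450260b-c4da-4f56-bf08-713a5ccc3d0e for instance with vm_state deleted and task_state None.",
]
print(tally_unexpected(sample))
```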
Oct  9 10:04:07 compute-1 podman[173020]: 2025-10-09 10:04:07.559327691 +0000 UTC m=+0.056856636 container health_status a9976f6a2312783f72012e1ec9b07f14297aa62297711f47c0d257996be3427a (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, config_id=multipathd, io.buildah.version=1.41.3)
Oct  9 10:04:07 compute-1 podman[173019]: 2025-10-09 10:04:07.576381291 +0000 UTC m=+0.081970091 container health_status 5e1150c93314d5b38815fc8130e03b03cc91771cd442ce724d63cd33a7373f75 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent)
Oct  9 10:04:07 compute-1 nova_compute[162974]: 2025-10-09 10:04:07.834 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:04:08 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:04:08 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:04:08 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:04:08.442 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:04:09 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:04:09 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:04:09 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:04:09.253 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:04:09 compute-1 nova_compute[162974]: 2025-10-09 10:04:09.634 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:04:09 compute-1 nova_compute[162974]: 2025-10-09 10:04:09.716 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:04:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 10:04:10.042 71059 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  9 10:04:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 10:04:10.043 71059 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  9 10:04:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 10:04:10.043 71059 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  9 10:04:10 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:04:10 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:04:10 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:04:10.445 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:04:10 compute-1 nova_compute[162974]: 2025-10-09 10:04:10.471 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:04:10 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 10:04:11 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:04:11 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 10:04:11 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:04:11.254 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
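The radosgw "beast" access lines repeating above are load-balancer health probes: anonymous `HEAD / HTTP/1.0` from 192.168.122.100 and .102 every couple of seconds, always 200 with near-zero latency. A sketch that parses the beast access-log fields, e.g. to separate probe traffic from real requests (field layout assumed from the lines above):

```python
import re

# Layout inferred from this excerpt:
# beast: <req ptr>: <client> - <user> [<timestamp>] "<request>" <status> <bytes> ... latency=<secs>s
BEAST_RE = re.compile(
    r'beast: \S+: (?P<client>\S+) - (?P<user>\S+) \[(?P<ts>[^\]]+)\] '
    r'"(?P<request>[^"]+)" (?P<status>\d+) (?P<bytes>\d+) .* '
    r'latency=(?P<latency>[\d.]+)s'
)

def parse_beast(line):
    """Return the access-log fields of one beast line, or None if no match."""
    m = BEAST_RE.search(line)
    if not m:
        return None
    rec = m.groupdict()
    rec["status"] = int(rec["status"])
    rec["bytes"] = int(rec["bytes"])
    rec["latency"] = float(rec["latency"])
    return rec

line = ('beast: 0x7ff2223805d0: 192.168.122.100 - anonymous '
        '[09/Oct/2025:10:04:11.254 +0000] "HEAD / HTTP/1.0" 200 0 - - - '
        'latency=0.001000010s')
print(parse_beast(line))
```

Filtering on `user == "anonymous"` and `request.startswith("HEAD / ")` would drop the probe noise while keeping authenticated S3 traffic visible.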
Oct  9 10:04:12 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:04:12 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000011s ======
Oct  9 10:04:12 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:04:12.447 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct  9 10:04:12 compute-1 nova_compute[162974]: 2025-10-09 10:04:12.836 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:04:13 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:04:13 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:04:13 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:04:13.257 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:04:14 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:04:14 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 10:04:14 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:04:14.450 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 10:04:15 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:04:15 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:04:15 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:04:15.259 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:04:15 compute-1 nova_compute[162974]: 2025-10-09 10:04:15.474 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:04:15 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 10:04:16 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:04:16 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 10:04:16 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:04:16.452 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 10:04:17 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:04:17 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:04:17 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:04:17.261 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:04:17 compute-1 nova_compute[162974]: 2025-10-09 10:04:17.837 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:04:18 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:04:18 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:04:18 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:04:18.454 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:04:19 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:04:19 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:04:19 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:04:19.264 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:04:19 compute-1 podman[173084]: 2025-10-09 10:04:19.570181094 +0000 UTC m=+0.080980526 container health_status 36bf385830b85e085dc3ff9729118ef141f0f1a0fdc2513f7947ab6ab795421e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  9 10:04:20 compute-1 nova_compute[162974]: 2025-10-09 10:04:20.450 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760004245.4491017, 29f00e1c-dcdd-4a28-b141-a900eb34b836 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  9 10:04:20 compute-1 nova_compute[162974]: 2025-10-09 10:04:20.451 2 INFO nova.compute.manager [-] [instance: 29f00e1c-dcdd-4a28-b141-a900eb34b836] VM Stopped (Lifecycle Event)#033[00m
Oct  9 10:04:20 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:04:20 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:04:20 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:04:20.456 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:04:20 compute-1 nova_compute[162974]: 2025-10-09 10:04:20.463 2 DEBUG nova.compute.manager [None req-1cc77880-e6ac-40e5-87e8-2e77630f4495 - - - - - -] [instance: 29f00e1c-dcdd-4a28-b141-a900eb34b836] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  9 10:04:20 compute-1 nova_compute[162974]: 2025-10-09 10:04:20.476 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:04:20 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 10:04:21 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:04:21 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:04:21 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:04:21.265 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:04:22 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:04:22 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 10:04:22 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:04:22.458 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 10:04:22 compute-1 nova_compute[162974]: 2025-10-09 10:04:22.839 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:04:23 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:04:23 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:04:23 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:04:23.268 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:04:24 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:04:24 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:04:24 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:04:24.462 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:04:25 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:04:25 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:04:25 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:04:25.271 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:04:25 compute-1 nova_compute[162974]: 2025-10-09 10:04:25.480 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:04:25 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 10:04:26 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:04:26 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:04:26 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:04:26.465 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:04:27 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:04:27 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:04:27 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:04:27.274 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:04:27 compute-1 nova_compute[162974]: 2025-10-09 10:04:27.840 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:04:28 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:04:28 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:04:28 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:04:28.468 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:04:29 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:04:29 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:04:29 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:04:29.276 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:04:29 compute-1 podman[173112]: 2025-10-09 10:04:29.525157179 +0000 UTC m=+0.038114074 container health_status dd7a5da700b8ba72ceba21d69aecf00384a339e317089fce3f8212a1183599aa (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=iscsid, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=iscsid)
Oct  9 10:04:30 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:04:30 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 10:04:30 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:04:30.470 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 10:04:30 compute-1 nova_compute[162974]: 2025-10-09 10:04:30.483 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:04:30 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 10:04:31 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:04:31 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 10:04:31 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:04:31.278 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 10:04:32 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:04:32 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:04:32 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:04:32.473 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:04:32 compute-1 nova_compute[162974]: 2025-10-09 10:04:32.842 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:04:33 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:04:33 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:04:33 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:04:33.281 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:04:34 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:04:34 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:04:34 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:04:34.476 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:04:35 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:04:35 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 10:04:35 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:04:35.282 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 10:04:35 compute-1 nova_compute[162974]: 2025-10-09 10:04:35.486 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:04:35 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 10:04:35 compute-1 ceph-mon[9795]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #52. Immutable memtables: 0.
Oct  9 10:04:35 compute-1 ceph-mon[9795]: rocksdb: (Original Log Time 2025/10/09-10:04:35.845369) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct  9 10:04:35 compute-1 ceph-mon[9795]: rocksdb: [db/flush_job.cc:856] [default] [JOB 29] Flushing memtable with next log file: 52
Oct  9 10:04:35 compute-1 ceph-mon[9795]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760004275845392, "job": 29, "event": "flush_started", "num_memtables": 1, "num_entries": 2350, "num_deletes": 251, "total_data_size": 6083901, "memory_usage": 6164256, "flush_reason": "Manual Compaction"}
Oct  9 10:04:35 compute-1 ceph-mon[9795]: rocksdb: [db/flush_job.cc:885] [default] [JOB 29] Level-0 flush table #53: started
Oct  9 10:04:35 compute-1 ceph-mon[9795]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760004275854515, "cf_name": "default", "job": 29, "event": "table_file_creation", "file_number": 53, "file_size": 3946077, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 25943, "largest_seqno": 28288, "table_properties": {"data_size": 3936844, "index_size": 5727, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2437, "raw_key_size": 19559, "raw_average_key_size": 20, "raw_value_size": 3918125, "raw_average_value_size": 4051, "num_data_blocks": 252, "num_entries": 967, "num_filter_entries": 967, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760004069, "oldest_key_time": 1760004069, "file_creation_time": 1760004275, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "94a5d839-0858-4e7b-94a4-0a54b15338db", "db_session_id": "M9CZJU0HKVV71NP1SGV8", "orig_file_number": 53, "seqno_to_time_mapping": "N/A"}}
Oct  9 10:04:35 compute-1 ceph-mon[9795]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 29] Flush lasted 9168 microseconds, and 6029 cpu microseconds.
Oct  9 10:04:35 compute-1 ceph-mon[9795]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  9 10:04:35 compute-1 ceph-mon[9795]: rocksdb: (Original Log Time 2025/10/09-10:04:35.854538) [db/flush_job.cc:967] [default] [JOB 29] Level-0 flush table #53: 3946077 bytes OK
Oct  9 10:04:35 compute-1 ceph-mon[9795]: rocksdb: (Original Log Time 2025/10/09-10:04:35.854548) [db/memtable_list.cc:519] [default] Level-0 commit table #53 started
Oct  9 10:04:35 compute-1 ceph-mon[9795]: rocksdb: (Original Log Time 2025/10/09-10:04:35.854832) [db/memtable_list.cc:722] [default] Level-0 commit table #53: memtable #1 done
Oct  9 10:04:35 compute-1 ceph-mon[9795]: rocksdb: (Original Log Time 2025/10/09-10:04:35.854842) EVENT_LOG_v1 {"time_micros": 1760004275854839, "job": 29, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct  9 10:04:35 compute-1 ceph-mon[9795]: rocksdb: (Original Log Time 2025/10/09-10:04:35.854852) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct  9 10:04:35 compute-1 ceph-mon[9795]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 29] Try to delete WAL files size 6073476, prev total WAL file size 6073476, number of live WAL files 2.
Oct  9 10:04:35 compute-1 ceph-mon[9795]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000049.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  9 10:04:35 compute-1 ceph-mon[9795]: rocksdb: (Original Log Time 2025/10/09-10:04:35.855755) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730032303038' seq:72057594037927935, type:22 .. '7061786F730032323630' seq:0, type:0; will stop at (end)
Oct  9 10:04:35 compute-1 ceph-mon[9795]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 30] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct  9 10:04:35 compute-1 ceph-mon[9795]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 29 Base level 0, inputs: [53(3853KB)], [51(11MB)]
Oct  9 10:04:35 compute-1 ceph-mon[9795]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760004275855778, "job": 30, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [53], "files_L6": [51], "score": -1, "input_data_size": 16286126, "oldest_snapshot_seqno": -1}
Oct  9 10:04:35 compute-1 ceph-mon[9795]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 30] Generated table #54: 5801 keys, 14127090 bytes, temperature: kUnknown
Oct  9 10:04:35 compute-1 ceph-mon[9795]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760004275891013, "cf_name": "default", "job": 30, "event": "table_file_creation", "file_number": 54, "file_size": 14127090, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 14087878, "index_size": 23614, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 14533, "raw_key_size": 147447, "raw_average_key_size": 25, "raw_value_size": 13982440, "raw_average_value_size": 2410, "num_data_blocks": 962, "num_entries": 5801, "num_filter_entries": 5801, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760002515, "oldest_key_time": 0, "file_creation_time": 1760004275, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "94a5d839-0858-4e7b-94a4-0a54b15338db", "db_session_id": "M9CZJU0HKVV71NP1SGV8", "orig_file_number": 54, "seqno_to_time_mapping": "N/A"}}
Oct  9 10:04:35 compute-1 ceph-mon[9795]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  9 10:04:35 compute-1 ceph-mon[9795]: rocksdb: (Original Log Time 2025/10/09-10:04:35.891301) [db/compaction/compaction_job.cc:1663] [default] [JOB 30] Compacted 1@0 + 1@6 files to L6 => 14127090 bytes
Oct  9 10:04:35 compute-1 ceph-mon[9795]: rocksdb: (Original Log Time 2025/10/09-10:04:35.894350) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 460.1 rd, 399.1 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.8, 11.8 +0.0 blob) out(13.5 +0.0 blob), read-write-amplify(7.7) write-amplify(3.6) OK, records in: 6317, records dropped: 516 output_compression: NoCompression
Oct  9 10:04:35 compute-1 ceph-mon[9795]: rocksdb: (Original Log Time 2025/10/09-10:04:35.894364) EVENT_LOG_v1 {"time_micros": 1760004275894357, "job": 30, "event": "compaction_finished", "compaction_time_micros": 35400, "compaction_time_cpu_micros": 19942, "output_level": 6, "num_output_files": 1, "total_output_size": 14127090, "num_input_records": 6317, "num_output_records": 5801, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct  9 10:04:35 compute-1 ceph-mon[9795]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000053.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  9 10:04:35 compute-1 ceph-mon[9795]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760004275895361, "job": 30, "event": "table_file_deletion", "file_number": 53}
Oct  9 10:04:35 compute-1 ceph-mon[9795]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000051.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  9 10:04:35 compute-1 ceph-mon[9795]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760004275897415, "job": 30, "event": "table_file_deletion", "file_number": 51}
Oct  9 10:04:35 compute-1 ceph-mon[9795]: rocksdb: (Original Log Time 2025/10/09-10:04:35.855668) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  9 10:04:35 compute-1 ceph-mon[9795]: rocksdb: (Original Log Time 2025/10/09-10:04:35.897508) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  9 10:04:35 compute-1 ceph-mon[9795]: rocksdb: (Original Log Time 2025/10/09-10:04:35.897510) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  9 10:04:35 compute-1 ceph-mon[9795]: rocksdb: (Original Log Time 2025/10/09-10:04:35.897512) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  9 10:04:35 compute-1 ceph-mon[9795]: rocksdb: (Original Log Time 2025/10/09-10:04:35.897513) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  9 10:04:35 compute-1 ceph-mon[9795]: rocksdb: (Original Log Time 2025/10/09-10:04:35.897514) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  9 10:04:36 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:04:36 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:04:36 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:04:36.479 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:04:37 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:04:37 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:04:37 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:04:37.284 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:04:37 compute-1 nova_compute[162974]: 2025-10-09 10:04:37.843 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:04:38 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:04:38 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:04:38 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:04:38.483 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:04:38 compute-1 podman[173158]: 2025-10-09 10:04:38.529335148 +0000 UTC m=+0.039967088 container health_status 5e1150c93314d5b38815fc8130e03b03cc91771cd442ce724d63cd33a7373f75 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Oct  9 10:04:38 compute-1 podman[173159]: 2025-10-09 10:04:38.53927653 +0000 UTC m=+0.046357755 container health_status a9976f6a2312783f72012e1ec9b07f14297aa62297711f47c0d257996be3427a (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=multipathd)
Oct  9 10:04:39 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:04:39 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:04:39 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:04:39.287 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:04:40 compute-1 ovn_controller[62080]: 2025-10-09T10:04:40Z|00120|memory_trim|INFO|Detected inactivity (last active 30010 ms ago): trimming memory
Oct  9 10:04:40 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:04:40 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000011s ======
Oct  9 10:04:40 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:04:40.485 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct  9 10:04:40 compute-1 nova_compute[162974]: 2025-10-09 10:04:40.489 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:04:40 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 10:04:41 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:04:41 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:04:41 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:04:41.289 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:04:42 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:04:42 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:04:42 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:04:42.489 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:04:42 compute-1 nova_compute[162974]: 2025-10-09 10:04:42.844 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:04:43 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:04:43 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:04:43 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:04:43.291 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:04:43 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  9 10:04:43 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 10:04:43 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 10:04:43 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  9 10:04:44 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:04:44 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:04:44 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:04:44.492 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:04:45 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:04:45 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:04:45 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:04:45.293 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:04:45 compute-1 nova_compute[162974]: 2025-10-09 10:04:45.490 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:04:45 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 10:04:46 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:04:46 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:04:46 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:04:46.494 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:04:47 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:04:47 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:04:47 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:04:47.295 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:04:47 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 10:04:47 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 10:04:47 compute-1 nova_compute[162974]: 2025-10-09 10:04:47.846 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:04:48 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:04:48 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000011s ======
Oct  9 10:04:48 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:04:48.496 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct  9 10:04:49 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:04:49 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:04:49 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:04:49.297 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:04:50 compute-1 nova_compute[162974]: 2025-10-09 10:04:50.493 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:04:50 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:04:50 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 10:04:50 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:04:50.498 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 10:04:50 compute-1 podman[173301]: 2025-10-09 10:04:50.565058528 +0000 UTC m=+0.069027252 container health_status 36bf385830b85e085dc3ff9729118ef141f0f1a0fdc2513f7947ab6ab795421e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller)
Oct  9 10:04:50 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 10:04:51 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:04:51 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 10:04:51 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:04:51.299 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 10:04:51 compute-1 ceph-osd[7514]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct  9 10:04:51 compute-1 ceph-osd[7514]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 1800.0 total, 600.0 interval#012Cumulative writes: 12K writes, 47K keys, 12K commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.02 MB/s#012Cumulative WAL: 12K writes, 3773 syncs, 3.36 writes per sync, written: 0.03 GB, 0.02 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 3402 writes, 12K keys, 3402 commit groups, 1.0 writes per commit group, ingest: 13.97 MB, 0.02 MB/s#012Interval WAL: 3402 writes, 1492 syncs, 2.28 writes per sync, written: 0.01 GB, 0.02 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct  9 10:04:52 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:04:52 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:04:52 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:04:52.502 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:04:52 compute-1 nova_compute[162974]: 2025-10-09 10:04:52.847 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:04:53 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:04:53 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:04:53 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:04:53.301 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:04:54 compute-1 systemd[1]: Created slice User Slice of UID 1000.
Oct  9 10:04:54 compute-1 systemd[1]: Starting User Runtime Directory /run/user/1000...
Oct  9 10:04:54 compute-1 systemd-logind[798]: New session 40 of user zuul.
Oct  9 10:04:54 compute-1 systemd[1]: Finished User Runtime Directory /run/user/1000.
Oct  9 10:04:54 compute-1 systemd[1]: Starting User Manager for UID 1000...
Oct  9 10:04:54 compute-1 systemd[173355]: Queued start job for default target Main User Target.
Oct  9 10:04:54 compute-1 systemd[173355]: Created slice User Application Slice.
Oct  9 10:04:54 compute-1 systemd[173355]: Started Mark boot as successful after the user session has run 2 minutes.
Oct  9 10:04:54 compute-1 systemd[173355]: Started Daily Cleanup of User's Temporary Directories.
Oct  9 10:04:54 compute-1 systemd[173355]: Reached target Paths.
Oct  9 10:04:54 compute-1 systemd[173355]: Reached target Timers.
Oct  9 10:04:54 compute-1 systemd[173355]: Starting D-Bus User Message Bus Socket...
Oct  9 10:04:54 compute-1 systemd[173355]: Starting Create User's Volatile Files and Directories...
Oct  9 10:04:54 compute-1 systemd[173355]: Finished Create User's Volatile Files and Directories.
Oct  9 10:04:54 compute-1 systemd[173355]: Listening on D-Bus User Message Bus Socket.
Oct  9 10:04:54 compute-1 systemd[173355]: Reached target Sockets.
Oct  9 10:04:54 compute-1 systemd[173355]: Reached target Basic System.
Oct  9 10:04:54 compute-1 systemd[173355]: Reached target Main User Target.
Oct  9 10:04:54 compute-1 systemd[173355]: Startup finished in 106ms.
Oct  9 10:04:54 compute-1 systemd[1]: Started User Manager for UID 1000.
Oct  9 10:04:54 compute-1 systemd[1]: Started Session 40 of User zuul.
Oct  9 10:04:54 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:04:54 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:04:54 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:04:54.504 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:04:55 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:04:55 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:04:55 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:04:55.304 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:04:55 compute-1 nova_compute[162974]: 2025-10-09 10:04:55.497 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:04:55 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 10:04:56 compute-1 nova_compute[162974]: 2025-10-09 10:04:56.114 2 DEBUG oslo_service.periodic_task [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  9 10:04:56 compute-1 nova_compute[162974]: 2025-10-09 10:04:56.134 2 DEBUG oslo_service.periodic_task [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  9 10:04:56 compute-1 nova_compute[162974]: 2025-10-09 10:04:56.134 2 DEBUG oslo_service.periodic_task [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  9 10:04:56 compute-1 nova_compute[162974]: 2025-10-09 10:04:56.150 2 DEBUG oslo_concurrency.lockutils [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  9 10:04:56 compute-1 nova_compute[162974]: 2025-10-09 10:04:56.150 2 DEBUG oslo_concurrency.lockutils [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  9 10:04:56 compute-1 nova_compute[162974]: 2025-10-09 10:04:56.150 2 DEBUG oslo_concurrency.lockutils [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  9 10:04:56 compute-1 nova_compute[162974]: 2025-10-09 10:04:56.150 2 DEBUG nova.compute.resource_tracker [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  9 10:04:56 compute-1 nova_compute[162974]: 2025-10-09 10:04:56.151 2 DEBUG oslo_concurrency.processutils [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  9 10:04:56 compute-1 ceph-mon[9795]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct  9 10:04:56 compute-1 ceph-mon[9795]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/4131648813' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  9 10:04:56 compute-1 nova_compute[162974]: 2025-10-09 10:04:56.502 2 DEBUG oslo_concurrency.processutils [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.351s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  9 10:04:56 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:04:56 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 10:04:56 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:04:56.506 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 10:04:56 compute-1 nova_compute[162974]: 2025-10-09 10:04:56.730 2 WARNING nova.virt.libvirt.driver [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  9 10:04:56 compute-1 nova_compute[162974]: 2025-10-09 10:04:56.731 2 DEBUG nova.compute.resource_tracker [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4963MB free_disk=59.988277435302734GB free_vcpus=4 pci_devices=[{"dev_id": "pci_0000_00_03_3", "address": "0000:00:03.3", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_0", "address": "0000:00:1f.0", "product_id": "2918", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2918", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_4", "address": "0000:00:03.4", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_7", "address": "0000:00:03.7", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_3", "address": "0000:00:02.3", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_7", "address": "0000:00:02.7", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_05_00_0", "address": "0000:05:00.0", "product_id": "1045", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1045", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_2", "address": "0000:00:03.2", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_6", "address": "0000:00:02.6", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": 
"label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_3", "address": "0000:00:1f.3", "product_id": "2930", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2930", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_2", "address": "0000:00:1f.2", "product_id": "2922", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2922", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_03_00_0", "address": "0000:03:00.0", "product_id": "1041", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1041", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_2", "address": "0000:00:02.2", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_5", "address": "0000:00:02.5", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_04_00_0", "address": "0000:04:00.0", "product_id": "1042", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1042", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_5", "address": "0000:00:03.5", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_07_00_0", "address": "0000:07:00.0", "product_id": "1041", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1041", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_1", "address": "0000:00:03.1", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": 
"0000:00:00.0", "product_id": "29c0", "vendor_id": "8086", "numa_node": null, "label": "label_8086_29c0", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_06_00_0", "address": "0000:06:00.0", "product_id": "1044", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1044", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_02_01_0", "address": "0000:02:01.0", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_6", "address": "0000:00:03.6", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_1", "address": "0000:00:02.1", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_4", "address": "0000:00:02.4", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_01_00_0", "address": "0000:01:00.0", "product_id": "000e", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000e", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  9 10:04:56 compute-1 nova_compute[162974]: 2025-10-09 10:04:56.731 2 DEBUG oslo_concurrency.lockutils [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  9 10:04:56 compute-1 nova_compute[162974]: 2025-10-09 10:04:56.732 2 DEBUG oslo_concurrency.lockutils [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  9 10:04:56 compute-1 nova_compute[162974]: 2025-10-09 10:04:56.806 2 DEBUG nova.compute.resource_tracker [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Total usable vcpus: 4, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  9 10:04:56 compute-1 nova_compute[162974]: 2025-10-09 10:04:56.806 2 DEBUG nova.compute.resource_tracker [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=4 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  9 10:04:56 compute-1 nova_compute[162974]: 2025-10-09 10:04:56.828 2 DEBUG oslo_concurrency.processutils [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  9 10:04:57 compute-1 ceph-mon[9795]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct  9 10:04:57 compute-1 ceph-mon[9795]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/204166457' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  9 10:04:57 compute-1 nova_compute[162974]: 2025-10-09 10:04:57.166 2 DEBUG oslo_concurrency.processutils [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.339s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  9 10:04:57 compute-1 nova_compute[162974]: 2025-10-09 10:04:57.170 2 DEBUG nova.compute.provider_tree [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Inventory has not changed in ProviderTree for provider: 79aa81b0-5a5d-4643-a355-ec5461cb321a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  9 10:04:57 compute-1 ceph-mon[9795]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "status"} v 0)
Oct  9 10:04:57 compute-1 ceph-mon[9795]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/33619624' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Oct  9 10:04:57 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:04:57 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:04:57 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:04:57.306 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:04:57 compute-1 nova_compute[162974]: 2025-10-09 10:04:57.364 2 DEBUG nova.scheduler.client.report [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Inventory has not changed for provider 79aa81b0-5a5d-4643-a355-ec5461cb321a based on inventory data: {'VCPU': {'total': 4, 'reserved': 0, 'min_unit': 1, 'max_unit': 4, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  9 10:04:57 compute-1 nova_compute[162974]: 2025-10-09 10:04:57.384 2 DEBUG nova.compute.resource_tracker [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  9 10:04:57 compute-1 nova_compute[162974]: 2025-10-09 10:04:57.385 2 DEBUG oslo_concurrency.lockutils [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.653s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  9 10:04:57 compute-1 nova_compute[162974]: 2025-10-09 10:04:57.849 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:04:58 compute-1 nova_compute[162974]: 2025-10-09 10:04:58.365 2 DEBUG oslo_service.periodic_task [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  9 10:04:58 compute-1 nova_compute[162974]: 2025-10-09 10:04:58.365 2 DEBUG oslo_service.periodic_task [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  9 10:04:58 compute-1 nova_compute[162974]: 2025-10-09 10:04:58.365 2 DEBUG nova.compute.manager [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  9 10:04:58 compute-1 nova_compute[162974]: 2025-10-09 10:04:58.365 2 DEBUG nova.compute.manager [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct  9 10:04:58 compute-1 nova_compute[162974]: 2025-10-09 10:04:58.376 2 DEBUG nova.compute.manager [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Oct  9 10:04:58 compute-1 nova_compute[162974]: 2025-10-09 10:04:58.376 2 DEBUG oslo_service.periodic_task [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  9 10:04:58 compute-1 nova_compute[162974]: 2025-10-09 10:04:58.376 2 DEBUG oslo_service.periodic_task [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  9 10:04:58 compute-1 nova_compute[162974]: 2025-10-09 10:04:58.377 2 DEBUG nova.compute.manager [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  9 10:04:58 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:04:58 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000011s ======
Oct  9 10:04:58 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:04:58.508 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct  9 10:04:59 compute-1 nova_compute[162974]: 2025-10-09 10:04:59.115 2 DEBUG oslo_service.periodic_task [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  9 10:04:59 compute-1 nova_compute[162974]: 2025-10-09 10:04:59.115 2 DEBUG oslo_service.periodic_task [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  9 10:04:59 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:04:59 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:04:59 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:04:59.308 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:04:59 compute-1 ovs-vsctl[173704]: ovs|00001|db_ctl_base|ERR|no key "dpdk-init" in Open_vSwitch record "." column other_config
Oct  9 10:05:00 compute-1 virtqemud[162526]: Failed to connect socket to '/var/run/libvirt/virtnetworkd-sock-ro': No such file or directory
Oct  9 10:05:00 compute-1 virtqemud[162526]: Failed to connect socket to '/var/run/libvirt/virtnwfilterd-sock-ro': No such file or directory
Oct  9 10:05:00 compute-1 virtqemud[162526]: Failed to connect socket to '/var/run/libvirt/virtstoraged-sock-ro': No such file or directory
Oct  9 10:05:00 compute-1 podman[173880]: 2025-10-09 10:05:00.418233576 +0000 UTC m=+0.058208856 container health_status dd7a5da700b8ba72ceba21d69aecf00384a339e317089fce3f8212a1183599aa (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251001, config_id=iscsid, container_name=iscsid, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team)
Oct  9 10:05:00 compute-1 nova_compute[162974]: 2025-10-09 10:05:00.498 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:05:00 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:05:00 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:05:00 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:05:00.510 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:05:00 compute-1 ceph-mds[14063]: mds.cephfs.compute-1.svghvn asok_command: cache status {prefix=cache status} (starting...)
Oct  9 10:05:00 compute-1 ceph-mds[14063]: mds.cephfs.compute-1.svghvn Can't run that command on an inactive MDS!
Oct  9 10:05:00 compute-1 lvm[174020]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Oct  9 10:05:00 compute-1 lvm[174020]: VG ceph_vg0 finished
Oct  9 10:05:00 compute-1 ceph-mds[14063]: mds.cephfs.compute-1.svghvn asok_command: client ls {prefix=client ls} (starting...)
Oct  9 10:05:00 compute-1 ceph-mds[14063]: mds.cephfs.compute-1.svghvn Can't run that command on an inactive MDS!
Oct  9 10:05:00 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 10:05:00 compute-1 kernel: block vda: the capability attribute has been deprecated.
Oct  9 10:05:01 compute-1 nova_compute[162974]: 2025-10-09 10:05:01.113 2 DEBUG oslo_service.periodic_task [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  9 10:05:01 compute-1 rsyslogd[1241]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct  9 10:05:01 compute-1 rsyslogd[1241]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct  9 10:05:01 compute-1 ceph-mds[14063]: mds.cephfs.compute-1.svghvn asok_command: damage ls {prefix=damage ls} (starting...)
Oct  9 10:05:01 compute-1 ceph-mds[14063]: mds.cephfs.compute-1.svghvn Can't run that command on an inactive MDS!
Oct  9 10:05:01 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:05:01 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:05:01 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:05:01.310 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:05:01 compute-1 ceph-mds[14063]: mds.cephfs.compute-1.svghvn asok_command: dump loads {prefix=dump loads} (starting...)
Oct  9 10:05:01 compute-1 ceph-mds[14063]: mds.cephfs.compute-1.svghvn Can't run that command on an inactive MDS!
Oct  9 10:05:01 compute-1 ceph-mds[14063]: mds.cephfs.compute-1.svghvn asok_command: dump tree {prefix=dump tree,root=/} (starting...)
Oct  9 10:05:01 compute-1 ceph-mds[14063]: mds.cephfs.compute-1.svghvn Can't run that command on an inactive MDS!
Oct  9 10:05:01 compute-1 ceph-mds[14063]: mds.cephfs.compute-1.svghvn asok_command: dump_blocked_ops {prefix=dump_blocked_ops} (starting...)
Oct  9 10:05:01 compute-1 ceph-mds[14063]: mds.cephfs.compute-1.svghvn Can't run that command on an inactive MDS!
Oct  9 10:05:01 compute-1 ceph-mon[9795]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct  9 10:05:01 compute-1 ceph-mon[9795]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1761942395' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  9 10:05:01 compute-1 ceph-mds[14063]: mds.cephfs.compute-1.svghvn asok_command: dump_historic_ops {prefix=dump_historic_ops} (starting...)
Oct  9 10:05:01 compute-1 ceph-mds[14063]: mds.cephfs.compute-1.svghvn Can't run that command on an inactive MDS!
Oct  9 10:05:01 compute-1 ceph-mon[9795]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "report"} v 0)
Oct  9 10:05:01 compute-1 ceph-mon[9795]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3126869266' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Oct  9 10:05:01 compute-1 ceph-mds[14063]: mds.cephfs.compute-1.svghvn asok_command: dump_historic_ops_by_duration {prefix=dump_historic_ops_by_duration} (starting...)
Oct  9 10:05:01 compute-1 ceph-mds[14063]: mds.cephfs.compute-1.svghvn Can't run that command on an inactive MDS!
Oct  9 10:05:01 compute-1 ceph-mon[9795]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "config log"} v 0)
Oct  9 10:05:01 compute-1 ceph-mon[9795]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3069884087' entity='client.admin' cmd=[{"prefix": "config log"}]: dispatch
Oct  9 10:05:02 compute-1 ceph-mds[14063]: mds.cephfs.compute-1.svghvn asok_command: dump_ops_in_flight {prefix=dump_ops_in_flight} (starting...)
Oct  9 10:05:02 compute-1 ceph-mds[14063]: mds.cephfs.compute-1.svghvn Can't run that command on an inactive MDS!
Oct  9 10:05:02 compute-1 ceph-mds[14063]: mds.cephfs.compute-1.svghvn asok_command: get subtrees {prefix=get subtrees} (starting...)
Oct  9 10:05:02 compute-1 ceph-mds[14063]: mds.cephfs.compute-1.svghvn Can't run that command on an inactive MDS!
Oct  9 10:05:02 compute-1 ceph-mon[9795]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct  9 10:05:02 compute-1 ceph-mon[9795]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1937344101' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  9 10:05:02 compute-1 ceph-mds[14063]: mds.cephfs.compute-1.svghvn asok_command: ops {prefix=ops} (starting...)
Oct  9 10:05:02 compute-1 ceph-mds[14063]: mds.cephfs.compute-1.svghvn Can't run that command on an inactive MDS!
Oct  9 10:05:02 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:05:02 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 10:05:02 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:05:02.511 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 10:05:02 compute-1 ceph-mon[9795]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "log last", "channel": "cephadm"} v 0)
Oct  9 10:05:02 compute-1 ceph-mon[9795]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/108986200' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm"}]: dispatch
Oct  9 10:05:02 compute-1 nova_compute[162974]: 2025-10-09 10:05:02.850 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:05:02 compute-1 ceph-mon[9795]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr dump"} v 0)
Oct  9 10:05:02 compute-1 ceph-mon[9795]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2685102981' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Oct  9 10:05:02 compute-1 ceph-mon[9795]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr dump"} v 0)
Oct  9 10:05:02 compute-1 ceph-mon[9795]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3857343826' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Oct  9 10:05:02 compute-1 ceph-mds[14063]: mds.cephfs.compute-1.svghvn asok_command: session ls {prefix=session ls} (starting...)
Oct  9 10:05:02 compute-1 ceph-mds[14063]: mds.cephfs.compute-1.svghvn Can't run that command on an inactive MDS!
Oct  9 10:05:03 compute-1 ceph-mds[14063]: mds.cephfs.compute-1.svghvn asok_command: status {prefix=status} (starting...)
Oct  9 10:05:03 compute-1 ceph-mon[9795]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr metadata"} v 0)
Oct  9 10:05:03 compute-1 ceph-mon[9795]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1643024054' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Oct  9 10:05:03 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:05:03 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:05:03 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:05:03.312 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:05:03 compute-1 ceph-mon[9795]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "features"} v 0)
Oct  9 10:05:03 compute-1 ceph-mon[9795]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/406982026' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Oct  9 10:05:03 compute-1 ceph-mon[9795]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr module ls"} v 0)
Oct  9 10:05:03 compute-1 ceph-mon[9795]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2541611345' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Oct  9 10:05:03 compute-1 ceph-mon[9795]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "health", "detail": "detail"} v 0)
Oct  9 10:05:03 compute-1 ceph-mon[9795]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1522824198' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch
Oct  9 10:05:04 compute-1 ceph-mon[9795]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr stat"} v 0)
Oct  9 10:05:04 compute-1 ceph-mon[9795]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3984711169' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
Oct  9 10:05:04 compute-1 ceph-mon[9795]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr stat"} v 0)
Oct  9 10:05:04 compute-1 ceph-mon[9795]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/4057339861' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
Oct  9 10:05:04 compute-1 ceph-mon[9795]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"} v 0)
Oct  9 10:05:04 compute-1 ceph-mon[9795]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/668757568' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"}]: dispatch
Oct  9 10:05:04 compute-1 ceph-mon[9795]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"} v 0)
Oct  9 10:05:04 compute-1 ceph-mon[9795]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/64288757' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"}]: dispatch
Oct  9 10:05:04 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:05:04 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:05:04 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:05:04.514 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:05:04 compute-1 ceph-mon[9795]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr versions"} v 0)
Oct  9 10:05:04 compute-1 ceph-mon[9795]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3256037916' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Oct  9 10:05:04 compute-1 ceph-mon[9795]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"} v 0)
Oct  9 10:05:04 compute-1 ceph-mon[9795]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2889519608' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"}]: dispatch
Oct  9 10:05:04 compute-1 ceph-mon[9795]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"} v 0)
Oct  9 10:05:04 compute-1 ceph-mon[9795]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3026430711' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"}]: dispatch
Oct  9 10:05:05 compute-1 ceph-mon[9795]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr dump"} v 0)
Oct  9 10:05:05 compute-1 ceph-mon[9795]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2521577722' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Oct  9 10:05:05 compute-1 ceph-mon[9795]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"} v 0)
Oct  9 10:05:05 compute-1 ceph-mon[9795]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4002817725' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"}]: dispatch
Oct  9 10:05:05 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:05:05 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:05:05 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:05:05.314 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:05:05 compute-1 nova_compute[162974]: 2025-10-09 10:05:05.501 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:05:05 compute-1 ceph-mon[9795]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr metadata"} v 0)
Oct  9 10:05:05 compute-1 ceph-mon[9795]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1164563443' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Oct  9 10:05:05 compute-1 ceph-mon[9795]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr versions"} v 0)
Oct  9 10:05:05 compute-1 ceph-mon[9795]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/62328478' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Oct  9 10:05:05 compute-1 ceph-mon[9795]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr metadata"} v 0)
Oct  9 10:05:05 compute-1 ceph-mon[9795]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2194491156' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Oct  9 10:05:05 compute-1 ceph-mon[9795]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"} v 0)
Oct  9 10:05:05 compute-1 ceph-mon[9795]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/670162803' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"}]: dispatch
Oct  9 10:05:05 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 10:05:05 compute-1 ceph-mon[9795]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr module ls"} v 0)
Oct  9 10:05:05 compute-1 ceph-mon[9795]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1284909676' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 81 heartbeat osd_stat(store_statfs(0x4fcaaa000/0x0/0x4ffc00000, data 0xf7546/0x16f000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x2fdf9c1), peers [1,2] op hist [])
Oct  9 10:05:05 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 10.7 scrub starts
Oct  9 10:05:05 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 10.7 scrub ok
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 81 handle_osd_map epochs [81,82], i have 81, src has [1,82]
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 81 handle_osd_map epochs [82,82], i have 82, src has [1,82]
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 82 pg[10.b( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=60/60 les/c/f=61/61/0 sis=81) [0] r=0 lpr=81 pi=[60,81)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary/WaitActingChange 0.829947 2 0.000040
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 82 pg[10.b( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=60/60 les/c/f=61/61/0 sis=81) [0] r=0 lpr=81 pi=[60,81)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary 0.830111 0 0.000000
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 82 pg[10.b( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=60/60 les/c/f=61/61/0 sis=81) [0] r=0 lpr=81 pi=[60,81)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started 0.830134 0 0.000000
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 82 pg[10.b( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=60/60 les/c/f=61/61/0 sis=81) [0] r=0 lpr=81 pi=[60,81)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 82 pg[10.b( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=60/60 les/c/f=61/61/0 sis=82) [0]/[2] r=-1 lpr=82 pi=[60,82)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 82 pg[10.b( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=60/60 les/c/f=61/61/0 sis=82) [0]/[2] r=-1 lpr=82 pi=[60,82)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] exit Reset 0.000052 1 0.000089
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 82 pg[10.b( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=60/60 les/c/f=61/61/0 sis=82) [0]/[2] r=-1 lpr=82 pi=[60,82)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Started
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 82 pg[10.b( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=60/60 les/c/f=61/61/0 sis=82) [0]/[2] r=-1 lpr=82 pi=[60,82)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Start
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 82 pg[10.b( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=60/60 les/c/f=61/61/0 sis=82) [0]/[2] r=-1 lpr=82 pi=[60,82)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 82 pg[10.b( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=60/60 les/c/f=61/61/0 sis=82) [0]/[2] r=-1 lpr=82 pi=[60,82)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] exit Start 0.000004 0 0.000000
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 82 pg[10.b( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=60/60 les/c/f=61/61/0 sis=82) [0]/[2] r=-1 lpr=82 pi=[60,82)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Started/Stray
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 82 pg[10.1b( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=61/61 les/c/f=62/62/0 sis=81) [0] r=0 lpr=81 pi=[61,81)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary/WaitActingChange 0.830132 2 0.000026
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 82 pg[10.1b( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=61/61 les/c/f=62/62/0 sis=81) [0] r=0 lpr=81 pi=[61,81)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary 0.830229 0 0.000000
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 82 pg[10.1b( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=61/61 les/c/f=62/62/0 sis=81) [0] r=0 lpr=81 pi=[61,81)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started 0.830467 0 0.000000
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 82 pg[10.1b( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=61/61 les/c/f=62/62/0 sis=81) [0] r=0 lpr=81 pi=[61,81)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 82 pg[10.1b( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=61/61 les/c/f=62/62/0 sis=82) [0]/[2] r=-1 lpr=82 pi=[61,82)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 82 pg[10.1b( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=61/61 les/c/f=62/62/0 sis=82) [0]/[2] r=-1 lpr=82 pi=[61,82)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] exit Reset 0.000122 1 0.000368
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 82 pg[10.1b( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=61/61 les/c/f=62/62/0 sis=82) [0]/[2] r=-1 lpr=82 pi=[61,82)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Started
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 82 pg[10.1b( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=61/61 les/c/f=62/62/0 sis=82) [0]/[2] r=-1 lpr=82 pi=[61,82)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Start
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 82 pg[10.1b( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=61/61 les/c/f=62/62/0 sis=82) [0]/[2] r=-1 lpr=82 pi=[61,82)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 82 pg[10.1b( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=61/61 les/c/f=62/62/0 sis=82) [0]/[2] r=-1 lpr=82 pi=[61,82)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] exit Start 0.000112 0 0.000000
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 82 pg[10.1b( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=61/61 les/c/f=62/62/0 sis=82) [0]/[2] r=-1 lpr=82 pi=[61,82)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Started/Stray
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 82 pg[10.1a( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=62/62 les/c/f=63/63/0 sis=79) [0] r=0 lpr=81 pi=[62,79)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary/WaitActingChange 0.985656 2 0.000044
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 82 pg[10.1a( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=62/62 les/c/f=63/63/0 sis=79) [0] r=0 lpr=81 pi=[62,79)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary 0.985770 0 0.000000
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 82 pg[10.1a( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=62/62 les/c/f=63/63/0 sis=79) [0] r=0 lpr=81 pi=[62,79)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started 0.985784 0 0.000000
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 82 pg[10.1a( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=62/62 les/c/f=63/63/0 sis=79) [0] r=0 lpr=81 pi=[62,79)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 82 pg[10.1a( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=62/62 les/c/f=63/63/0 sis=82) [0]/[1] r=-1 lpr=82 pi=[62,82)/2 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 82 pg[10.1a( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=62/62 les/c/f=63/63/0 sis=82) [0]/[1] r=-1 lpr=82 pi=[62,82)/2 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] exit Reset 0.000035 1 0.000060
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 82 pg[10.1a( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=62/62 les/c/f=63/63/0 sis=82) [0]/[1] r=-1 lpr=82 pi=[62,82)/2 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Started
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 82 pg[10.1a( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=62/62 les/c/f=63/63/0 sis=82) [0]/[1] r=-1 lpr=82 pi=[62,82)/2 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Start
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 82 pg[10.1a( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=62/62 les/c/f=63/63/0 sis=82) [0]/[1] r=-1 lpr=82 pi=[62,82)/2 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 82 pg[10.1a( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=62/62 les/c/f=63/63/0 sis=82) [0]/[1] r=-1 lpr=82 pi=[62,82)/2 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] exit Start 0.000004 0 0.000000
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 82 handle_osd_map epochs [82,82], i have 82, src has [1,82]
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 82 pg[10.1a( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=62/62 les/c/f=63/63/0 sis=82) [0]/[1] r=-1 lpr=82 pi=[62,82)/2 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Started/Stray
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 82 pg[10.a( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=62/62 les/c/f=63/63/0 sis=79) [0] r=0 lpr=81 pi=[62,79)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary/WaitActingChange 0.985156 2 0.000150
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 82 pg[10.a( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=62/62 les/c/f=63/63/0 sis=79) [0] r=0 lpr=81 pi=[62,79)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary 0.985395 0 0.000000
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 82 pg[10.a( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=62/62 les/c/f=63/63/0 sis=79) [0] r=0 lpr=81 pi=[62,79)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started 0.985542 0 0.000000
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 82 pg[10.a( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=62/62 les/c/f=63/63/0 sis=79) [0] r=0 lpr=81 pi=[62,79)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 82 pg[10.a( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=62/62 les/c/f=63/63/0 sis=82) [0]/[1] r=-1 lpr=82 pi=[62,82)/2 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 82 pg[10.a( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=62/62 les/c/f=63/63/0 sis=82) [0]/[1] r=-1 lpr=82 pi=[62,82)/2 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] exit Reset 0.000051 1 0.000072
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 82 pg[10.a( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=62/62 les/c/f=63/63/0 sis=82) [0]/[1] r=-1 lpr=82 pi=[62,82)/2 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Started
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 82 pg[10.a( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=62/62 les/c/f=63/63/0 sis=82) [0]/[1] r=-1 lpr=82 pi=[62,82)/2 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Start
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 82 pg[10.a( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=62/62 les/c/f=63/63/0 sis=82) [0]/[1] r=-1 lpr=82 pi=[62,82)/2 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 82 pg[10.a( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=62/62 les/c/f=63/63/0 sis=82) [0]/[1] r=-1 lpr=82 pi=[62,82)/2 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] exit Start 0.000080 0 0.000000
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 82 pg[10.a( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=62/62 les/c/f=63/63/0 sis=82) [0]/[1] r=-1 lpr=82 pi=[62,82)/2 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Started/Stray
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 82 pg[6.b( v 41'42 (0'0,41'42] local-lis/les=61/62 n=1 ec=49/14 lis/c=61/61 les/c/f=62/62/0 sis=81) [1] r=-1 lpr=81 pi=[61,81)/1 crt=41'42 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 0.990245 7 0.000054
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 82 pg[6.b( v 41'42 (0'0,41'42] local-lis/les=61/62 n=1 ec=49/14 lis/c=61/61 les/c/f=62/62/0 sis=81) [1] r=-1 lpr=81 pi=[61,81)/1 crt=41'42 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ReplicaActive
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 82 pg[6.b( v 41'42 (0'0,41'42] local-lis/les=61/62 n=1 ec=49/14 lis/c=61/61 les/c/f=62/62/0 sis=81) [1] r=-1 lpr=81 pi=[61,81)/1 crt=41'42 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ReplicaActive/RepNotRecovering
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 82 pg[6.b( v 41'42 (0'0,41'42] local-lis/les=61/62 n=1 ec=49/14 lis/c=61/61 les/c/f=62/62/0 sis=81) [1] r=-1 lpr=81 pi=[61,81)/1 luod=0'0 crt=41'42 mlcod 0'0 active mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.010673 2 0.000068
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 82 pg[6.b( v 41'42 (0'0,41'42] local-lis/les=61/62 n=1 ec=49/14 lis/c=61/61 les/c/f=62/62/0 sis=81) [1] r=-1 lpr=81 pi=[61,81)/1 luod=0'0 crt=41'42 mlcod 0'0 active mbc={}] exit Started/ReplicaActive 0.010722 0 0.000000
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 82 pg[6.b( v 41'42 (0'0,41'42] local-lis/les=61/62 n=1 ec=49/14 lis/c=61/61 les/c/f=62/62/0 sis=81) [1] r=-1 lpr=81 pi=[61,81)/1 luod=0'0 crt=41'42 mlcod 0'0 active mbc={}] enter Started/ToDelete
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 82 pg[6.b( v 41'42 (0'0,41'42] local-lis/les=61/62 n=1 ec=49/14 lis/c=61/61 les/c/f=62/62/0 sis=81) [1] r=-1 lpr=81 pi=[61,81)/1 luod=0'0 crt=41'42 mlcod 0'0 active mbc={}] enter Started/ToDelete/WaitDeleteReserved
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 82 pg[6.b( v 41'42 (0'0,41'42] local-lis/les=61/62 n=1 ec=49/14 lis/c=61/61 les/c/f=62/62/0 sis=81) [1] r=-1 lpr=81 pi=[61,81)/1 luod=0'0 crt=41'42 mlcod 0'0 active mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000062 1 0.000090
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 82 pg[6.b( v 41'42 (0'0,41'42] local-lis/les=61/62 n=1 ec=49/14 lis/c=61/61 les/c/f=62/62/0 sis=81) [1] r=-1 lpr=81 pi=[61,81)/1 luod=0'0 crt=41'42 mlcod 0'0 active mbc={}] enter Started/ToDelete/Deleting
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 82 pg[6.b( v 41'42 (0'0,41'42] lb MIN local-lis/les=61/62 n=1 ec=49/14 lis/c=61/61 les/c/f=62/62/0 sis=81) [1] r=-1 lpr=81 DELETING pi=[61,81)/1 luod=0'0 crt=41'42 mlcod 0'0 active mbc={}] exit Started/ToDelete/Deleting 0.008442 2 0.000146
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 82 pg[6.b( v 41'42 (0'0,41'42] lb MIN local-lis/les=61/62 n=1 ec=49/14 lis/c=61/61 les/c/f=62/62/0 sis=81) [1] r=-1 lpr=81 pi=[61,81)/1 luod=0'0 crt=41'42 mlcod 0'0 active mbc={}] exit Started/ToDelete 0.008557 0 0.000000
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 82 pg[6.b( v 41'42 (0'0,41'42] lb MIN local-lis/les=61/62 n=1 ec=49/14 lis/c=61/61 les/c/f=62/62/0 sis=81) [1] r=-1 lpr=81 pi=[61,81)/1 luod=0'0 crt=41'42 mlcod 0'0 active mbc={}] exit Started 1.009599 0 0.000000
Oct  9 10:05:05 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 78118912 unmapped: 5726208 heap: 83845120 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:05 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 6.8 scrub starts
Oct  9 10:05:05 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 6.8 scrub ok
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 82 handle_osd_map epochs [82,83], i have 82, src has [1,83]
Oct  9 10:05:05 compute-1 ceph-osd[7514]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Oct  9 10:05:05 compute-1 ceph-osd[7514]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Oct  9 10:05:05 compute-1 ceph-osd[7514]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Oct  9 10:05:05 compute-1 ceph-osd[7514]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 83 pg[10.1a( v 40'1059 lc 0'0 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=62/62 les/c/f=63/63/0 sis=82) [0]/[1] r=-1 lpr=82 pi=[62,82)/2 crt=40'1059 mlcod 0'0 remapped NOTIFY m=4 mbc={}] exit Started/Stray 1.003266 6 0.000283
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 83 pg[10.1a( v 40'1059 lc 0'0 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=62/62 les/c/f=63/63/0 sis=82) [0]/[1] r=-1 lpr=82 pi=[62,82)/2 crt=40'1059 mlcod 0'0 remapped NOTIFY m=4 mbc={}] enter Started/ReplicaActive
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 83 pg[10.1a( v 40'1059 lc 0'0 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=62/62 les/c/f=63/63/0 sis=82) [0]/[1] r=-1 lpr=82 pi=[62,82)/2 crt=40'1059 mlcod 0'0 remapped NOTIFY m=4 mbc={}] enter Started/ReplicaActive/RepNotRecovering
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 83 pg[10.a( v 40'1059 lc 0'0 (0'0,40'1059] local-lis/les=0/0 n=6 ec=53/34 lis/c=62/62 les/c/f=63/63/0 sis=82) [0]/[1] r=-1 lpr=82 pi=[62,82)/2 crt=40'1059 mlcod 0'0 remapped NOTIFY m=9 mbc={}] exit Started/Stray 1.002763 6 0.000108
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 83 pg[10.a( v 40'1059 lc 0'0 (0'0,40'1059] local-lis/les=0/0 n=6 ec=53/34 lis/c=62/62 les/c/f=63/63/0 sis=82) [0]/[1] r=-1 lpr=82 pi=[62,82)/2 crt=40'1059 mlcod 0'0 remapped NOTIFY m=9 mbc={}] enter Started/ReplicaActive
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 83 pg[10.a( v 40'1059 lc 0'0 (0'0,40'1059] local-lis/les=0/0 n=6 ec=53/34 lis/c=62/62 les/c/f=63/63/0 sis=82) [0]/[1] r=-1 lpr=82 pi=[62,82)/2 crt=40'1059 mlcod 0'0 remapped NOTIFY m=9 mbc={}] enter Started/ReplicaActive/RepNotRecovering
Oct  9 10:05:05 compute-1 ceph-osd[7514]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Oct  9 10:05:05 compute-1 ceph-osd[7514]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 83 pg[10.1b( v 40'1059 lc 0'0 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=61/61 les/c/f=62/62/0 sis=82) [0]/[2] r=-1 lpr=82 pi=[61,82)/1 crt=40'1059 mlcod 0'0 remapped NOTIFY m=2 mbc={}] exit Started/Stray 1.004300 6 0.000227
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 83 pg[10.1b( v 40'1059 lc 0'0 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=61/61 les/c/f=62/62/0 sis=82) [0]/[2] r=-1 lpr=82 pi=[61,82)/1 crt=40'1059 mlcod 0'0 remapped NOTIFY m=2 mbc={}] enter Started/ReplicaActive
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 83 pg[10.1b( v 40'1059 lc 0'0 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=61/61 les/c/f=62/62/0 sis=82) [0]/[2] r=-1 lpr=82 pi=[61,82)/1 crt=40'1059 mlcod 0'0 remapped NOTIFY m=2 mbc={}] enter Started/ReplicaActive/RepNotRecovering
Oct  9 10:05:05 compute-1 ceph-osd[7514]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Oct  9 10:05:05 compute-1 ceph-osd[7514]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 83 pg[10.b( v 40'1059 lc 0'0 (0'0,40'1059] local-lis/les=0/0 n=6 ec=53/34 lis/c=60/60 les/c/f=61/61/0 sis=82) [0]/[2] r=-1 lpr=82 pi=[60,82)/1 crt=40'1059 mlcod 0'0 remapped NOTIFY m=4 mbc={}] exit Started/Stray 1.005224 6 0.000027
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 83 pg[10.b( v 40'1059 lc 0'0 (0'0,40'1059] local-lis/les=0/0 n=6 ec=53/34 lis/c=60/60 les/c/f=61/61/0 sis=82) [0]/[2] r=-1 lpr=82 pi=[60,82)/1 crt=40'1059 mlcod 0'0 remapped NOTIFY m=4 mbc={}] enter Started/ReplicaActive
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 83 pg[10.b( v 40'1059 lc 0'0 (0'0,40'1059] local-lis/les=0/0 n=6 ec=53/34 lis/c=60/60 les/c/f=61/61/0 sis=82) [0]/[2] r=-1 lpr=82 pi=[60,82)/1 crt=40'1059 mlcod 0'0 remapped NOTIFY m=4 mbc={}] enter Started/ReplicaActive/RepNotRecovering
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 83 pg[10.a( v 40'1059 lc 40'220 (0'0,40'1059] local-lis/les=0/0 n=6 ec=53/34 lis/c=82/62 les/c/f=83/63/0 sis=82) [0]/[1] r=-1 lpr=82 pi=[62,82)/2 luod=0'0 crt=40'1059 lcod 0'0 mlcod 0'0 active+remapped m=9 mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.002210 3 0.000223
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 83 pg[10.a( v 40'1059 lc 40'220 (0'0,40'1059] local-lis/les=0/0 n=6 ec=53/34 lis/c=82/62 les/c/f=83/63/0 sis=82) [0]/[1] r=-1 lpr=82 pi=[62,82)/2 luod=0'0 crt=40'1059 lcod 0'0 mlcod 0'0 active+remapped m=9 mbc={}] enter Started/ReplicaActive/RepWaitRecoveryReserved
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 83 pg[10.a( v 40'1059 lc 40'220 (0'0,40'1059] local-lis/les=0/0 n=6 ec=53/34 lis/c=82/62 les/c/f=83/63/0 sis=82) [0]/[1] r=-1 lpr=82 pi=[62,82)/2 luod=0'0 crt=40'1059 lcod 0'0 mlcod 0'0 active+remapped m=9 mbc={}] exit Started/ReplicaActive/RepWaitRecoveryReserved 0.000163 1 0.000054
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 83 pg[10.a( v 40'1059 lc 40'220 (0'0,40'1059] local-lis/les=0/0 n=6 ec=53/34 lis/c=82/62 les/c/f=83/63/0 sis=82) [0]/[1] r=-1 lpr=82 pi=[62,82)/2 luod=0'0 crt=40'1059 lcod 0'0 mlcod 0'0 active+remapped m=9 mbc={}] enter Started/ReplicaActive/RepRecovering
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 83 pg[10.1b( v 40'1059 lc 40'529 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=82/61 les/c/f=83/62/0 sis=82) [0]/[2] r=-1 lpr=82 pi=[61,82)/1 luod=0'0 crt=40'1059 lcod 0'0 mlcod 0'0 active+remapped m=2 mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.002160 3 0.000052
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 83 pg[10.1b( v 40'1059 lc 40'529 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=82/61 les/c/f=83/62/0 sis=82) [0]/[2] r=-1 lpr=82 pi=[61,82)/1 luod=0'0 crt=40'1059 lcod 0'0 mlcod 0'0 active+remapped m=2 mbc={}] enter Started/ReplicaActive/RepWaitRecoveryReserved
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 83 pg[10.a( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=6 ec=53/34 lis/c=82/62 les/c/f=83/63/0 sis=82) [0]/[1] r=-1 lpr=82 pi=[62,82)/2 luod=0'0 crt=40'1059 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive/RepRecovering 0.063973 1 0.000041
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 83 pg[10.1b( v 40'1059 lc 40'529 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=82/61 les/c/f=83/62/0 sis=82) [0]/[2] r=-1 lpr=82 pi=[61,82)/1 luod=0'0 crt=40'1059 lcod 0'0 mlcod 0'0 active+remapped m=2 mbc={}] exit Started/ReplicaActive/RepWaitRecoveryReserved 0.063721 1 0.000027
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 83 pg[10.1b( v 40'1059 lc 40'529 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=82/61 les/c/f=83/62/0 sis=82) [0]/[2] r=-1 lpr=82 pi=[61,82)/1 luod=0'0 crt=40'1059 lcod 0'0 mlcod 0'0 active+remapped m=2 mbc={}] enter Started/ReplicaActive/RepRecovering
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 83 pg[10.a( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=6 ec=53/34 lis/c=82/62 les/c/f=83/63/0 sis=82) [0]/[1] r=-1 lpr=82 pi=[62,82)/2 luod=0'0 crt=40'1059 mlcod 0'0 active+remapped mbc={}] enter Started/ReplicaActive/RepNotRecovering
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 83 pg[10.1a( v 40'1059 lc 40'312 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=82/62 les/c/f=83/63/0 sis=82) [0]/[1] r=-1 lpr=82 pi=[62,82)/2 luod=0'0 crt=40'1059 lcod 0'0 mlcod 0'0 active+remapped m=4 mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.066622 3 0.000123
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 83 pg[10.1a( v 40'1059 lc 40'312 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=82/62 les/c/f=83/63/0 sis=82) [0]/[1] r=-1 lpr=82 pi=[62,82)/2 luod=0'0 crt=40'1059 lcod 0'0 mlcod 0'0 active+remapped m=4 mbc={}] enter Started/ReplicaActive/RepWaitRecoveryReserved
Oct  9 10:05:05 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:05 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:05 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 79273984 unmapped: 4571136 heap: 83845120 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:05 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 722256 data_alloc: 218103808 data_used: 282624
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 83 pg[10.1b( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=82/61 les/c/f=83/62/0 sis=82) [0]/[2] r=-1 lpr=82 pi=[61,82)/1 luod=0'0 crt=40'1059 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive/RepRecovering 0.014849 1 0.000028
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 83 pg[10.1b( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=82/61 les/c/f=83/62/0 sis=82) [0]/[2] r=-1 lpr=82 pi=[61,82)/1 luod=0'0 crt=40'1059 mlcod 0'0 active+remapped mbc={}] enter Started/ReplicaActive/RepNotRecovering
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 83 pg[10.b( v 40'1059 lc 40'403 (0'0,40'1059] local-lis/les=0/0 n=6 ec=53/34 lis/c=82/60 les/c/f=83/61/0 sis=82) [0]/[2] r=-1 lpr=82 pi=[60,82)/1 luod=0'0 crt=40'1059 lcod 0'0 mlcod 0'0 active+remapped m=4 mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.080712 3 0.000109
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 83 pg[10.b( v 40'1059 lc 40'403 (0'0,40'1059] local-lis/les=0/0 n=6 ec=53/34 lis/c=82/60 les/c/f=83/61/0 sis=82) [0]/[2] r=-1 lpr=82 pi=[60,82)/1 luod=0'0 crt=40'1059 lcod 0'0 mlcod 0'0 active+remapped m=4 mbc={}] enter Started/ReplicaActive/RepWaitRecoveryReserved
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 83 pg[10.1a( v 40'1059 lc 40'312 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=82/62 les/c/f=83/63/0 sis=82) [0]/[1] r=-1 lpr=82 pi=[62,82)/2 luod=0'0 crt=40'1059 lcod 0'0 mlcod 0'0 active+remapped m=4 mbc={}] exit Started/ReplicaActive/RepWaitRecoveryReserved 0.014776 1 0.000227
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 83 pg[10.1a( v 40'1059 lc 40'312 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=82/62 les/c/f=83/63/0 sis=82) [0]/[1] r=-1 lpr=82 pi=[62,82)/2 luod=0'0 crt=40'1059 lcod 0'0 mlcod 0'0 active+remapped m=4 mbc={}] enter Started/ReplicaActive/RepRecovering
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 83 pg[10.1a( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=82/62 les/c/f=83/63/0 sis=82) [0]/[1] r=-1 lpr=82 pi=[62,82)/2 luod=0'0 crt=40'1059 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive/RepRecovering 0.028655 1 0.000152
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 83 pg[10.1a( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=82/62 les/c/f=83/63/0 sis=82) [0]/[1] r=-1 lpr=82 pi=[62,82)/2 luod=0'0 crt=40'1059 mlcod 0'0 active+remapped mbc={}] enter Started/ReplicaActive/RepNotRecovering
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 83 pg[10.b( v 40'1059 lc 40'403 (0'0,40'1059] local-lis/les=0/0 n=6 ec=53/34 lis/c=82/60 les/c/f=83/61/0 sis=82) [0]/[2] r=-1 lpr=82 pi=[60,82)/1 luod=0'0 crt=40'1059 lcod 0'0 mlcod 0'0 active+remapped m=4 mbc={}] exit Started/ReplicaActive/RepWaitRecoveryReserved 0.028748 1 0.000025
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 83 pg[10.b( v 40'1059 lc 40'403 (0'0,40'1059] local-lis/les=0/0 n=6 ec=53/34 lis/c=82/60 les/c/f=83/61/0 sis=82) [0]/[2] r=-1 lpr=82 pi=[60,82)/1 luod=0'0 crt=40'1059 lcod 0'0 mlcod 0'0 active+remapped m=4 mbc={}] enter Started/ReplicaActive/RepRecovering
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 83 pg[10.b( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=6 ec=53/34 lis/c=82/60 les/c/f=83/61/0 sis=82) [0]/[2] r=-1 lpr=82 pi=[60,82)/1 luod=0'0 crt=40'1059 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive/RepRecovering 0.028923 1 0.000029
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 83 pg[10.b( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=6 ec=53/34 lis/c=82/60 les/c/f=83/61/0 sis=82) [0]/[2] r=-1 lpr=82 pi=[60,82)/1 luod=0'0 crt=40'1059 mlcod 0'0 active+remapped mbc={}] enter Started/ReplicaActive/RepNotRecovering
Oct  9 10:05:05 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 9.10 scrub starts
Oct  9 10:05:05 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 9.10 scrub ok
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 83 handle_osd_map epochs [84,84], i have 83, src has [1,84]
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 83 handle_osd_map epochs [84,84], i have 84, src has [1,84]
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 84 pg[10.b( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=6 ec=53/34 lis/c=82/60 les/c/f=83/61/0 sis=82) [0]/[2] r=-1 lpr=82 pi=[60,82)/1 luod=0'0 crt=40'1059 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.877427 1 0.000019
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 84 pg[10.b( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=6 ec=53/34 lis/c=82/60 les/c/f=83/61/0 sis=82) [0]/[2] r=-1 lpr=82 pi=[60,82)/1 luod=0'0 crt=40'1059 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive 1.015892 0 0.000000
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 84 pg[10.b( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=6 ec=53/34 lis/c=82/60 les/c/f=83/61/0 sis=82) [0]/[2] r=-1 lpr=82 pi=[60,82)/1 luod=0'0 crt=40'1059 mlcod 0'0 active+remapped mbc={}] exit Started 2.021150 0 0.000000
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 84 pg[10.b( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=6 ec=53/34 lis/c=82/60 les/c/f=83/61/0 sis=82) [0]/[2] r=-1 lpr=82 pi=[60,82)/1 luod=0'0 crt=40'1059 mlcod 0'0 active+remapped mbc={}] enter Reset
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 84 pg[10.b( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=6 ec=53/34 lis/c=82/60 les/c/f=83/61/0 sis=84) [0] r=0 lpr=84 pi=[60,84)/1 luod=0'0 crt=40'1059 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 84 pg[10.b( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=6 ec=53/34 lis/c=82/60 les/c/f=83/61/0 sis=84) [0] r=0 lpr=84 pi=[60,84)/1 crt=40'1059 mlcod 0'0 unknown mbc={}] exit Reset 0.000064 1 0.000106
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 84 pg[10.b( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=6 ec=53/34 lis/c=82/60 les/c/f=83/61/0 sis=84) [0] r=0 lpr=84 pi=[60,84)/1 crt=40'1059 mlcod 0'0 unknown mbc={}] enter Started
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 84 pg[10.b( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=6 ec=53/34 lis/c=82/60 les/c/f=83/61/0 sis=84) [0] r=0 lpr=84 pi=[60,84)/1 crt=40'1059 mlcod 0'0 unknown mbc={}] enter Start
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 84 pg[10.b( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=6 ec=53/34 lis/c=82/60 les/c/f=83/61/0 sis=84) [0] r=0 lpr=84 pi=[60,84)/1 crt=40'1059 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 84 pg[10.b( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=6 ec=53/34 lis/c=82/60 les/c/f=83/61/0 sis=84) [0] r=0 lpr=84 pi=[60,84)/1 crt=40'1059 mlcod 0'0 unknown mbc={}] exit Start 0.000005 0 0.000000
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 84 pg[10.b( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=6 ec=53/34 lis/c=82/60 les/c/f=83/61/0 sis=84) [0] r=0 lpr=84 pi=[60,84)/1 crt=40'1059 mlcod 0'0 unknown mbc={}] enter Started/Primary
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 84 pg[10.b( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=6 ec=53/34 lis/c=82/60 les/c/f=83/61/0 sis=84) [0] r=0 lpr=84 pi=[60,84)/1 crt=40'1059 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 84 pg[10.b( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=6 ec=53/34 lis/c=82/60 les/c/f=83/61/0 sis=84) [0] r=0 lpr=84 pi=[60,84)/1 crt=40'1059 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 84 pg[10.b( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=6 ec=53/34 lis/c=82/60 les/c/f=83/61/0 sis=84) [0] r=0 lpr=84 pi=[60,84)/1 crt=40'1059 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000030 1 0.000035
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 84 pg[10.b( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=6 ec=53/34 lis/c=82/60 les/c/f=83/61/0 sis=84) [0] r=0 lpr=84 pi=[60,84)/1 crt=40'1059 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 84 pg[10.1b( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=82/61 les/c/f=83/62/0 sis=82) [0]/[2] r=-1 lpr=82 pi=[61,82)/1 luod=0'0 crt=40'1059 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.935508 1 0.000045
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 84 pg[10.1b( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=82/61 les/c/f=83/62/0 sis=82) [0]/[2] r=-1 lpr=82 pi=[61,82)/1 luod=0'0 crt=40'1059 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive 1.016315 0 0.000000
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 84 pg[10.1b( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=82/61 les/c/f=83/62/0 sis=82) [0]/[2] r=-1 lpr=82 pi=[61,82)/1 luod=0'0 crt=40'1059 mlcod 0'0 active+remapped mbc={}] exit Started 2.020769 0 0.000000
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 84 pg[10.1b( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=82/61 les/c/f=83/62/0 sis=82) [0]/[2] r=-1 lpr=82 pi=[61,82)/1 luod=0'0 crt=40'1059 mlcod 0'0 active+remapped mbc={}] enter Reset
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 84 pg[10.1b( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=82/61 les/c/f=83/62/0 sis=84) [0] r=0 lpr=84 pi=[61,84)/1 luod=0'0 crt=40'1059 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 84 pg[10.1b( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=82/61 les/c/f=83/62/0 sis=84) [0] r=0 lpr=84 pi=[61,84)/1 crt=40'1059 mlcod 0'0 unknown mbc={}] exit Reset 0.000029 1 0.000051
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 84 pg[10.1b( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=82/61 les/c/f=83/62/0 sis=84) [0] r=0 lpr=84 pi=[61,84)/1 crt=40'1059 mlcod 0'0 unknown mbc={}] enter Started
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 84 pg[10.1b( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=82/61 les/c/f=83/62/0 sis=84) [0] r=0 lpr=84 pi=[61,84)/1 crt=40'1059 mlcod 0'0 unknown mbc={}] enter Start
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 84 pg[10.1b( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=82/61 les/c/f=83/62/0 sis=84) [0] r=0 lpr=84 pi=[61,84)/1 crt=40'1059 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 84 pg[10.1b( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=82/61 les/c/f=83/62/0 sis=84) [0] r=0 lpr=84 pi=[61,84)/1 crt=40'1059 mlcod 0'0 unknown mbc={}] exit Start 0.000004 0 0.000000
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 84 pg[10.1b( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=82/61 les/c/f=83/62/0 sis=84) [0] r=0 lpr=84 pi=[61,84)/1 crt=40'1059 mlcod 0'0 unknown mbc={}] enter Started/Primary
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 84 pg[10.1b( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=82/61 les/c/f=83/62/0 sis=84) [0] r=0 lpr=84 pi=[61,84)/1 crt=40'1059 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 84 pg[10.1b( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=82/61 les/c/f=83/62/0 sis=84) [0] r=0 lpr=84 pi=[61,84)/1 crt=40'1059 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 84 pg[10.1b( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=82/61 les/c/f=83/62/0 sis=84) [0] r=0 lpr=84 pi=[61,84)/1 crt=40'1059 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000016 1 0.000026
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 84 pg[10.1b( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=82/61 les/c/f=83/62/0 sis=84) [0] r=0 lpr=84 pi=[61,84)/1 crt=40'1059 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 84 pg[10.1a( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=82/62 les/c/f=83/63/0 sis=82) [0]/[1] r=-1 lpr=82 pi=[62,82)/2 luod=0'0 crt=40'1059 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.906811 1 0.000038
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 84 pg[10.1a( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=82/62 les/c/f=83/63/0 sis=82) [0]/[1] r=-1 lpr=82 pi=[62,82)/2 luod=0'0 crt=40'1059 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive 1.017051 0 0.000000
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 84 pg[10.1a( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=82/62 les/c/f=83/63/0 sis=82) [0]/[1] r=-1 lpr=82 pi=[62,82)/2 luod=0'0 crt=40'1059 mlcod 0'0 active+remapped mbc={}] exit Started 2.020604 0 0.000000
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 84 pg[10.1a( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=82/62 les/c/f=83/63/0 sis=82) [0]/[1] r=-1 lpr=82 pi=[62,82)/2 luod=0'0 crt=40'1059 mlcod 0'0 active+remapped mbc={}] enter Reset
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 84 pg[10.1a( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=82/62 les/c/f=83/63/0 sis=84) [0] r=0 lpr=84 pi=[62,84)/2 luod=0'0 crt=40'1059 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 84 pg[10.1a( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=82/62 les/c/f=83/63/0 sis=84) [0] r=0 lpr=84 pi=[62,84)/2 crt=40'1059 mlcod 0'0 unknown mbc={}] exit Reset 0.000027 1 0.000045
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 84 pg[10.1a( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=82/62 les/c/f=83/63/0 sis=84) [0] r=0 lpr=84 pi=[62,84)/2 crt=40'1059 mlcod 0'0 unknown mbc={}] enter Started
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 84 pg[10.1a( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=82/62 les/c/f=83/63/0 sis=84) [0] r=0 lpr=84 pi=[62,84)/2 crt=40'1059 mlcod 0'0 unknown mbc={}] enter Start
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 84 pg[10.1a( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=82/62 les/c/f=83/63/0 sis=84) [0] r=0 lpr=84 pi=[62,84)/2 crt=40'1059 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 84 pg[10.1a( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=82/62 les/c/f=83/63/0 sis=84) [0] r=0 lpr=84 pi=[62,84)/2 crt=40'1059 mlcod 0'0 unknown mbc={}] exit Start 0.000005 0 0.000000
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 84 pg[10.1a( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=82/62 les/c/f=83/63/0 sis=84) [0] r=0 lpr=84 pi=[62,84)/2 crt=40'1059 mlcod 0'0 unknown mbc={}] enter Started/Primary
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 84 pg[10.1a( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=82/62 les/c/f=83/63/0 sis=84) [0] r=0 lpr=84 pi=[62,84)/2 crt=40'1059 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 84 pg[10.1a( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=82/62 les/c/f=83/63/0 sis=84) [0] r=0 lpr=84 pi=[62,84)/2 crt=40'1059 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 84 pg[10.1a( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=82/62 les/c/f=83/63/0 sis=84) [0] r=0 lpr=84 pi=[62,84)/2 crt=40'1059 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000030 1 0.000039
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 84 pg[10.1a( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=82/62 les/c/f=83/63/0 sis=84) [0] r=0 lpr=84 pi=[62,84)/2 crt=40'1059 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 84 pg[10.a( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=6 ec=53/34 lis/c=82/62 les/c/f=83/63/0 sis=82) [0]/[1] r=-1 lpr=82 pi=[62,82)/2 luod=0'0 crt=40'1059 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.950788 1 0.000090
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 84 pg[10.a( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=6 ec=53/34 lis/c=82/62 les/c/f=83/63/0 sis=82) [0]/[1] r=-1 lpr=82 pi=[62,82)/2 luod=0'0 crt=40'1059 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive 1.017298 0 0.000000
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 84 pg[10.a( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=6 ec=53/34 lis/c=82/62 les/c/f=83/63/0 sis=82) [0]/[1] r=-1 lpr=82 pi=[62,82)/2 luod=0'0 crt=40'1059 mlcod 0'0 active+remapped mbc={}] exit Started 2.020200 0 0.000000
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 84 pg[10.a( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=6 ec=53/34 lis/c=82/62 les/c/f=83/63/0 sis=82) [0]/[1] r=-1 lpr=82 pi=[62,82)/2 luod=0'0 crt=40'1059 mlcod 0'0 active+remapped mbc={}] enter Reset
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 84 pg[10.a( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=6 ec=53/34 lis/c=82/62 les/c/f=83/63/0 sis=84) [0] r=0 lpr=84 pi=[62,84)/2 luod=0'0 crt=40'1059 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 84 pg[10.a( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=6 ec=53/34 lis/c=82/62 les/c/f=83/63/0 sis=84) [0] r=0 lpr=84 pi=[62,84)/2 crt=40'1059 mlcod 0'0 unknown mbc={}] exit Reset 0.000060 1 0.000089
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 84 pg[10.a( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=6 ec=53/34 lis/c=82/62 les/c/f=83/63/0 sis=84) [0] r=0 lpr=84 pi=[62,84)/2 crt=40'1059 mlcod 0'0 unknown mbc={}] enter Started
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 84 pg[10.a( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=6 ec=53/34 lis/c=82/62 les/c/f=83/63/0 sis=84) [0] r=0 lpr=84 pi=[62,84)/2 crt=40'1059 mlcod 0'0 unknown mbc={}] enter Start
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 84 pg[10.a( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=6 ec=53/34 lis/c=82/62 les/c/f=83/63/0 sis=84) [0] r=0 lpr=84 pi=[62,84)/2 crt=40'1059 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 84 pg[10.a( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=6 ec=53/34 lis/c=82/62 les/c/f=83/63/0 sis=84) [0] r=0 lpr=84 pi=[62,84)/2 crt=40'1059 mlcod 0'0 unknown mbc={}] exit Start 0.000005 0 0.000000
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 84 pg[10.a( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=6 ec=53/34 lis/c=82/62 les/c/f=83/63/0 sis=84) [0] r=0 lpr=84 pi=[62,84)/2 crt=40'1059 mlcod 0'0 unknown mbc={}] enter Started/Primary
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 84 pg[10.a( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=6 ec=53/34 lis/c=82/62 les/c/f=83/63/0 sis=84) [0] r=0 lpr=84 pi=[62,84)/2 crt=40'1059 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 84 pg[10.a( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=6 ec=53/34 lis/c=82/62 les/c/f=83/63/0 sis=84) [0] r=0 lpr=84 pi=[62,84)/2 crt=40'1059 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 84 pg[10.a( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=6 ec=53/34 lis/c=82/62 les/c/f=83/63/0 sis=84) [0] r=0 lpr=84 pi=[62,84)/2 crt=40'1059 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000033 1 0.000040
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 84 pg[10.a( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=6 ec=53/34 lis/c=82/62 les/c/f=83/63/0 sis=84) [0] r=0 lpr=84 pi=[62,84)/2 crt=40'1059 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Oct  9 10:05:05 compute-1 ceph-osd[7514]: merge_log_dups log.dups.size()=0 olog.dups.size()=15
Oct  9 10:05:05 compute-1 ceph-osd[7514]: end of merge_log_dups changed=1 log.dups.size()=0 olog.dups.size()=15
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 84 pg[10.1b( v 40'1059 (0'0,40'1059] local-lis/les=82/83 n=5 ec=53/34 lis/c=82/61 les/c/f=83/62/0 sis=84) [0] r=0 lpr=84 pi=[61,84)/1 crt=40'1059 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.001704 3 0.000020
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 84 pg[10.1b( v 40'1059 (0'0,40'1059] local-lis/les=82/83 n=5 ec=53/34 lis/c=82/61 les/c/f=83/62/0 sis=84) [0] r=0 lpr=84 pi=[61,84)/1 crt=40'1059 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 84 pg[10.1b( v 40'1059 (0'0,40'1059] local-lis/les=82/83 n=5 ec=53/34 lis/c=82/61 les/c/f=83/62/0 sis=84) [0] r=0 lpr=84 pi=[61,84)/1 crt=40'1059 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000004 0 0.000000
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 84 pg[10.1b( v 40'1059 (0'0,40'1059] local-lis/les=82/83 n=5 ec=53/34 lis/c=82/61 les/c/f=83/62/0 sis=84) [0] r=0 lpr=84 pi=[61,84)/1 crt=40'1059 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Oct  9 10:05:05 compute-1 ceph-osd[7514]: merge_log_dups log.dups.size()=0 olog.dups.size()=24
Oct  9 10:05:05 compute-1 ceph-osd[7514]: end of merge_log_dups changed=1 log.dups.size()=0 olog.dups.size()=24
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 84 pg[10.b( v 40'1059 (0'0,40'1059] local-lis/les=82/83 n=6 ec=53/34 lis/c=82/60 les/c/f=83/61/0 sis=84) [0] r=0 lpr=84 pi=[60,84)/1 crt=40'1059 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.002143 3 0.000046
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 84 pg[10.b( v 40'1059 (0'0,40'1059] local-lis/les=82/83 n=6 ec=53/34 lis/c=82/60 les/c/f=83/61/0 sis=84) [0] r=0 lpr=84 pi=[60,84)/1 crt=40'1059 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 84 pg[10.b( v 40'1059 (0'0,40'1059] local-lis/les=82/83 n=6 ec=53/34 lis/c=82/60 les/c/f=83/61/0 sis=84) [0] r=0 lpr=84 pi=[60,84)/1 crt=40'1059 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000003 0 0.000000
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 84 pg[10.b( v 40'1059 (0'0,40'1059] local-lis/les=82/83 n=6 ec=53/34 lis/c=82/60 les/c/f=83/61/0 sis=84) [0] r=0 lpr=84 pi=[60,84)/1 crt=40'1059 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Oct  9 10:05:05 compute-1 ceph-osd[7514]: merge_log_dups log.dups.size()=0 olog.dups.size()=25
Oct  9 10:05:05 compute-1 ceph-osd[7514]: end of merge_log_dups changed=1 log.dups.size()=0 olog.dups.size()=25
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 84 pg[10.1a( v 40'1059 (0'0,40'1059] local-lis/les=82/83 n=5 ec=53/34 lis/c=82/62 les/c/f=83/63/0 sis=84) [0] r=0 lpr=84 pi=[62,84)/2 crt=40'1059 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.001867 3 0.000021
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 84 pg[10.1a( v 40'1059 (0'0,40'1059] local-lis/les=82/83 n=5 ec=53/34 lis/c=82/62 les/c/f=83/63/0 sis=84) [0] r=0 lpr=84 pi=[62,84)/2 crt=40'1059 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 84 pg[10.1a( v 40'1059 (0'0,40'1059] local-lis/les=82/83 n=5 ec=53/34 lis/c=82/62 les/c/f=83/63/0 sis=84) [0] r=0 lpr=84 pi=[62,84)/2 crt=40'1059 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000003 0 0.000000
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 84 pg[10.1a( v 40'1059 (0'0,40'1059] local-lis/les=82/83 n=5 ec=53/34 lis/c=82/62 les/c/f=83/63/0 sis=84) [0] r=0 lpr=84 pi=[62,84)/2 crt=40'1059 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Oct  9 10:05:05 compute-1 ceph-osd[7514]: merge_log_dups log.dups.size()=0 olog.dups.size()=52
Oct  9 10:05:05 compute-1 ceph-osd[7514]: end of merge_log_dups changed=1 log.dups.size()=0 olog.dups.size()=52
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 84 pg[10.a( v 40'1059 (0'0,40'1059] local-lis/les=82/83 n=6 ec=53/34 lis/c=82/62 les/c/f=83/63/0 sis=84) [0] r=0 lpr=84 pi=[62,84)/2 crt=40'1059 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.001607 3 0.000040
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 84 pg[10.a( v 40'1059 (0'0,40'1059] local-lis/les=82/83 n=6 ec=53/34 lis/c=82/62 les/c/f=83/63/0 sis=84) [0] r=0 lpr=84 pi=[62,84)/2 crt=40'1059 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 84 pg[10.a( v 40'1059 (0'0,40'1059] local-lis/les=82/83 n=6 ec=53/34 lis/c=82/62 les/c/f=83/63/0 sis=84) [0] r=0 lpr=84 pi=[62,84)/2 crt=40'1059 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000002 0 0.000000
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 84 pg[10.a( v 40'1059 (0'0,40'1059] local-lis/les=82/83 n=6 ec=53/34 lis/c=82/62 les/c/f=83/63/0 sis=84) [0] r=0 lpr=84 pi=[62,84)/2 crt=40'1059 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Oct  9 10:05:05 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 79273984 unmapped: 4571136 heap: 83845120 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:05 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 11.12 scrub starts
Oct  9 10:05:05 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 11.12 scrub ok
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 84 handle_osd_map epochs [84,85], i have 84, src has [1,85]
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 85 handle_osd_map epochs [85,85], i have 85, src has [1,85]
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 85 pg[10.1a( v 40'1059 (0'0,40'1059] local-lis/les=82/83 n=5 ec=53/34 lis/c=82/62 les/c/f=83/63/0 sis=84) [0] r=0 lpr=84 pi=[62,84)/2 crt=40'1059 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.003913 2 0.000039
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 85 pg[10.1a( v 40'1059 (0'0,40'1059] local-lis/les=82/83 n=5 ec=53/34 lis/c=82/62 les/c/f=83/63/0 sis=84) [0] r=0 lpr=84 pi=[62,84)/2 crt=40'1059 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.005856 0 0.000000
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 85 pg[10.1a( v 40'1059 (0'0,40'1059] local-lis/les=82/83 n=5 ec=53/34 lis/c=82/62 les/c/f=83/63/0 sis=84) [0] r=0 lpr=84 pi=[62,84)/2 crt=40'1059 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 85 pg[10.1a( v 40'1059 (0'0,40'1059] local-lis/les=84/85 n=5 ec=53/34 lis/c=82/62 les/c/f=83/63/0 sis=84) [0] r=0 lpr=84 pi=[62,84)/2 crt=40'1059 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 85 pg[10.1b( v 40'1059 (0'0,40'1059] local-lis/les=82/83 n=5 ec=53/34 lis/c=82/61 les/c/f=83/62/0 sis=84) [0] r=0 lpr=84 pi=[61,84)/1 crt=40'1059 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.004278 2 0.000049
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 85 pg[10.1b( v 40'1059 (0'0,40'1059] local-lis/les=82/83 n=5 ec=53/34 lis/c=82/61 les/c/f=83/62/0 sis=84) [0] r=0 lpr=84 pi=[61,84)/1 crt=40'1059 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.006036 0 0.000000
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 85 pg[10.1b( v 40'1059 (0'0,40'1059] local-lis/les=82/83 n=5 ec=53/34 lis/c=82/61 les/c/f=83/62/0 sis=84) [0] r=0 lpr=84 pi=[61,84)/1 crt=40'1059 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 85 pg[10.1b( v 40'1059 (0'0,40'1059] local-lis/les=84/85 n=5 ec=53/34 lis/c=82/61 les/c/f=83/62/0 sis=84) [0] r=0 lpr=84 pi=[61,84)/1 crt=40'1059 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 85 pg[10.a( v 40'1059 (0'0,40'1059] local-lis/les=82/83 n=6 ec=53/34 lis/c=82/62 les/c/f=83/63/0 sis=84) [0] r=0 lpr=84 pi=[62,84)/2 crt=40'1059 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.004361 2 0.000044
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 85 pg[10.a( v 40'1059 (0'0,40'1059] local-lis/les=82/83 n=6 ec=53/34 lis/c=82/62 les/c/f=83/63/0 sis=84) [0] r=0 lpr=84 pi=[62,84)/2 crt=40'1059 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.006045 0 0.000000
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 85 pg[10.a( v 40'1059 (0'0,40'1059] local-lis/les=82/83 n=6 ec=53/34 lis/c=82/62 les/c/f=83/63/0 sis=84) [0] r=0 lpr=84 pi=[62,84)/2 crt=40'1059 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 85 pg[10.a( v 40'1059 (0'0,40'1059] local-lis/les=84/85 n=6 ec=53/34 lis/c=82/62 les/c/f=83/63/0 sis=84) [0] r=0 lpr=84 pi=[62,84)/2 crt=40'1059 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 85 pg[10.b( v 40'1059 (0'0,40'1059] local-lis/les=82/83 n=6 ec=53/34 lis/c=82/60 les/c/f=83/61/0 sis=84) [0] r=0 lpr=84 pi=[60,84)/1 crt=40'1059 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.004600 2 0.000040
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 85 pg[10.b( v 40'1059 (0'0,40'1059] local-lis/les=82/83 n=6 ec=53/34 lis/c=82/60 les/c/f=83/61/0 sis=84) [0] r=0 lpr=84 pi=[60,84)/1 crt=40'1059 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.006809 0 0.000000
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 85 pg[10.b( v 40'1059 (0'0,40'1059] local-lis/les=82/83 n=6 ec=53/34 lis/c=82/60 les/c/f=83/61/0 sis=84) [0] r=0 lpr=84 pi=[60,84)/1 crt=40'1059 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 85 pg[10.b( v 40'1059 (0'0,40'1059] local-lis/les=84/85 n=6 ec=53/34 lis/c=82/60 les/c/f=83/61/0 sis=84) [0] r=0 lpr=84 pi=[60,84)/1 crt=40'1059 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 85 pg[10.1a( v 40'1059 (0'0,40'1059] local-lis/les=84/85 n=5 ec=53/34 lis/c=82/62 les/c/f=83/63/0 sis=84) [0] r=0 lpr=84 pi=[62,84)/2 crt=40'1059 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 85 pg[10.1a( v 40'1059 (0'0,40'1059] local-lis/les=84/85 n=5 ec=53/34 lis/c=84/62 les/c/f=85/63/0 sis=84) [0] r=0 lpr=84 pi=[62,84)/2 crt=40'1059 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.000955 3 0.000111
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 85 pg[10.1a( v 40'1059 (0'0,40'1059] local-lis/les=84/85 n=5 ec=53/34 lis/c=84/62 les/c/f=85/63/0 sis=84) [0] r=0 lpr=84 pi=[62,84)/2 crt=40'1059 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 85 pg[10.1a( v 40'1059 (0'0,40'1059] local-lis/les=84/85 n=5 ec=53/34 lis/c=84/62 les/c/f=85/63/0 sis=84) [0] r=0 lpr=84 pi=[62,84)/2 crt=40'1059 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000110 0 0.000000
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 85 pg[10.1a( v 40'1059 (0'0,40'1059] local-lis/les=84/85 n=5 ec=53/34 lis/c=84/62 les/c/f=85/63/0 sis=84) [0] r=0 lpr=84 pi=[62,84)/2 crt=40'1059 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 85 pg[10.1b( v 40'1059 (0'0,40'1059] local-lis/les=84/85 n=5 ec=53/34 lis/c=82/61 les/c/f=83/62/0 sis=84) [0] r=0 lpr=84 pi=[61,84)/1 crt=40'1059 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 85 pg[10.a( v 40'1059 (0'0,40'1059] local-lis/les=84/85 n=6 ec=53/34 lis/c=82/62 les/c/f=83/63/0 sis=84) [0] r=0 lpr=84 pi=[62,84)/2 crt=40'1059 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 85 pg[10.a( v 40'1059 (0'0,40'1059] local-lis/les=84/85 n=6 ec=53/34 lis/c=84/62 les/c/f=85/63/0 sis=84) [0] r=0 lpr=84 pi=[62,84)/2 crt=40'1059 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.001211 3 0.000095
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 85 pg[10.a( v 40'1059 (0'0,40'1059] local-lis/les=84/85 n=6 ec=53/34 lis/c=84/62 les/c/f=85/63/0 sis=84) [0] r=0 lpr=84 pi=[62,84)/2 crt=40'1059 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 85 pg[10.a( v 40'1059 (0'0,40'1059] local-lis/les=84/85 n=6 ec=53/34 lis/c=84/62 les/c/f=85/63/0 sis=84) [0] r=0 lpr=84 pi=[62,84)/2 crt=40'1059 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000005 0 0.000000
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 85 pg[10.a( v 40'1059 (0'0,40'1059] local-lis/les=84/85 n=6 ec=53/34 lis/c=84/62 les/c/f=85/63/0 sis=84) [0] r=0 lpr=84 pi=[62,84)/2 crt=40'1059 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 85 pg[10.b( v 40'1059 (0'0,40'1059] local-lis/les=84/85 n=6 ec=53/34 lis/c=82/60 les/c/f=83/61/0 sis=84) [0] r=0 lpr=84 pi=[60,84)/1 crt=40'1059 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 85 pg[10.1b( v 40'1059 (0'0,40'1059] local-lis/les=84/85 n=5 ec=53/34 lis/c=84/61 les/c/f=85/62/0 sis=84) [0] r=0 lpr=84 pi=[61,84)/1 crt=40'1059 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.001815 4 0.000094
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 85 pg[10.1b( v 40'1059 (0'0,40'1059] local-lis/les=84/85 n=5 ec=53/34 lis/c=84/61 les/c/f=85/62/0 sis=84) [0] r=0 lpr=84 pi=[61,84)/1 crt=40'1059 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 85 pg[10.1b( v 40'1059 (0'0,40'1059] local-lis/les=84/85 n=5 ec=53/34 lis/c=84/61 les/c/f=85/62/0 sis=84) [0] r=0 lpr=84 pi=[61,84)/1 crt=40'1059 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000011 0 0.000000
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 85 pg[10.1b( v 40'1059 (0'0,40'1059] local-lis/les=84/85 n=5 ec=53/34 lis/c=84/61 les/c/f=85/62/0 sis=84) [0] r=0 lpr=84 pi=[61,84)/1 crt=40'1059 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 85 pg[10.b( v 40'1059 (0'0,40'1059] local-lis/les=84/85 n=6 ec=53/34 lis/c=84/60 les/c/f=85/61/0 sis=84) [0] r=0 lpr=84 pi=[60,84)/1 crt=40'1059 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.001311 4 0.000180
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 85 pg[10.b( v 40'1059 (0'0,40'1059] local-lis/les=84/85 n=6 ec=53/34 lis/c=84/60 les/c/f=85/61/0 sis=84) [0] r=0 lpr=84 pi=[60,84)/1 crt=40'1059 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 85 pg[10.b( v 40'1059 (0'0,40'1059] local-lis/les=84/85 n=6 ec=53/34 lis/c=84/60 les/c/f=85/61/0 sis=84) [0] r=0 lpr=84 pi=[60,84)/1 crt=40'1059 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000008 0 0.000000
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 85 pg[10.b( v 40'1059 (0'0,40'1059] local-lis/les=84/85 n=6 ec=53/34 lis/c=84/60 les/c/f=85/61/0 sis=84) [0] r=0 lpr=84 pi=[60,84)/1 crt=40'1059 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Oct  9 10:05:05 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 79290368 unmapped: 4554752 heap: 83845120 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 85 handle_osd_map epochs [85,85], i have 85, src has [1,85]
Oct  9 10:05:05 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 5.1f scrub starts
Oct  9 10:05:05 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 5.1f scrub ok
Oct  9 10:05:05 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 79306752 unmapped: 4538368 heap: 83845120 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 85 heartbeat osd_stat(store_statfs(0x4fca98000/0x0/0x4ffc00000, data 0x1019a5/0x182000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x2fdf9c1), peers [1,2] op hist [])
Oct  9 10:05:05 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 9.15 scrub starts
Oct  9 10:05:05 compute-1 ceph-osd[7514]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.634215355s of 10.720481873s, submitted: 146
Oct  9 10:05:05 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 9.15 scrub ok
Oct  9 10:05:05 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 79323136 unmapped: 4521984 heap: 83845120 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:05 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 5.1b scrub starts
Oct  9 10:05:05 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 5.1b scrub ok
Oct  9 10:05:05 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:05 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:05 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 79331328 unmapped: 4513792 heap: 83845120 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:05 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 744623 data_alloc: 218103808 data_used: 282624
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 85 handle_osd_map epochs [85,86], i have 85, src has [1,86]
Oct  9 10:05:05 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 8.14 scrub starts
Oct  9 10:05:05 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 8.14 scrub ok
Oct  9 10:05:05 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 79347712 unmapped: 4497408 heap: 83845120 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:05 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 5.18 scrub starts
Oct  9 10:05:05 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 5.18 scrub ok
Oct  9 10:05:05 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 79355904 unmapped: 4489216 heap: 83845120 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 86 handle_osd_map epochs [86,87], i have 86, src has [1,87]
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 87 pg[10.1c(unlocked)] enter Initial
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 87 pg[10.1c( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=68/68 les/c/f=69/69/0 sis=86) [0] r=0 lpr=0 pi=[68,86)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000046 0 0.000000
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 87 pg[10.1c( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=68/68 les/c/f=69/69/0 sis=86) [0] r=0 lpr=0 pi=[68,86)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 87 pg[10.1c( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=68/68 les/c/f=69/69/0 sis=86) [0] r=0 lpr=87 pi=[68,86)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000010 1 0.000021
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 87 pg[10.1c( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=68/68 les/c/f=69/69/0 sis=86) [0] r=0 lpr=87 pi=[68,86)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 87 pg[10.1c( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=68/68 les/c/f=69/69/0 sis=86) [0] r=0 lpr=87 pi=[68,86)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 87 pg[10.1c( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=68/68 les/c/f=69/69/0 sis=86) [0] r=0 lpr=87 pi=[68,86)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 87 pg[10.1c( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=68/68 les/c/f=69/69/0 sis=86) [0] r=0 lpr=87 pi=[68,86)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000006 0 0.000000
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 87 pg[10.1c( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=68/68 les/c/f=69/69/0 sis=86) [0] r=0 lpr=87 pi=[68,86)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 87 pg[10.1c( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=68/68 les/c/f=69/69/0 sis=86) [0] r=0 lpr=87 pi=[68,86)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 87 pg[10.1c( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=68/68 les/c/f=69/69/0 sis=86) [0] r=0 lpr=87 pi=[68,86)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 87 pg[10.1c( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=68/68 les/c/f=69/69/0 sis=86) [0] r=0 lpr=87 pi=[68,86)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000163 1 0.000113
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 87 pg[10.1c( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=68/68 les/c/f=69/69/0 sis=86) [0] r=0 lpr=87 pi=[68,86)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 87 pg[10.1c( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=68/68 les/c/f=69/69/0 sis=86) [0] r=0 lpr=87 pi=[68,86)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000024 0 0.000000
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 87 pg[10.1c( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=68/68 les/c/f=69/69/0 sis=86) [0] r=0 lpr=87 pi=[68,86)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.000198 0 0.000000
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 87 pg[10.1c( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=68/68 les/c/f=69/69/0 sis=86) [0] r=0 lpr=87 pi=[68,86)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/WaitActingChange
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 87 pg[10.c(unlocked)] enter Initial
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 87 pg[10.c( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=67/67 les/c/f=68/68/0 sis=86) [0] r=0 lpr=0 pi=[67,86)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000076 0 0.000000
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 87 pg[10.c( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=67/67 les/c/f=68/68/0 sis=86) [0] r=0 lpr=0 pi=[67,86)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 87 pg[10.c( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=67/67 les/c/f=68/68/0 sis=86) [0] r=0 lpr=87 pi=[67,86)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000015 1 0.000031
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 87 pg[10.c( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=67/67 les/c/f=68/68/0 sis=86) [0] r=0 lpr=87 pi=[67,86)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 87 pg[10.c( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=67/67 les/c/f=68/68/0 sis=86) [0] r=0 lpr=87 pi=[67,86)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 87 pg[10.c( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=67/67 les/c/f=68/68/0 sis=86) [0] r=0 lpr=87 pi=[67,86)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 87 pg[10.c( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=67/67 les/c/f=68/68/0 sis=86) [0] r=0 lpr=87 pi=[67,86)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000010 0 0.000000
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 87 pg[10.c( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=67/67 les/c/f=68/68/0 sis=86) [0] r=0 lpr=87 pi=[67,86)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 87 pg[10.c( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=67/67 les/c/f=68/68/0 sis=86) [0] r=0 lpr=87 pi=[67,86)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 87 pg[10.c( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=67/67 les/c/f=68/68/0 sis=86) [0] r=0 lpr=87 pi=[67,86)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 87 pg[10.c( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=67/67 les/c/f=68/68/0 sis=86) [0] r=0 lpr=87 pi=[67,86)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000116 1 0.000050
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 87 pg[10.c( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=67/67 les/c/f=68/68/0 sis=86) [0] r=0 lpr=87 pi=[67,86)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 87 pg[10.c( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=67/67 les/c/f=68/68/0 sis=86) [0] r=0 lpr=87 pi=[67,86)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000044 0 0.000000
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 87 pg[10.c( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=67/67 les/c/f=68/68/0 sis=86) [0] r=0 lpr=87 pi=[67,86)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.000182 0 0.000000
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 87 pg[10.c( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=67/67 les/c/f=68/68/0 sis=86) [0] r=0 lpr=87 pi=[67,86)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/WaitActingChange
Oct  9 10:05:05 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 3.1c scrub starts
Oct  9 10:05:05 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 3.1c scrub ok
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 87 pg[10.1d(unlocked)] enter Initial
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 87 pg[10.1d( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=69/69 les/c/f=70/70/0 sis=87) [0] r=0 lpr=0 pi=[69,87)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000059 0 0.000000
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 87 pg[10.1d( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=69/69 les/c/f=70/70/0 sis=87) [0] r=0 lpr=0 pi=[69,87)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 87 pg[10.1d( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=69/69 les/c/f=70/70/0 sis=87) [0] r=0 lpr=87 pi=[69,87)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000224 1 0.000027
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 87 pg[10.1d( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=69/69 les/c/f=70/70/0 sis=87) [0] r=0 lpr=87 pi=[69,87)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 87 pg[10.1d( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=69/69 les/c/f=70/70/0 sis=87) [0] r=0 lpr=87 pi=[69,87)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 87 pg[10.1d( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=69/69 les/c/f=70/70/0 sis=87) [0] r=0 lpr=87 pi=[69,87)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 87 pg[10.1d( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=69/69 les/c/f=70/70/0 sis=87) [0] r=0 lpr=87 pi=[69,87)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000005 0 0.000000
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 87 pg[10.1d( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=69/69 les/c/f=70/70/0 sis=87) [0] r=0 lpr=87 pi=[69,87)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 87 pg[10.1d( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=69/69 les/c/f=70/70/0 sis=87) [0] r=0 lpr=87 pi=[69,87)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 87 pg[10.1d( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=69/69 les/c/f=70/70/0 sis=87) [0] r=0 lpr=87 pi=[69,87)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 87 pg[10.1d( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=69/69 les/c/f=70/70/0 sis=87) [0] r=0 lpr=87 pi=[69,87)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000179 1 0.000260
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 87 pg[10.1d( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=69/69 les/c/f=70/70/0 sis=87) [0] r=0 lpr=87 pi=[69,87)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 87 pg[10.1d( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=69/69 les/c/f=70/70/0 sis=87) [0] r=0 lpr=87 pi=[69,87)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000028 0 0.000000
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 87 pg[10.1d( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=69/69 les/c/f=70/70/0 sis=87) [0] r=0 lpr=87 pi=[69,87)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.000219 0 0.000000
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 87 pg[10.1d( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=69/69 les/c/f=70/70/0 sis=87) [0] r=0 lpr=87 pi=[69,87)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/WaitActingChange
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 87 pg[10.d(unlocked)] enter Initial
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 87 pg[10.d( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=69/69 les/c/f=70/70/0 sis=87) [0] r=0 lpr=0 pi=[69,87)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000081 0 0.000000
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 87 pg[10.d( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=69/69 les/c/f=70/70/0 sis=87) [0] r=0 lpr=0 pi=[69,87)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 87 pg[10.d( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=69/69 les/c/f=70/70/0 sis=87) [0] r=0 lpr=87 pi=[69,87)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000006 1 0.000015
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 87 pg[10.d( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=69/69 les/c/f=70/70/0 sis=87) [0] r=0 lpr=87 pi=[69,87)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 87 pg[10.d( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=69/69 les/c/f=70/70/0 sis=87) [0] r=0 lpr=87 pi=[69,87)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 87 pg[10.d( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=69/69 les/c/f=70/70/0 sis=87) [0] r=0 lpr=87 pi=[69,87)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 87 pg[10.d( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=69/69 les/c/f=70/70/0 sis=87) [0] r=0 lpr=87 pi=[69,87)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000009 0 0.000000
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 87 pg[10.d( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=69/69 les/c/f=70/70/0 sis=87) [0] r=0 lpr=87 pi=[69,87)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 87 pg[10.d( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=69/69 les/c/f=70/70/0 sis=87) [0] r=0 lpr=87 pi=[69,87)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 87 pg[10.d( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=69/69 les/c/f=70/70/0 sis=87) [0] r=0 lpr=87 pi=[69,87)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 87 pg[10.d( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=69/69 les/c/f=70/70/0 sis=87) [0] r=0 lpr=87 pi=[69,87)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000055 1 0.000032
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 87 pg[10.d( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=69/69 les/c/f=70/70/0 sis=87) [0] r=0 lpr=87 pi=[69,87)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 87 pg[10.d( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=69/69 les/c/f=70/70/0 sis=87) [0] r=0 lpr=87 pi=[69,87)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000014 0 0.000000
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 87 pg[10.d( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=69/69 les/c/f=70/70/0 sis=87) [0] r=0 lpr=87 pi=[69,87)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.000078 0 0.000000
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 87 pg[10.d( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=69/69 les/c/f=70/70/0 sis=87) [0] r=0 lpr=87 pi=[69,87)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/WaitActingChange
Oct  9 10:05:05 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 79372288 unmapped: 4472832 heap: 83845120 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 87 handle_osd_map epochs [87,88], i have 87, src has [1,88]
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 88 pg[10.1d( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=69/69 les/c/f=70/70/0 sis=87) [0] r=0 lpr=87 pi=[69,87)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary/WaitActingChange 0.446883 2 0.000050
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 88 pg[10.1d( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=69/69 les/c/f=70/70/0 sis=87) [0] r=0 lpr=87 pi=[69,87)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary 0.447127 0 0.000000
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 88 pg[10.1d( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=69/69 les/c/f=70/70/0 sis=87) [0] r=0 lpr=87 pi=[69,87)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started 0.447148 0 0.000000
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 88 pg[10.1d( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=69/69 les/c/f=70/70/0 sis=87) [0] r=0 lpr=87 pi=[69,87)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 88 pg[10.1d( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=69/69 les/c/f=70/70/0 sis=88) [0]/[1] r=-1 lpr=88 pi=[69,88)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 88 pg[10.1d( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=69/69 les/c/f=70/70/0 sis=88) [0]/[1] r=-1 lpr=88 pi=[69,88)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] exit Reset 0.000055 1 0.000087
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 88 pg[10.1d( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=69/69 les/c/f=70/70/0 sis=88) [0]/[1] r=-1 lpr=88 pi=[69,88)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Started
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 88 pg[10.1d( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=69/69 les/c/f=70/70/0 sis=88) [0]/[1] r=-1 lpr=88 pi=[69,88)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Start
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 88 pg[10.1d( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=69/69 les/c/f=70/70/0 sis=88) [0]/[1] r=-1 lpr=88 pi=[69,88)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 88 pg[10.1d( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=69/69 les/c/f=70/70/0 sis=88) [0]/[1] r=-1 lpr=88 pi=[69,88)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] exit Start 0.000005 0 0.000000
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 88 pg[10.1d( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=69/69 les/c/f=70/70/0 sis=88) [0]/[1] r=-1 lpr=88 pi=[69,88)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Started/Stray
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 88 pg[10.c( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=67/67 les/c/f=68/68/0 sis=86) [0] r=0 lpr=87 pi=[67,86)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary/WaitActingChange 1.010575 2 0.000078
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 88 pg[10.c( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=67/67 les/c/f=68/68/0 sis=86) [0] r=0 lpr=87 pi=[67,86)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary 1.010785 0 0.000000
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 88 pg[10.c( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=67/67 les/c/f=68/68/0 sis=86) [0] r=0 lpr=87 pi=[67,86)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started 1.010818 0 0.000000
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 88 pg[10.c( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=67/67 les/c/f=68/68/0 sis=86) [0] r=0 lpr=87 pi=[67,86)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 88 pg[10.c( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=67/67 les/c/f=68/68/0 sis=88) [0]/[2] r=-1 lpr=88 pi=[67,88)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 88 handle_osd_map epochs [88,88], i have 88, src has [1,88]
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 88 pg[10.c( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=67/67 les/c/f=68/68/0 sis=88) [0]/[2] r=-1 lpr=88 pi=[67,88)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] exit Reset 0.000185 1 0.000238
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 88 pg[10.c( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=67/67 les/c/f=68/68/0 sis=88) [0]/[2] r=-1 lpr=88 pi=[67,88)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Started
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 88 pg[10.c( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=67/67 les/c/f=68/68/0 sis=88) [0]/[2] r=-1 lpr=88 pi=[67,88)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Start
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 88 pg[10.c( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=67/67 les/c/f=68/68/0 sis=88) [0]/[2] r=-1 lpr=88 pi=[67,88)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 88 pg[10.d( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=69/69 les/c/f=70/70/0 sis=87) [0] r=0 lpr=87 pi=[69,87)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary/WaitActingChange 0.447413 2 0.000035
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 88 pg[10.d( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=69/69 les/c/f=70/70/0 sis=87) [0] r=0 lpr=87 pi=[69,87)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary 0.447606 0 0.000000
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 88 pg[10.d( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=69/69 les/c/f=70/70/0 sis=87) [0] r=0 lpr=87 pi=[69,87)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started 0.447631 0 0.000000
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 88 pg[10.d( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=69/69 les/c/f=70/70/0 sis=87) [0] r=0 lpr=87 pi=[69,87)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 88 pg[10.c( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=67/67 les/c/f=68/68/0 sis=88) [0]/[2] r=-1 lpr=88 pi=[67,88)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] exit Start 0.000092 0 0.000000
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 88 pg[10.c( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=67/67 les/c/f=68/68/0 sis=88) [0]/[2] r=-1 lpr=88 pi=[67,88)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Started/Stray
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 88 pg[10.d( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=69/69 les/c/f=70/70/0 sis=88) [0]/[1] r=-1 lpr=88 pi=[69,88)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 88 pg[10.d( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=69/69 les/c/f=70/70/0 sis=88) [0]/[1] r=-1 lpr=88 pi=[69,88)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] exit Reset 0.000120 1 0.000246
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 88 pg[10.d( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=69/69 les/c/f=70/70/0 sis=88) [0]/[1] r=-1 lpr=88 pi=[69,88)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Started
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 88 pg[10.d( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=69/69 les/c/f=70/70/0 sis=88) [0]/[1] r=-1 lpr=88 pi=[69,88)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Start
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 88 pg[10.d( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=69/69 les/c/f=70/70/0 sis=88) [0]/[1] r=-1 lpr=88 pi=[69,88)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 88 pg[10.d( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=69/69 les/c/f=70/70/0 sis=88) [0]/[1] r=-1 lpr=88 pi=[69,88)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] exit Start 0.000004 0 0.000000
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 88 pg[10.d( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=69/69 les/c/f=70/70/0 sis=88) [0]/[1] r=-1 lpr=88 pi=[69,88)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Started/Stray
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 88 pg[10.1c( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=68/68 les/c/f=69/69/0 sis=86) [0] r=0 lpr=87 pi=[68,86)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary/WaitActingChange 1.012179 3 0.000044
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 88 pg[10.1c( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=68/68 les/c/f=69/69/0 sis=86) [0] r=0 lpr=87 pi=[68,86)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary 1.012400 0 0.000000
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 88 pg[10.1c( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=68/68 les/c/f=69/69/0 sis=86) [0] r=0 lpr=87 pi=[68,86)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started 1.012418 0 0.000000
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 88 pg[10.1c( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=68/68 les/c/f=69/69/0 sis=86) [0] r=0 lpr=87 pi=[68,86)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 88 pg[10.1c( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=68/68 les/c/f=69/69/0 sis=88) [0]/[2] r=-1 lpr=88 pi=[68,88)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 88 pg[10.1c( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=68/68 les/c/f=69/69/0 sis=88) [0]/[2] r=-1 lpr=88 pi=[68,88)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] exit Reset 0.000164 1 0.000336
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 88 pg[10.1c( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=68/68 les/c/f=69/69/0 sis=88) [0]/[2] r=-1 lpr=88 pi=[68,88)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Started
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 88 pg[10.1c( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=68/68 les/c/f=69/69/0 sis=88) [0]/[2] r=-1 lpr=88 pi=[68,88)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Start
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 88 pg[10.1c( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=68/68 les/c/f=69/69/0 sis=88) [0]/[2] r=-1 lpr=88 pi=[68,88)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 88 pg[10.1c( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=68/68 les/c/f=69/69/0 sis=88) [0]/[2] r=-1 lpr=88 pi=[68,88)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] exit Start 0.000003 0 0.000000
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 88 pg[10.1c( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=68/68 les/c/f=69/69/0 sis=88) [0]/[2] r=-1 lpr=88 pi=[68,88)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Started/Stray
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 88 handle_osd_map epochs [88,88], i have 88, src has [1,88]
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 88 heartbeat osd_stat(store_statfs(0x4fca93000/0x0/0x4ffc00000, data 0x105eb3/0x188000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x2fdf9c1), peers [1,2] op hist [])
Oct  9 10:05:05 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 5.15 scrub starts
Oct  9 10:05:05 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 5.15 scrub ok
Oct  9 10:05:05 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 79388672 unmapped: 4456448 heap: 83845120 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 88 handle_osd_map epochs [88,89], i have 88, src has [1,89]
Oct  9 10:05:05 compute-1 ceph-osd[7514]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Oct  9 10:05:05 compute-1 ceph-osd[7514]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 89 pg[10.1c( v 40'1059 lc 0'0 (0'0,40'1059] local-lis/les=0/0 n=7 ec=53/34 lis/c=68/68 les/c/f=69/69/0 sis=88) [0]/[2] r=-1 lpr=88 pi=[68,88)/1 crt=40'1059 mlcod 0'0 remapped NOTIFY m=7 mbc={}] exit Started/Stray 1.000920 6 0.000041
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 89 pg[10.1c( v 40'1059 lc 0'0 (0'0,40'1059] local-lis/les=0/0 n=7 ec=53/34 lis/c=68/68 les/c/f=69/69/0 sis=88) [0]/[2] r=-1 lpr=88 pi=[68,88)/1 crt=40'1059 mlcod 0'0 remapped NOTIFY m=7 mbc={}] enter Started/ReplicaActive
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 89 pg[10.1c( v 40'1059 lc 0'0 (0'0,40'1059] local-lis/les=0/0 n=7 ec=53/34 lis/c=68/68 les/c/f=69/69/0 sis=88) [0]/[2] r=-1 lpr=88 pi=[68,88)/1 crt=40'1059 mlcod 0'0 remapped NOTIFY m=7 mbc={}] enter Started/ReplicaActive/RepNotRecovering
Oct  9 10:05:05 compute-1 ceph-osd[7514]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Oct  9 10:05:05 compute-1 ceph-osd[7514]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 89 pg[10.c( v 40'1059 lc 0'0 (0'0,40'1059] local-lis/les=0/0 n=6 ec=53/34 lis/c=67/67 les/c/f=68/68/0 sis=88) [0]/[2] r=-1 lpr=88 pi=[67,88)/1 crt=40'1059 mlcod 0'0 remapped NOTIFY m=5 mbc={}] exit Started/Stray 1.002443 6 0.000229
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 89 pg[10.c( v 40'1059 lc 0'0 (0'0,40'1059] local-lis/les=0/0 n=6 ec=53/34 lis/c=67/67 les/c/f=68/68/0 sis=88) [0]/[2] r=-1 lpr=88 pi=[67,88)/1 crt=40'1059 mlcod 0'0 remapped NOTIFY m=5 mbc={}] enter Started/ReplicaActive
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 89 pg[10.c( v 40'1059 lc 0'0 (0'0,40'1059] local-lis/les=0/0 n=6 ec=53/34 lis/c=67/67 les/c/f=68/68/0 sis=88) [0]/[2] r=-1 lpr=88 pi=[67,88)/1 crt=40'1059 mlcod 0'0 remapped NOTIFY m=5 mbc={}] enter Started/ReplicaActive/RepNotRecovering
Oct  9 10:05:05 compute-1 ceph-osd[7514]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Oct  9 10:05:05 compute-1 ceph-osd[7514]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 89 pg[10.1d( v 40'1059 lc 0'0 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=69/69 les/c/f=70/70/0 sis=88) [0]/[1] r=-1 lpr=88 pi=[69,88)/1 crt=40'1059 mlcod 0'0 remapped NOTIFY m=5 mbc={}] exit Started/Stray 1.004712 6 0.000059
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 89 pg[10.1d( v 40'1059 lc 0'0 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=69/69 les/c/f=70/70/0 sis=88) [0]/[1] r=-1 lpr=88 pi=[69,88)/1 crt=40'1059 mlcod 0'0 remapped NOTIFY m=5 mbc={}] enter Started/ReplicaActive
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 89 pg[10.1d( v 40'1059 lc 0'0 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=69/69 les/c/f=70/70/0 sis=88) [0]/[1] r=-1 lpr=88 pi=[69,88)/1 crt=40'1059 mlcod 0'0 remapped NOTIFY m=5 mbc={}] enter Started/ReplicaActive/RepNotRecovering
Oct  9 10:05:05 compute-1 ceph-osd[7514]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Oct  9 10:05:05 compute-1 ceph-osd[7514]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 89 pg[10.d( v 40'1059 lc 0'0 (0'0,40'1059] local-lis/les=0/0 n=6 ec=53/34 lis/c=69/69 les/c/f=70/70/0 sis=88) [0]/[1] r=-1 lpr=88 pi=[69,88)/1 crt=40'1059 mlcod 0'0 remapped NOTIFY m=8 mbc={}] exit Started/Stray 1.003890 6 0.000034
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 89 pg[10.d( v 40'1059 lc 0'0 (0'0,40'1059] local-lis/les=0/0 n=6 ec=53/34 lis/c=69/69 les/c/f=70/70/0 sis=88) [0]/[1] r=-1 lpr=88 pi=[69,88)/1 crt=40'1059 mlcod 0'0 remapped NOTIFY m=8 mbc={}] enter Started/ReplicaActive
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 89 pg[10.d( v 40'1059 lc 0'0 (0'0,40'1059] local-lis/les=0/0 n=6 ec=53/34 lis/c=69/69 les/c/f=70/70/0 sis=88) [0]/[1] r=-1 lpr=88 pi=[69,88)/1 crt=40'1059 mlcod 0'0 remapped NOTIFY m=8 mbc={}] enter Started/ReplicaActive/RepNotRecovering
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 89 pg[10.1c( v 40'1059 lc 40'355 (0'0,40'1059] local-lis/les=0/0 n=7 ec=53/34 lis/c=88/68 les/c/f=89/69/0 sis=88) [0]/[2] r=-1 lpr=88 pi=[68,88)/1 luod=0'0 crt=40'1059 lcod 0'0 mlcod 0'0 active+remapped m=7 mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.002905 3 0.000107
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 89 pg[10.1c( v 40'1059 lc 40'355 (0'0,40'1059] local-lis/les=0/0 n=7 ec=53/34 lis/c=88/68 les/c/f=89/69/0 sis=88) [0]/[2] r=-1 lpr=88 pi=[68,88)/1 luod=0'0 crt=40'1059 lcod 0'0 mlcod 0'0 active+remapped m=7 mbc={}] enter Started/ReplicaActive/RepWaitRecoveryReserved
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 89 pg[10.1c( v 40'1059 lc 40'355 (0'0,40'1059] local-lis/les=0/0 n=7 ec=53/34 lis/c=88/68 les/c/f=89/69/0 sis=88) [0]/[2] r=-1 lpr=88 pi=[68,88)/1 luod=0'0 crt=40'1059 lcod 0'0 mlcod 0'0 active+remapped m=7 mbc={}] exit Started/ReplicaActive/RepWaitRecoveryReserved 0.000026 1 0.000045
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 89 pg[10.1c( v 40'1059 lc 40'355 (0'0,40'1059] local-lis/les=0/0 n=7 ec=53/34 lis/c=88/68 les/c/f=89/69/0 sis=88) [0]/[2] r=-1 lpr=88 pi=[68,88)/1 luod=0'0 crt=40'1059 lcod 0'0 mlcod 0'0 active+remapped m=7 mbc={}] enter Started/ReplicaActive/RepRecovering
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 89 pg[10.1d( v 40'1059 lc 40'443 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=88/69 les/c/f=89/70/0 sis=88) [0]/[1] r=-1 lpr=88 pi=[69,88)/1 luod=0'0 crt=40'1059 lcod 0'0 mlcod 0'0 active+remapped m=5 mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.002774 3 0.000063
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 89 pg[10.1d( v 40'1059 lc 40'443 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=88/69 les/c/f=89/70/0 sis=88) [0]/[1] r=-1 lpr=88 pi=[69,88)/1 luod=0'0 crt=40'1059 lcod 0'0 mlcod 0'0 active+remapped m=5 mbc={}] enter Started/ReplicaActive/RepWaitRecoveryReserved
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 89 pg[10.1c( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=7 ec=53/34 lis/c=88/68 les/c/f=89/69/0 sis=88) [0]/[2] r=-1 lpr=88 pi=[68,88)/1 luod=0'0 crt=40'1059 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive/RepRecovering 0.049967 1 0.000018
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 89 pg[10.1c( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=7 ec=53/34 lis/c=88/68 les/c/f=89/69/0 sis=88) [0]/[2] r=-1 lpr=88 pi=[68,88)/1 luod=0'0 crt=40'1059 mlcod 0'0 active+remapped mbc={}] enter Started/ReplicaActive/RepNotRecovering
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 89 pg[10.c( v 40'1059 lc 40'221 (0'0,40'1059] local-lis/les=0/0 n=6 ec=53/34 lis/c=88/67 les/c/f=89/68/0 sis=88) [0]/[2] r=-1 lpr=88 pi=[67,88)/1 luod=0'0 crt=40'1059 lcod 0'0 mlcod 0'0 active+remapped m=5 mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.052357 3 0.000054
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 89 pg[10.c( v 40'1059 lc 40'221 (0'0,40'1059] local-lis/les=0/0 n=6 ec=53/34 lis/c=88/67 les/c/f=89/68/0 sis=88) [0]/[2] r=-1 lpr=88 pi=[67,88)/1 luod=0'0 crt=40'1059 lcod 0'0 mlcod 0'0 active+remapped m=5 mbc={}] enter Started/ReplicaActive/RepWaitRecoveryReserved
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 89 pg[10.1d( v 40'1059 lc 40'443 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=88/69 les/c/f=89/70/0 sis=88) [0]/[1] r=-1 lpr=88 pi=[69,88)/1 luod=0'0 crt=40'1059 lcod 0'0 mlcod 0'0 active+remapped m=5 mbc={}] exit Started/ReplicaActive/RepWaitRecoveryReserved 0.048178 1 0.000039
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 89 pg[10.1d( v 40'1059 lc 40'443 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=88/69 les/c/f=89/70/0 sis=88) [0]/[1] r=-1 lpr=88 pi=[69,88)/1 luod=0'0 crt=40'1059 lcod 0'0 mlcod 0'0 active+remapped m=5 mbc={}] enter Started/ReplicaActive/RepRecovering
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 89 pg[10.1d( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=88/69 les/c/f=89/70/0 sis=88) [0]/[1] r=-1 lpr=88 pi=[69,88)/1 luod=0'0 crt=40'1059 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive/RepRecovering 0.035778 1 0.000073
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 89 pg[10.1d( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=88/69 les/c/f=89/70/0 sis=88) [0]/[1] r=-1 lpr=88 pi=[69,88)/1 luod=0'0 crt=40'1059 mlcod 0'0 active+remapped mbc={}] enter Started/ReplicaActive/RepNotRecovering
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 89 pg[10.c( v 40'1059 lc 40'221 (0'0,40'1059] local-lis/les=0/0 n=6 ec=53/34 lis/c=88/67 les/c/f=89/68/0 sis=88) [0]/[2] r=-1 lpr=88 pi=[67,88)/1 luod=0'0 crt=40'1059 lcod 0'0 mlcod 0'0 active+remapped m=5 mbc={}] exit Started/ReplicaActive/RepWaitRecoveryReserved 0.035942 1 0.000018
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 89 pg[10.d( v 40'1059 lc 40'278 (0'0,40'1059] local-lis/les=0/0 n=6 ec=53/34 lis/c=88/69 les/c/f=89/70/0 sis=88) [0]/[1] r=-1 lpr=88 pi=[69,88)/1 luod=0'0 crt=40'1059 lcod 0'0 mlcod 0'0 active+remapped m=8 mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.086813 3 0.000096
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 89 pg[10.d( v 40'1059 lc 40'278 (0'0,40'1059] local-lis/les=0/0 n=6 ec=53/34 lis/c=88/69 les/c/f=89/70/0 sis=88) [0]/[1] r=-1 lpr=88 pi=[69,88)/1 luod=0'0 crt=40'1059 lcod 0'0 mlcod 0'0 active+remapped m=8 mbc={}] enter Started/ReplicaActive/RepWaitRecoveryReserved
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 89 pg[10.c( v 40'1059 lc 40'221 (0'0,40'1059] local-lis/les=0/0 n=6 ec=53/34 lis/c=88/67 les/c/f=89/68/0 sis=88) [0]/[2] r=-1 lpr=88 pi=[67,88)/1 luod=0'0 crt=40'1059 lcod 0'0 mlcod 0'0 active+remapped m=5 mbc={}] enter Started/ReplicaActive/RepRecovering
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 89 pg[10.c( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=6 ec=53/34 lis/c=88/67 les/c/f=89/68/0 sis=88) [0]/[2] r=-1 lpr=88 pi=[67,88)/1 luod=0'0 crt=40'1059 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive/RepRecovering 0.036259 1 0.000142
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 89 pg[10.c( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=6 ec=53/34 lis/c=88/67 les/c/f=89/68/0 sis=88) [0]/[2] r=-1 lpr=88 pi=[67,88)/1 luod=0'0 crt=40'1059 mlcod 0'0 active+remapped mbc={}] enter Started/ReplicaActive/RepNotRecovering
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 89 pg[10.d( v 40'1059 lc 40'278 (0'0,40'1059] local-lis/les=0/0 n=6 ec=53/34 lis/c=88/69 les/c/f=89/70/0 sis=88) [0]/[1] r=-1 lpr=88 pi=[69,88)/1 luod=0'0 crt=40'1059 lcod 0'0 mlcod 0'0 active+remapped m=8 mbc={}] exit Started/ReplicaActive/RepWaitRecoveryReserved 0.036350 1 0.000175
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 89 pg[10.d( v 40'1059 lc 40'278 (0'0,40'1059] local-lis/les=0/0 n=6 ec=53/34 lis/c=88/69 les/c/f=89/70/0 sis=88) [0]/[1] r=-1 lpr=88 pi=[69,88)/1 luod=0'0 crt=40'1059 lcod 0'0 mlcod 0'0 active+remapped m=8 mbc={}] enter Started/ReplicaActive/RepRecovering
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 89 pg[10.d( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=6 ec=53/34 lis/c=88/69 les/c/f=89/70/0 sis=88) [0]/[1] r=-1 lpr=88 pi=[69,88)/1 luod=0'0 crt=40'1059 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive/RepRecovering 0.056873 1 0.000068
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 89 pg[10.d( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=6 ec=53/34 lis/c=88/69 les/c/f=89/70/0 sis=88) [0]/[1] r=-1 lpr=88 pi=[69,88)/1 luod=0'0 crt=40'1059 mlcod 0'0 active+remapped mbc={}] enter Started/ReplicaActive/RepNotRecovering
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 89 heartbeat osd_stat(store_statfs(0x4fca8f000/0x0/0x4ffc00000, data 0x107f1d/0x18b000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x2fdf9c1), peers [1,2] op hist [])
Oct  9 10:05:05 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 11.1b deep-scrub starts
Oct  9 10:05:05 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 11.1b deep-scrub ok
Oct  9 10:05:05 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:05 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:05 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 79519744 unmapped: 5373952 heap: 84893696 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:05 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 807366 data_alloc: 218103808 data_used: 282624
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 89 handle_osd_map epochs [89,90], i have 89, src has [1,90]
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 90 pg[10.1d( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=88/69 les/c/f=89/70/0 sis=88) [0]/[1] r=-1 lpr=88 pi=[69,88)/1 luod=0'0 crt=40'1059 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.923776 1 0.000040
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 90 pg[10.1d( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=88/69 les/c/f=89/70/0 sis=88) [0]/[1] r=-1 lpr=88 pi=[69,88)/1 luod=0'0 crt=40'1059 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive 1.010630 0 0.000000
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 90 pg[10.1d( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=88/69 les/c/f=89/70/0 sis=88) [0]/[1] r=-1 lpr=88 pi=[69,88)/1 luod=0'0 crt=40'1059 mlcod 0'0 active+remapped mbc={}] exit Started 2.015371 0 0.000000
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 90 pg[10.1d( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=88/69 les/c/f=89/70/0 sis=88) [0]/[1] r=-1 lpr=88 pi=[69,88)/1 luod=0'0 crt=40'1059 mlcod 0'0 active+remapped mbc={}] enter Reset
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 90 pg[10.1d( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=88/69 les/c/f=89/70/0 sis=90) [0] r=0 lpr=90 pi=[69,90)/1 luod=0'0 crt=40'1059 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 90 pg[10.1d( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=88/69 les/c/f=89/70/0 sis=90) [0] r=0 lpr=90 pi=[69,90)/1 crt=40'1059 mlcod 0'0 unknown mbc={}] exit Reset 0.000055 1 0.000088
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 90 pg[10.1d( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=88/69 les/c/f=89/70/0 sis=90) [0] r=0 lpr=90 pi=[69,90)/1 crt=40'1059 mlcod 0'0 unknown mbc={}] enter Started
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 90 pg[10.1d( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=88/69 les/c/f=89/70/0 sis=90) [0] r=0 lpr=90 pi=[69,90)/1 crt=40'1059 mlcod 0'0 unknown mbc={}] enter Start
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 90 pg[10.1d( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=88/69 les/c/f=89/70/0 sis=90) [0] r=0 lpr=90 pi=[69,90)/1 crt=40'1059 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 90 pg[10.1d( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=88/69 les/c/f=89/70/0 sis=90) [0] r=0 lpr=90 pi=[69,90)/1 crt=40'1059 mlcod 0'0 unknown mbc={}] exit Start 0.000005 0 0.000000
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 90 pg[10.1d( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=88/69 les/c/f=89/70/0 sis=90) [0] r=0 lpr=90 pi=[69,90)/1 crt=40'1059 mlcod 0'0 unknown mbc={}] enter Started/Primary
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 90 pg[10.1d( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=88/69 les/c/f=89/70/0 sis=90) [0] r=0 lpr=90 pi=[69,90)/1 crt=40'1059 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 90 pg[10.1d( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=88/69 les/c/f=89/70/0 sis=90) [0] r=0 lpr=90 pi=[69,90)/1 crt=40'1059 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 90 pg[10.c( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=6 ec=53/34 lis/c=88/67 les/c/f=89/68/0 sis=88) [0]/[2] r=-1 lpr=88 pi=[67,88)/1 luod=0'0 crt=40'1059 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.887557 1 0.000060
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 90 pg[10.c( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=6 ec=53/34 lis/c=88/67 les/c/f=89/68/0 sis=88) [0]/[2] r=-1 lpr=88 pi=[67,88)/1 luod=0'0 crt=40'1059 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive 1.012293 0 0.000000
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 90 pg[10.c( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=6 ec=53/34 lis/c=88/67 les/c/f=89/68/0 sis=88) [0]/[2] r=-1 lpr=88 pi=[67,88)/1 luod=0'0 crt=40'1059 mlcod 0'0 active+remapped mbc={}] exit Started 2.014908 0 0.000000
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 90 pg[10.c( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=6 ec=53/34 lis/c=88/67 les/c/f=89/68/0 sis=88) [0]/[2] r=-1 lpr=88 pi=[67,88)/1 luod=0'0 crt=40'1059 mlcod 0'0 active+remapped mbc={}] enter Reset
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 90 pg[10.c( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=6 ec=53/34 lis/c=88/67 les/c/f=89/68/0 sis=90) [0] r=0 lpr=90 pi=[67,90)/1 luod=0'0 crt=40'1059 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 90 pg[10.c( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=6 ec=53/34 lis/c=88/67 les/c/f=89/68/0 sis=90) [0] r=0 lpr=90 pi=[67,90)/1 crt=40'1059 mlcod 0'0 unknown mbc={}] exit Reset 0.000100 1 0.000165
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 90 pg[10.c( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=6 ec=53/34 lis/c=88/67 les/c/f=89/68/0 sis=90) [0] r=0 lpr=90 pi=[67,90)/1 crt=40'1059 mlcod 0'0 unknown mbc={}] enter Started
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 90 pg[10.c( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=6 ec=53/34 lis/c=88/67 les/c/f=89/68/0 sis=90) [0] r=0 lpr=90 pi=[67,90)/1 crt=40'1059 mlcod 0'0 unknown mbc={}] enter Start
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 90 pg[10.c( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=6 ec=53/34 lis/c=88/67 les/c/f=89/68/0 sis=90) [0] r=0 lpr=90 pi=[67,90)/1 crt=40'1059 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 90 pg[10.c( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=6 ec=53/34 lis/c=88/67 les/c/f=89/68/0 sis=90) [0] r=0 lpr=90 pi=[67,90)/1 crt=40'1059 mlcod 0'0 unknown mbc={}] exit Start 0.000039 0 0.000000
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 90 pg[10.c( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=6 ec=53/34 lis/c=88/67 les/c/f=89/68/0 sis=90) [0] r=0 lpr=90 pi=[67,90)/1 crt=40'1059 mlcod 0'0 unknown mbc={}] enter Started/Primary
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 90 pg[10.d( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=6 ec=53/34 lis/c=88/69 les/c/f=89/70/0 sis=88) [0]/[1] r=-1 lpr=88 pi=[69,88)/1 luod=0'0 crt=40'1059 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.830846 1 0.000045
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 90 pg[10.d( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=6 ec=53/34 lis/c=88/69 les/c/f=89/70/0 sis=88) [0]/[1] r=-1 lpr=88 pi=[69,88)/1 luod=0'0 crt=40'1059 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive 1.011018 0 0.000000
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 90 pg[10.d( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=6 ec=53/34 lis/c=88/69 les/c/f=89/70/0 sis=88) [0]/[1] r=-1 lpr=88 pi=[69,88)/1 luod=0'0 crt=40'1059 mlcod 0'0 active+remapped mbc={}] exit Started 2.014946 0 0.000000
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 90 pg[10.d( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=6 ec=53/34 lis/c=88/69 les/c/f=89/70/0 sis=88) [0]/[1] r=-1 lpr=88 pi=[69,88)/1 luod=0'0 crt=40'1059 mlcod 0'0 active+remapped mbc={}] enter Reset
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 90 pg[10.c( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=6 ec=53/34 lis/c=88/67 les/c/f=89/68/0 sis=90) [0] r=0 lpr=90 pi=[67,90)/1 crt=40'1059 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 90 pg[10.c( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=6 ec=53/34 lis/c=88/67 les/c/f=89/68/0 sis=90) [0] r=0 lpr=90 pi=[67,90)/1 crt=40'1059 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 90 pg[10.d( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=6 ec=53/34 lis/c=88/69 les/c/f=89/70/0 sis=90) [0] r=0 lpr=90 pi=[69,90)/1 luod=0'0 crt=40'1059 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 90 pg[10.d( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=6 ec=53/34 lis/c=88/69 les/c/f=89/70/0 sis=90) [0] r=0 lpr=90 pi=[69,90)/1 crt=40'1059 mlcod 0'0 unknown mbc={}] exit Reset 0.000060 1 0.000106
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 90 pg[10.d( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=6 ec=53/34 lis/c=88/69 les/c/f=89/70/0 sis=90) [0] r=0 lpr=90 pi=[69,90)/1 crt=40'1059 mlcod 0'0 unknown mbc={}] enter Started
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 90 pg[10.d( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=6 ec=53/34 lis/c=88/69 les/c/f=89/70/0 sis=90) [0] r=0 lpr=90 pi=[69,90)/1 crt=40'1059 mlcod 0'0 unknown mbc={}] enter Start
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 90 pg[10.d( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=6 ec=53/34 lis/c=88/69 les/c/f=89/70/0 sis=90) [0] r=0 lpr=90 pi=[69,90)/1 crt=40'1059 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 90 pg[10.d( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=6 ec=53/34 lis/c=88/69 les/c/f=89/70/0 sis=90) [0] r=0 lpr=90 pi=[69,90)/1 crt=40'1059 mlcod 0'0 unknown mbc={}] exit Start 0.000004 0 0.000000
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 90 pg[10.d( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=6 ec=53/34 lis/c=88/69 les/c/f=89/70/0 sis=90) [0] r=0 lpr=90 pi=[69,90)/1 crt=40'1059 mlcod 0'0 unknown mbc={}] enter Started/Primary
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 90 pg[10.d( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=6 ec=53/34 lis/c=88/69 les/c/f=89/70/0 sis=90) [0] r=0 lpr=90 pi=[69,90)/1 crt=40'1059 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 90 pg[10.d( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=6 ec=53/34 lis/c=88/69 les/c/f=89/70/0 sis=90) [0] r=0 lpr=90 pi=[69,90)/1 crt=40'1059 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 90 pg[10.1c( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=7 ec=53/34 lis/c=88/68 les/c/f=89/69/0 sis=88) [0]/[2] r=-1 lpr=88 pi=[68,88)/1 luod=0'0 crt=40'1059 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.960638 1 0.000055
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 90 pg[10.1c( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=7 ec=53/34 lis/c=88/68 les/c/f=89/69/0 sis=88) [0]/[2] r=-1 lpr=88 pi=[68,88)/1 luod=0'0 crt=40'1059 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive 1.013622 0 0.000000
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 90 pg[10.1c( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=7 ec=53/34 lis/c=88/68 les/c/f=89/69/0 sis=88) [0]/[2] r=-1 lpr=88 pi=[68,88)/1 luod=0'0 crt=40'1059 mlcod 0'0 active+remapped mbc={}] exit Started 2.014566 0 0.000000
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 90 pg[10.1c( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=7 ec=53/34 lis/c=88/68 les/c/f=89/69/0 sis=88) [0]/[2] r=-1 lpr=88 pi=[68,88)/1 luod=0'0 crt=40'1059 mlcod 0'0 active+remapped mbc={}] enter Reset
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 90 pg[10.1c( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=7 ec=53/34 lis/c=88/68 les/c/f=89/69/0 sis=90) [0] r=0 lpr=90 pi=[68,90)/1 luod=0'0 crt=40'1059 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 90 pg[10.1c( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=7 ec=53/34 lis/c=88/68 les/c/f=89/69/0 sis=90) [0] r=0 lpr=90 pi=[68,90)/1 crt=40'1059 mlcod 0'0 unknown mbc={}] exit Reset 0.000031 1 0.000049
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 90 pg[10.1c( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=7 ec=53/34 lis/c=88/68 les/c/f=89/69/0 sis=90) [0] r=0 lpr=90 pi=[68,90)/1 crt=40'1059 mlcod 0'0 unknown mbc={}] enter Started
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 90 pg[10.1c( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=7 ec=53/34 lis/c=88/68 les/c/f=89/69/0 sis=90) [0] r=0 lpr=90 pi=[68,90)/1 crt=40'1059 mlcod 0'0 unknown mbc={}] enter Start
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 90 pg[10.1c( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=7 ec=53/34 lis/c=88/68 les/c/f=89/69/0 sis=90) [0] r=0 lpr=90 pi=[68,90)/1 crt=40'1059 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 90 pg[10.1c( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=7 ec=53/34 lis/c=88/68 les/c/f=89/69/0 sis=90) [0] r=0 lpr=90 pi=[68,90)/1 crt=40'1059 mlcod 0'0 unknown mbc={}] exit Start 0.000003 0 0.000000
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 90 pg[10.1c( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=7 ec=53/34 lis/c=88/68 les/c/f=89/69/0 sis=90) [0] r=0 lpr=90 pi=[68,90)/1 crt=40'1059 mlcod 0'0 unknown mbc={}] enter Started/Primary
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 90 pg[10.1c( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=7 ec=53/34 lis/c=88/68 les/c/f=89/69/0 sis=90) [0] r=0 lpr=90 pi=[68,90)/1 crt=40'1059 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 90 pg[10.1c( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=7 ec=53/34 lis/c=88/68 les/c/f=89/69/0 sis=90) [0] r=0 lpr=90 pi=[68,90)/1 crt=40'1059 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 90 pg[10.1d( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=88/69 les/c/f=89/70/0 sis=90) [0] r=0 lpr=90 pi=[69,90)/1 crt=40'1059 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.001920 2 0.001333
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 90 pg[10.1d( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=88/69 les/c/f=89/70/0 sis=90) [0] r=0 lpr=90 pi=[69,90)/1 crt=40'1059 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 90 handle_osd_map epochs [90,90], i have 90, src has [1,90]
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 90 pg[10.d( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=6 ec=53/34 lis/c=88/69 les/c/f=89/70/0 sis=90) [0] r=0 lpr=90 pi=[69,90)/1 crt=40'1059 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.001867 2 0.000026
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 90 pg[10.d( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=6 ec=53/34 lis/c=88/69 les/c/f=89/70/0 sis=90) [0] r=0 lpr=90 pi=[69,90)/1 crt=40'1059 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 90 pg[10.1c( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=7 ec=53/34 lis/c=88/68 les/c/f=89/69/0 sis=90) [0] r=0 lpr=90 pi=[68,90)/1 crt=40'1059 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.001597 2 0.000021
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 90 pg[10.1c( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=7 ec=53/34 lis/c=88/68 les/c/f=89/69/0 sis=90) [0] r=0 lpr=90 pi=[68,90)/1 crt=40'1059 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Oct  9 10:05:05 compute-1 ceph-osd[7514]: merge_log_dups log.dups.size()=0 olog.dups.size()=34
Oct  9 10:05:05 compute-1 ceph-osd[7514]: end of merge_log_dups changed=1 log.dups.size()=0 olog.dups.size()=34
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 90 pg[10.1d( v 40'1059 (0'0,40'1059] local-lis/les=88/89 n=5 ec=53/34 lis/c=88/69 les/c/f=89/70/0 sis=90) [0] r=0 lpr=90 pi=[69,90)/1 crt=40'1059 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000651 2 0.000046
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 90 pg[10.1d( v 40'1059 (0'0,40'1059] local-lis/les=88/89 n=5 ec=53/34 lis/c=88/69 les/c/f=89/70/0 sis=90) [0] r=0 lpr=90 pi=[69,90)/1 crt=40'1059 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 90 pg[10.1d( v 40'1059 (0'0,40'1059] local-lis/les=88/89 n=5 ec=53/34 lis/c=88/69 les/c/f=89/70/0 sis=90) [0] r=0 lpr=90 pi=[69,90)/1 crt=40'1059 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000015 0 0.000000
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 90 handle_osd_map epochs [90,90], i have 90, src has [1,90]
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 90 pg[10.1d( v 40'1059 (0'0,40'1059] local-lis/les=88/89 n=5 ec=53/34 lis/c=88/69 les/c/f=89/70/0 sis=90) [0] r=0 lpr=90 pi=[69,90)/1 crt=40'1059 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Oct  9 10:05:05 compute-1 ceph-osd[7514]: merge_log_dups log.dups.size()=0 olog.dups.size()=46
Oct  9 10:05:05 compute-1 ceph-osd[7514]: end of merge_log_dups changed=1 log.dups.size()=0 olog.dups.size()=46
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 90 pg[10.d( v 40'1059 (0'0,40'1059] local-lis/les=88/89 n=6 ec=53/34 lis/c=88/69 les/c/f=89/70/0 sis=90) [0] r=0 lpr=90 pi=[69,90)/1 crt=40'1059 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000385 2 0.000088
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 90 pg[10.d( v 40'1059 (0'0,40'1059] local-lis/les=88/89 n=6 ec=53/34 lis/c=88/69 les/c/f=89/70/0 sis=90) [0] r=0 lpr=90 pi=[69,90)/1 crt=40'1059 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 90 pg[10.d( v 40'1059 (0'0,40'1059] local-lis/les=88/89 n=6 ec=53/34 lis/c=88/69 les/c/f=89/70/0 sis=90) [0] r=0 lpr=90 pi=[69,90)/1 crt=40'1059 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000004 0 0.000000
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 90 pg[10.d( v 40'1059 (0'0,40'1059] local-lis/les=88/89 n=6 ec=53/34 lis/c=88/69 les/c/f=89/70/0 sis=90) [0] r=0 lpr=90 pi=[69,90)/1 crt=40'1059 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 90 pg[10.c( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=6 ec=53/34 lis/c=88/67 les/c/f=89/68/0 sis=90) [0] r=0 lpr=90 pi=[67,90)/1 crt=40'1059 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.002741 2 0.000228
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 90 pg[10.c( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=6 ec=53/34 lis/c=88/67 les/c/f=89/68/0 sis=90) [0] r=0 lpr=90 pi=[67,90)/1 crt=40'1059 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Oct  9 10:05:05 compute-1 ceph-osd[7514]: merge_log_dups log.dups.size()=0 olog.dups.size()=41
Oct  9 10:05:05 compute-1 ceph-osd[7514]: end of merge_log_dups changed=1 log.dups.size()=0 olog.dups.size()=41
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 90 pg[10.1c( v 40'1059 (0'0,40'1059] local-lis/les=88/89 n=7 ec=53/34 lis/c=88/68 les/c/f=89/69/0 sis=90) [0] r=0 lpr=90 pi=[68,90)/1 crt=40'1059 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.001621 2 0.000057
Oct  9 10:05:05 compute-1 ceph-osd[7514]: merge_log_dups log.dups.size()=0 olog.dups.size()=25
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 90 pg[10.1c( v 40'1059 (0'0,40'1059] local-lis/les=88/89 n=7 ec=53/34 lis/c=88/68 les/c/f=89/69/0 sis=90) [0] r=0 lpr=90 pi=[68,90)/1 crt=40'1059 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 90 pg[10.1c( v 40'1059 (0'0,40'1059] local-lis/les=88/89 n=7 ec=53/34 lis/c=88/68 les/c/f=89/69/0 sis=90) [0] r=0 lpr=90 pi=[68,90)/1 crt=40'1059 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000012 0 0.000000
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 90 pg[10.1c( v 40'1059 (0'0,40'1059] local-lis/les=88/89 n=7 ec=53/34 lis/c=88/68 les/c/f=89/69/0 sis=90) [0] r=0 lpr=90 pi=[68,90)/1 crt=40'1059 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Oct  9 10:05:05 compute-1 ceph-osd[7514]: end of merge_log_dups changed=1 log.dups.size()=0 olog.dups.size()=25
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 90 pg[10.c( v 40'1059 (0'0,40'1059] local-lis/les=88/89 n=6 ec=53/34 lis/c=88/67 les/c/f=89/68/0 sis=90) [0] r=0 lpr=90 pi=[67,90)/1 crt=40'1059 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000975 2 0.000082
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 90 pg[10.c( v 40'1059 (0'0,40'1059] local-lis/les=88/89 n=6 ec=53/34 lis/c=88/67 les/c/f=89/68/0 sis=90) [0] r=0 lpr=90 pi=[67,90)/1 crt=40'1059 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 90 pg[10.c( v 40'1059 (0'0,40'1059] local-lis/les=88/89 n=6 ec=53/34 lis/c=88/67 les/c/f=89/68/0 sis=90) [0] r=0 lpr=90 pi=[67,90)/1 crt=40'1059 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000003 0 0.000000
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 90 pg[10.c( v 40'1059 (0'0,40'1059] local-lis/les=88/89 n=6 ec=53/34 lis/c=88/67 les/c/f=89/68/0 sis=90) [0] r=0 lpr=90 pi=[67,90)/1 crt=40'1059 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 90 heartbeat osd_stat(store_statfs(0x4fca89000/0x0/0x4ffc00000, data 0x10a0e3/0x192000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x2fdf9c1), peers [1,2] op hist [])
Oct  9 10:05:05 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 8.19 scrub starts
Oct  9 10:05:05 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 8.19 scrub ok
Oct  9 10:05:05 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 80568320 unmapped: 4325376 heap: 84893696 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 90 handle_osd_map epochs [90,91], i have 90, src has [1,91]
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 90 handle_osd_map epochs [91,91], i have 91, src has [1,91]
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 91 pg[10.1c( v 40'1059 (0'0,40'1059] local-lis/les=88/89 n=7 ec=53/34 lis/c=88/68 les/c/f=89/69/0 sis=90) [0] r=0 lpr=90 pi=[68,90)/1 crt=40'1059 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.003970 2 0.000087
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 91 pg[10.1c( v 40'1059 (0'0,40'1059] local-lis/les=88/89 n=7 ec=53/34 lis/c=88/68 les/c/f=89/69/0 sis=90) [0] r=0 lpr=90 pi=[68,90)/1 crt=40'1059 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.007258 0 0.000000
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 91 pg[10.1c( v 40'1059 (0'0,40'1059] local-lis/les=88/89 n=7 ec=53/34 lis/c=88/68 les/c/f=89/69/0 sis=90) [0] r=0 lpr=90 pi=[68,90)/1 crt=40'1059 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 91 pg[10.1c( v 40'1059 (0'0,40'1059] local-lis/les=90/91 n=7 ec=53/34 lis/c=88/68 les/c/f=89/69/0 sis=90) [0] r=0 lpr=90 pi=[68,90)/1 crt=40'1059 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 91 pg[10.c( v 40'1059 (0'0,40'1059] local-lis/les=88/89 n=6 ec=53/34 lis/c=88/67 les/c/f=89/68/0 sis=90) [0] r=0 lpr=90 pi=[67,90)/1 crt=40'1059 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.004129 2 0.000073
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 91 pg[10.c( v 40'1059 (0'0,40'1059] local-lis/les=88/89 n=6 ec=53/34 lis/c=88/67 les/c/f=89/68/0 sis=90) [0] r=0 lpr=90 pi=[67,90)/1 crt=40'1059 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.007940 0 0.000000
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 91 pg[10.c( v 40'1059 (0'0,40'1059] local-lis/les=88/89 n=6 ec=53/34 lis/c=88/67 les/c/f=89/68/0 sis=90) [0] r=0 lpr=90 pi=[67,90)/1 crt=40'1059 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 91 pg[10.c( v 40'1059 (0'0,40'1059] local-lis/les=90/91 n=6 ec=53/34 lis/c=88/67 les/c/f=89/68/0 sis=90) [0] r=0 lpr=90 pi=[67,90)/1 crt=40'1059 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 91 pg[10.1d( v 40'1059 (0'0,40'1059] local-lis/les=88/89 n=5 ec=53/34 lis/c=88/69 les/c/f=89/70/0 sis=90) [0] r=0 lpr=90 pi=[69,90)/1 crt=40'1059 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.006884 2 0.000131
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 91 pg[10.1d( v 40'1059 (0'0,40'1059] local-lis/les=88/89 n=5 ec=53/34 lis/c=88/69 les/c/f=89/70/0 sis=90) [0] r=0 lpr=90 pi=[69,90)/1 crt=40'1059 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.009570 0 0.000000
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 91 pg[10.1d( v 40'1059 (0'0,40'1059] local-lis/les=88/89 n=5 ec=53/34 lis/c=88/69 les/c/f=89/70/0 sis=90) [0] r=0 lpr=90 pi=[69,90)/1 crt=40'1059 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 91 pg[10.1d( v 40'1059 (0'0,40'1059] local-lis/les=90/91 n=5 ec=53/34 lis/c=88/69 les/c/f=89/70/0 sis=90) [0] r=0 lpr=90 pi=[69,90)/1 crt=40'1059 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 91 pg[10.d( v 40'1059 (0'0,40'1059] local-lis/les=88/89 n=6 ec=53/34 lis/c=88/69 les/c/f=89/70/0 sis=90) [0] r=0 lpr=90 pi=[69,90)/1 crt=40'1059 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.007430 2 0.000055
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 91 pg[10.d( v 40'1059 (0'0,40'1059] local-lis/les=88/89 n=6 ec=53/34 lis/c=88/69 les/c/f=89/70/0 sis=90) [0] r=0 lpr=90 pi=[69,90)/1 crt=40'1059 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.009730 0 0.000000
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 91 pg[10.d( v 40'1059 (0'0,40'1059] local-lis/les=88/89 n=6 ec=53/34 lis/c=88/69 les/c/f=89/70/0 sis=90) [0] r=0 lpr=90 pi=[69,90)/1 crt=40'1059 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 91 pg[10.d( v 40'1059 (0'0,40'1059] local-lis/les=90/91 n=6 ec=53/34 lis/c=88/69 les/c/f=89/70/0 sis=90) [0] r=0 lpr=90 pi=[69,90)/1 crt=40'1059 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 91 pg[10.1c( v 40'1059 (0'0,40'1059] local-lis/les=90/91 n=7 ec=53/34 lis/c=88/68 les/c/f=89/69/0 sis=90) [0] r=0 lpr=90 pi=[68,90)/1 crt=40'1059 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 91 pg[10.1c( v 40'1059 (0'0,40'1059] local-lis/les=90/91 n=7 ec=53/34 lis/c=90/68 les/c/f=91/69/0 sis=90) [0] r=0 lpr=90 pi=[68,90)/1 crt=40'1059 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.002309 4 0.000136
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 91 pg[10.1c( v 40'1059 (0'0,40'1059] local-lis/les=90/91 n=7 ec=53/34 lis/c=90/68 les/c/f=91/69/0 sis=90) [0] r=0 lpr=90 pi=[68,90)/1 crt=40'1059 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 91 pg[10.1c( v 40'1059 (0'0,40'1059] local-lis/les=90/91 n=7 ec=53/34 lis/c=90/68 les/c/f=91/69/0 sis=90) [0] r=0 lpr=90 pi=[68,90)/1 crt=40'1059 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000005 0 0.000000
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 91 pg[10.1c( v 40'1059 (0'0,40'1059] local-lis/les=90/91 n=7 ec=53/34 lis/c=90/68 les/c/f=91/69/0 sis=90) [0] r=0 lpr=90 pi=[68,90)/1 crt=40'1059 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 91 pg[10.c( v 40'1059 (0'0,40'1059] local-lis/les=90/91 n=6 ec=53/34 lis/c=88/67 les/c/f=89/68/0 sis=90) [0] r=0 lpr=90 pi=[67,90)/1 crt=40'1059 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 91 pg[10.c( v 40'1059 (0'0,40'1059] local-lis/les=90/91 n=6 ec=53/34 lis/c=90/67 les/c/f=91/68/0 sis=90) [0] r=0 lpr=90 pi=[67,90)/1 crt=40'1059 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.002230 4 0.000066
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 91 pg[10.c( v 40'1059 (0'0,40'1059] local-lis/les=90/91 n=6 ec=53/34 lis/c=90/67 les/c/f=91/68/0 sis=90) [0] r=0 lpr=90 pi=[67,90)/1 crt=40'1059 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 91 pg[10.c( v 40'1059 (0'0,40'1059] local-lis/les=90/91 n=6 ec=53/34 lis/c=90/67 les/c/f=91/68/0 sis=90) [0] r=0 lpr=90 pi=[67,90)/1 crt=40'1059 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000147 0 0.000000
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 91 pg[10.c( v 40'1059 (0'0,40'1059] local-lis/les=90/91 n=6 ec=53/34 lis/c=90/67 les/c/f=91/68/0 sis=90) [0] r=0 lpr=90 pi=[67,90)/1 crt=40'1059 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 91 pg[10.1d( v 40'1059 (0'0,40'1059] local-lis/les=90/91 n=5 ec=53/34 lis/c=88/69 les/c/f=89/70/0 sis=90) [0] r=0 lpr=90 pi=[69,90)/1 crt=40'1059 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 91 pg[10.1d( v 40'1059 (0'0,40'1059] local-lis/les=90/91 n=5 ec=53/34 lis/c=90/69 les/c/f=91/70/0 sis=90) [0] r=0 lpr=90 pi=[69,90)/1 crt=40'1059 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.001205 3 0.000120
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 91 pg[10.1d( v 40'1059 (0'0,40'1059] local-lis/les=90/91 n=5 ec=53/34 lis/c=90/69 les/c/f=91/70/0 sis=90) [0] r=0 lpr=90 pi=[69,90)/1 crt=40'1059 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 91 pg[10.1d( v 40'1059 (0'0,40'1059] local-lis/les=90/91 n=5 ec=53/34 lis/c=90/69 les/c/f=91/70/0 sis=90) [0] r=0 lpr=90 pi=[69,90)/1 crt=40'1059 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000068 0 0.000000
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 91 pg[10.1d( v 40'1059 (0'0,40'1059] local-lis/les=90/91 n=5 ec=53/34 lis/c=90/69 les/c/f=91/70/0 sis=90) [0] r=0 lpr=90 pi=[69,90)/1 crt=40'1059 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 91 pg[10.d( v 40'1059 (0'0,40'1059] local-lis/les=90/91 n=6 ec=53/34 lis/c=88/69 les/c/f=89/70/0 sis=90) [0] r=0 lpr=90 pi=[69,90)/1 crt=40'1059 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 91 pg[10.d( v 40'1059 (0'0,40'1059] local-lis/les=90/91 n=6 ec=53/34 lis/c=90/69 les/c/f=91/70/0 sis=90) [0] r=0 lpr=90 pi=[69,90)/1 crt=40'1059 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.000989 3 0.000066
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 91 pg[10.d( v 40'1059 (0'0,40'1059] local-lis/les=90/91 n=6 ec=53/34 lis/c=90/69 les/c/f=91/70/0 sis=90) [0] r=0 lpr=90 pi=[69,90)/1 crt=40'1059 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 91 pg[10.d( v 40'1059 (0'0,40'1059] local-lis/les=90/91 n=6 ec=53/34 lis/c=90/69 les/c/f=91/70/0 sis=90) [0] r=0 lpr=90 pi=[69,90)/1 crt=40'1059 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000014 0 0.000000
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 91 pg[10.d( v 40'1059 (0'0,40'1059] local-lis/les=90/91 n=6 ec=53/34 lis/c=90/69 les/c/f=91/70/0 sis=90) [0] r=0 lpr=90 pi=[69,90)/1 crt=40'1059 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 91 handle_osd_map epochs [91,91], i have 91, src has [1,91]
Oct  9 10:05:05 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 3.a scrub starts
Oct  9 10:05:05 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 3.a scrub ok
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 91 heartbeat osd_stat(store_statfs(0x4fca81000/0x0/0x4ffc00000, data 0x10e083/0x198000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x2fdf9c1), peers [1,2] op hist [])
Oct  9 10:05:05 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 80568320 unmapped: 4325376 heap: 84893696 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 91 heartbeat osd_stat(store_statfs(0x4fca81000/0x0/0x4ffc00000, data 0x10e083/0x198000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x2fdf9c1), peers [1,2] op hist [])
Oct  9 10:05:05 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 4.d scrub starts
Oct  9 10:05:05 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 4.d scrub ok
Oct  9 10:05:05 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 80617472 unmapped: 4276224 heap: 84893696 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:05 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 11.1c deep-scrub starts
Oct  9 10:05:05 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 11.1c deep-scrub ok
Oct  9 10:05:05 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 80625664 unmapped: 4268032 heap: 84893696 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:05 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 11.5 scrub starts
Oct  9 10:05:05 compute-1 ceph-osd[7514]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.965123177s of 11.030270576s, submitted: 91
Oct  9 10:05:05 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 11.5 scrub ok
Oct  9 10:05:05 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:05 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:05 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 80625664 unmapped: 4268032 heap: 84893696 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:05 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817441 data_alloc: 218103808 data_used: 282624
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 91 ms_handle_reset con 0x560c9a066000 session 0x560c9b906960
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 91 ms_handle_reset con 0x560c9c8a3000 session 0x560c9cb92b40
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 91 handle_osd_map epochs [92,92], i have 91, src has [1,92]
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 92 pg[6.e(unlocked)] enter Initial
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 92 pg[6.e( empty local-lis/les=0/0 n=0 ec=49/14 lis/c=67/67 les/c/f=68/68/0 sis=92) [0] r=0 lpr=0 pi=[67,92)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000056 0 0.000000
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 92 pg[6.e( empty local-lis/les=0/0 n=0 ec=49/14 lis/c=67/67 les/c/f=68/68/0 sis=92) [0] r=0 lpr=0 pi=[67,92)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 92 pg[6.e( empty local-lis/les=0/0 n=0 ec=49/14 lis/c=67/67 les/c/f=68/68/0 sis=92) [0] r=0 lpr=92 pi=[67,92)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000014 1 0.000029
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 92 pg[6.e( empty local-lis/les=0/0 n=0 ec=49/14 lis/c=67/67 les/c/f=68/68/0 sis=92) [0] r=0 lpr=92 pi=[67,92)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 92 pg[6.e( empty local-lis/les=0/0 n=0 ec=49/14 lis/c=67/67 les/c/f=68/68/0 sis=92) [0] r=0 lpr=92 pi=[67,92)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 92 pg[6.e( empty local-lis/les=0/0 n=0 ec=49/14 lis/c=67/67 les/c/f=68/68/0 sis=92) [0] r=0 lpr=92 pi=[67,92)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 92 pg[6.e( empty local-lis/les=0/0 n=0 ec=49/14 lis/c=67/67 les/c/f=68/68/0 sis=92) [0] r=0 lpr=92 pi=[67,92)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000004 0 0.000000
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 92 pg[6.e( empty local-lis/les=0/0 n=0 ec=49/14 lis/c=67/67 les/c/f=68/68/0 sis=92) [0] r=0 lpr=92 pi=[67,92)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 92 pg[6.e( empty local-lis/les=0/0 n=0 ec=49/14 lis/c=67/67 les/c/f=68/68/0 sis=92) [0] r=0 lpr=92 pi=[67,92)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 92 pg[6.e( empty local-lis/les=0/0 n=0 ec=49/14 lis/c=67/67 les/c/f=68/68/0 sis=92) [0] r=0 lpr=92 pi=[67,92)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 92 pg[6.e( empty local-lis/les=0/0 n=0 ec=49/14 lis/c=67/67 les/c/f=68/68/0 sis=92) [0] r=0 lpr=92 pi=[67,92)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000127 1 0.000053
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 92 pg[6.e( empty local-lis/les=0/0 n=0 ec=49/14 lis/c=67/67 les/c/f=68/68/0 sis=92) [0] r=0 lpr=92 pi=[67,92)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Oct  9 10:05:05 compute-1 ceph-osd[7514]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Oct  9 10:05:05 compute-1 ceph-osd[7514]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 92 pg[6.e( v 41'42 lc 0'0 (0'0,41'42] local-lis/les=67/68 n=1 ec=49/14 lis/c=67/67 les/c/f=68/68/0 sis=92) [0] r=0 lpr=92 pi=[67,92)/1 crt=41'42 mlcod 0'0 peering m=1 mbc={}] exit Started/Primary/Peering/GetLog 0.001114 2 0.000037
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 92 pg[6.e( v 41'42 lc 0'0 (0'0,41'42] local-lis/les=67/68 n=1 ec=49/14 lis/c=67/67 les/c/f=68/68/0 sis=92) [0] r=0 lpr=92 pi=[67,92)/1 crt=41'42 mlcod 0'0 peering m=1 mbc={}] enter Started/Primary/Peering/GetMissing
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 92 pg[6.e( v 41'42 lc 0'0 (0'0,41'42] local-lis/les=67/68 n=1 ec=49/14 lis/c=67/67 les/c/f=68/68/0 sis=92) [0] r=0 lpr=92 pi=[67,92)/1 crt=41'42 mlcod 0'0 peering m=1 mbc={}] exit Started/Primary/Peering/GetMissing 0.000008 0 0.000000
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 92 pg[6.e( v 41'42 lc 0'0 (0'0,41'42] local-lis/les=67/68 n=1 ec=49/14 lis/c=67/67 les/c/f=68/68/0 sis=92) [0] r=0 lpr=92 pi=[67,92)/1 crt=41'42 mlcod 0'0 peering m=1 mbc={}] enter Started/Primary/Peering/WaitUpThru
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 92 heartbeat osd_stat(store_statfs(0x4fca84000/0x0/0x4ffc00000, data 0x10e083/0x198000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x2fdf9c1), peers [1,2] op hist [])
Oct  9 10:05:05 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 4.a deep-scrub starts
Oct  9 10:05:05 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 4.a deep-scrub ok
Oct  9 10:05:05 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 80683008 unmapped: 4210688 heap: 84893696 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 92 handle_osd_map epochs [92,93], i have 92, src has [1,93]
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 92 handle_osd_map epochs [92,93], i have 93, src has [1,93]
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 93 pg[6.e( v 41'42 lc 0'0 (0'0,41'42] local-lis/les=67/68 n=1 ec=49/14 lis/c=67/67 les/c/f=68/68/0 sis=92) [0] r=0 lpr=92 pi=[67,92)/1 crt=41'42 mlcod 0'0 peering m=1 mbc={}] exit Started/Primary/Peering/WaitUpThru 0.604680 2 0.000060
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 93 pg[6.e( v 41'42 lc 0'0 (0'0,41'42] local-lis/les=67/68 n=1 ec=49/14 lis/c=67/67 les/c/f=68/68/0 sis=92) [0] r=0 lpr=92 pi=[67,92)/1 crt=41'42 mlcod 0'0 peering m=1 mbc={}] exit Started/Primary/Peering 0.605975 0 0.000000
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 93 pg[6.e( v 41'42 lc 0'0 (0'0,41'42] local-lis/les=67/68 n=1 ec=49/14 lis/c=67/67 les/c/f=68/68/0 sis=92) [0] r=0 lpr=92 pi=[67,92)/1 crt=41'42 mlcod 0'0 unknown m=1 mbc={}] enter Started/Primary/Active
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 93 pg[6.e( v 41'42 lc 35'10 (0'0,41'42] local-lis/les=92/93 n=1 ec=49/14 lis/c=67/67 les/c/f=68/68/0 sis=92) [0] r=0 lpr=92 pi=[67,92)/1 crt=41'42 lcod 0'0 mlcod 0'0 activating+degraded m=1 mbc={255={(0+1)=1}}] enter Started/Primary/Active/Activating
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 93 pg[6.e( v 41'42 lc 35'10 (0'0,41'42] local-lis/les=92/93 n=1 ec=49/14 lis/c=67/67 les/c/f=68/68/0 sis=92) [0] r=0 lpr=92 pi=[67,92)/1 crt=41'42 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 93 pg[6.e( v 41'42 lc 35'10 (0'0,41'42] local-lis/les=92/93 n=1 ec=49/14 lis/c=92/67 les/c/f=93/68/0 sis=92) [0] r=0 lpr=92 pi=[67,92)/1 crt=41'42 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] exit Started/Primary/Active/Activating 0.000920 3 0.000133
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 93 pg[6.e( v 41'42 lc 35'10 (0'0,41'42] local-lis/les=92/93 n=1 ec=49/14 lis/c=92/67 les/c/f=93/68/0 sis=92) [0] r=0 lpr=92 pi=[67,92)/1 crt=41'42 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 93 pg[6.e( v 41'42 lc 35'10 (0'0,41'42] local-lis/les=92/93 n=1 ec=49/14 lis/c=92/67 les/c/f=93/68/0 sis=92) [0] r=0 lpr=92 pi=[67,92)/1 crt=41'42 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=1 mbc={255={(0+1)=1}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 0.000059 1 0.000053
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 93 pg[6.e( v 41'42 lc 35'10 (0'0,41'42] local-lis/les=92/93 n=1 ec=49/14 lis/c=92/67 les/c/f=93/68/0 sis=92) [0] r=0 lpr=92 pi=[67,92)/1 crt=41'42 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=1 mbc={255={(0+1)=1}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 93 pg[6.e( v 41'42 lc 35'10 (0'0,41'42] local-lis/les=92/93 n=1 ec=49/14 lis/c=92/67 les/c/f=93/68/0 sis=92) [0] r=0 lpr=92 pi=[67,92)/1 crt=41'42 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=1 mbc={255={(0+1)=1}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.000006 0 0.000000
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 93 pg[6.e( v 41'42 lc 35'10 (0'0,41'42] local-lis/les=92/93 n=1 ec=49/14 lis/c=92/67 les/c/f=93/68/0 sis=92) [0] r=0 lpr=92 pi=[67,92)/1 crt=41'42 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=1 mbc={255={(0+1)=1}}] enter Started/Primary/Active/Recovering
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 93 pg[6.e( v 41'42 (0'0,41'42] local-lis/les=92/93 n=1 ec=49/14 lis/c=92/67 les/c/f=93/68/0 sis=92) [0] r=0 lpr=92 pi=[67,92)/1 crt=41'42 mlcod 41'42 active mbc={255={}}] exit Started/Primary/Active/Recovering 0.007495 3 0.000054
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 93 pg[6.e( v 41'42 (0'0,41'42] local-lis/les=92/93 n=1 ec=49/14 lis/c=92/67 les/c/f=93/68/0 sis=92) [0] r=0 lpr=92 pi=[67,92)/1 crt=41'42 mlcod 41'42 active mbc={255={}}] enter Started/Primary/Active/Recovered
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 93 pg[6.e( v 41'42 (0'0,41'42] local-lis/les=92/93 n=1 ec=49/14 lis/c=92/67 les/c/f=93/68/0 sis=92) [0] r=0 lpr=92 pi=[67,92)/1 crt=41'42 mlcod 41'42 active mbc={255={}}] exit Started/Primary/Active/Recovered 0.000008 0 0.000000
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 93 pg[6.e( v 41'42 (0'0,41'42] local-lis/les=92/93 n=1 ec=49/14 lis/c=92/67 les/c/f=93/68/0 sis=92) [0] r=0 lpr=92 pi=[67,92)/1 crt=41'42 mlcod 41'42 active mbc={255={}}] enter Started/Primary/Active/Clean
Oct  9 10:05:05 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 11.1a scrub starts
Oct  9 10:05:05 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 11.1a scrub ok
Oct  9 10:05:05 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 80715776 unmapped: 4177920 heap: 84893696 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 93 handle_osd_map epochs [93,94], i have 93, src has [1,94]
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 94 pg[6.f( v 41'42 (0'0,41'42] local-lis/les=61/62 n=3 ec=49/14 lis/c=61/61 les/c/f=62/62/0 sis=61) [0] r=0 lpr=61 crt=41'42 mlcod 41'42 active+clean] exit Started/Primary/Active/Clean 42.182703 99 0.000380
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 94 pg[6.f( v 41'42 (0'0,41'42] local-lis/les=61/62 n=3 ec=49/14 lis/c=61/61 les/c/f=62/62/0 sis=61) [0] r=0 lpr=61 crt=41'42 mlcod 41'42 active mbc={255={}}] exit Started/Primary/Active 42.470143 0 0.000000
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 94 pg[6.f( v 41'42 (0'0,41'42] local-lis/les=61/62 n=3 ec=49/14 lis/c=61/61 les/c/f=62/62/0 sis=61) [0] r=0 lpr=61 crt=41'42 mlcod 41'42 active mbc={255={}}] exit Started/Primary 43.469295 0 0.000000
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 94 pg[6.f( v 41'42 (0'0,41'42] local-lis/les=61/62 n=3 ec=49/14 lis/c=61/61 les/c/f=62/62/0 sis=61) [0] r=0 lpr=61 crt=41'42 mlcod 41'42 active mbc={255={}}] exit Started 43.469476 0 0.000000
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 94 pg[6.f( v 41'42 (0'0,41'42] local-lis/les=61/62 n=3 ec=49/14 lis/c=61/61 les/c/f=62/62/0 sis=61) [0] r=0 lpr=61 crt=41'42 mlcod 41'42 active mbc={255={}}] enter Reset
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 94 pg[6.f( v 41'42 (0'0,41'42] local-lis/les=61/62 n=3 ec=49/14 lis/c=61/61 les/c/f=62/62/0 sis=94 pruub=13.532474518s) [1] r=-1 lpr=94 pi=[61,94)/1 crt=41'42 mlcod 41'42 active pruub 256.462554932s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 94 pg[6.f( v 41'42 (0'0,41'42] local-lis/les=61/62 n=3 ec=49/14 lis/c=61/61 les/c/f=62/62/0 sis=94 pruub=13.532430649s) [1] r=-1 lpr=94 pi=[61,94)/1 crt=41'42 mlcod 0'0 unknown NOTIFY pruub 256.462554932s@ mbc={}] exit Reset 0.000068 1 0.000110
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 94 pg[6.f( v 41'42 (0'0,41'42] local-lis/les=61/62 n=3 ec=49/14 lis/c=61/61 les/c/f=62/62/0 sis=94 pruub=13.532430649s) [1] r=-1 lpr=94 pi=[61,94)/1 crt=41'42 mlcod 0'0 unknown NOTIFY pruub 256.462554932s@ mbc={}] enter Started
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 94 pg[6.f( v 41'42 (0'0,41'42] local-lis/les=61/62 n=3 ec=49/14 lis/c=61/61 les/c/f=62/62/0 sis=94 pruub=13.532430649s) [1] r=-1 lpr=94 pi=[61,94)/1 crt=41'42 mlcod 0'0 unknown NOTIFY pruub 256.462554932s@ mbc={}] enter Start
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 94 pg[6.f( v 41'42 (0'0,41'42] local-lis/les=61/62 n=3 ec=49/14 lis/c=61/61 les/c/f=62/62/0 sis=94 pruub=13.532430649s) [1] r=-1 lpr=94 pi=[61,94)/1 crt=41'42 mlcod 0'0 unknown NOTIFY pruub 256.462554932s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 94 pg[6.f( v 41'42 (0'0,41'42] local-lis/les=61/62 n=3 ec=49/14 lis/c=61/61 les/c/f=62/62/0 sis=94 pruub=13.532430649s) [1] r=-1 lpr=94 pi=[61,94)/1 crt=41'42 mlcod 0'0 unknown NOTIFY pruub 256.462554932s@ mbc={}] exit Start 0.000006 0 0.000000
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 94 pg[6.f( v 41'42 (0'0,41'42] local-lis/les=61/62 n=3 ec=49/14 lis/c=61/61 les/c/f=62/62/0 sis=94 pruub=13.532430649s) [1] r=-1 lpr=94 pi=[61,94)/1 crt=41'42 mlcod 0'0 unknown NOTIFY pruub 256.462554932s@ mbc={}] enter Started/Stray
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 94 pg[10.f( v 40'1059 (0'0,40'1059] local-lis/les=75/76 n=6 ec=53/34 lis/c=75/75 les/c/f=76/76/0 sis=75) [0] r=0 lpr=75 crt=40'1059 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 24.176490 56 0.000205
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 94 pg[10.f( v 40'1059 (0'0,40'1059] local-lis/les=75/76 n=6 ec=53/34 lis/c=75/75 les/c/f=76/76/0 sis=75) [0] r=0 lpr=75 crt=40'1059 mlcod 0'0 active mbc={}] exit Started/Primary/Active 24.177861 0 0.000000
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 94 pg[10.f( v 40'1059 (0'0,40'1059] local-lis/les=75/76 n=6 ec=53/34 lis/c=75/75 les/c/f=76/76/0 sis=75) [0] r=0 lpr=75 crt=40'1059 mlcod 0'0 active mbc={}] exit Started/Primary 25.182837 0 0.000000
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 94 pg[10.f( v 40'1059 (0'0,40'1059] local-lis/les=75/76 n=6 ec=53/34 lis/c=75/75 les/c/f=76/76/0 sis=75) [0] r=0 lpr=75 crt=40'1059 mlcod 0'0 active mbc={}] exit Started 25.182919 0 0.000000
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 94 pg[10.f( v 40'1059 (0'0,40'1059] local-lis/les=75/76 n=6 ec=53/34 lis/c=75/75 les/c/f=76/76/0 sis=75) [0] r=0 lpr=75 crt=40'1059 mlcod 0'0 active mbc={}] enter Reset
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 94 pg[10.f( v 40'1059 (0'0,40'1059] local-lis/les=75/76 n=6 ec=53/34 lis/c=75/75 les/c/f=76/76/0 sis=94 pruub=15.823891640s) [2] r=-1 lpr=94 pi=[75,94)/1 crt=40'1059 mlcod 0'0 active pruub 258.754333496s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 94 pg[10.f( v 40'1059 (0'0,40'1059] local-lis/les=75/76 n=6 ec=53/34 lis/c=75/75 les/c/f=76/76/0 sis=94 pruub=15.823860168s) [2] r=-1 lpr=94 pi=[75,94)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 258.754333496s@ mbc={}] exit Reset 0.000087 1 0.000142
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 94 pg[10.f( v 40'1059 (0'0,40'1059] local-lis/les=75/76 n=6 ec=53/34 lis/c=75/75 les/c/f=76/76/0 sis=94 pruub=15.823860168s) [2] r=-1 lpr=94 pi=[75,94)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 258.754333496s@ mbc={}] enter Started
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 94 pg[10.f( v 40'1059 (0'0,40'1059] local-lis/les=75/76 n=6 ec=53/34 lis/c=75/75 les/c/f=76/76/0 sis=94 pruub=15.823860168s) [2] r=-1 lpr=94 pi=[75,94)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 258.754333496s@ mbc={}] enter Start
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 94 pg[10.f( v 40'1059 (0'0,40'1059] local-lis/les=75/76 n=6 ec=53/34 lis/c=75/75 les/c/f=76/76/0 sis=94 pruub=15.823860168s) [2] r=-1 lpr=94 pi=[75,94)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 258.754333496s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 94 pg[10.f( v 40'1059 (0'0,40'1059] local-lis/les=75/76 n=6 ec=53/34 lis/c=75/75 les/c/f=76/76/0 sis=94 pruub=15.823860168s) [2] r=-1 lpr=94 pi=[75,94)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 258.754333496s@ mbc={}] exit Start 0.000006 0 0.000000
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 94 pg[10.f( v 40'1059 (0'0,40'1059] local-lis/les=75/76 n=6 ec=53/34 lis/c=75/75 les/c/f=76/76/0 sis=94 pruub=15.823860168s) [2] r=-1 lpr=94 pi=[75,94)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 258.754333496s@ mbc={}] enter Started/Stray
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 94 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=75/76 n=5 ec=53/34 lis/c=75/75 les/c/f=76/76/0 sis=75) [0] r=0 lpr=75 crt=40'1059 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 24.176838 56 0.000229
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 94 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=75/76 n=5 ec=53/34 lis/c=75/75 les/c/f=76/76/0 sis=75) [0] r=0 lpr=75 crt=40'1059 mlcod 0'0 active mbc={}] exit Started/Primary/Active 24.177740 0 0.000000
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 94 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=75/76 n=5 ec=53/34 lis/c=75/75 les/c/f=76/76/0 sis=75) [0] r=0 lpr=75 crt=40'1059 mlcod 0'0 active mbc={}] exit Started/Primary 25.182406 0 0.000000
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 94 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=75/76 n=5 ec=53/34 lis/c=75/75 les/c/f=76/76/0 sis=75) [0] r=0 lpr=75 crt=40'1059 mlcod 0'0 active mbc={}] exit Started 25.182635 0 0.000000
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 94 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=75/76 n=5 ec=53/34 lis/c=75/75 les/c/f=76/76/0 sis=75) [0] r=0 lpr=75 crt=40'1059 mlcod 0'0 active mbc={}] enter Reset
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 94 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=75/76 n=5 ec=53/34 lis/c=75/75 les/c/f=76/76/0 sis=94 pruub=15.823626518s) [2] r=-1 lpr=94 pi=[75,94)/1 crt=40'1059 mlcod 0'0 active pruub 258.754333496s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 94 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=75/76 n=5 ec=53/34 lis/c=75/75 les/c/f=76/76/0 sis=94 pruub=15.823608398s) [2] r=-1 lpr=94 pi=[75,94)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 258.754333496s@ mbc={}] exit Reset 0.000034 1 0.000162
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 94 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=75/76 n=5 ec=53/34 lis/c=75/75 les/c/f=76/76/0 sis=94 pruub=15.823608398s) [2] r=-1 lpr=94 pi=[75,94)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 258.754333496s@ mbc={}] enter Started
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 94 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=75/76 n=5 ec=53/34 lis/c=75/75 les/c/f=76/76/0 sis=94 pruub=15.823608398s) [2] r=-1 lpr=94 pi=[75,94)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 258.754333496s@ mbc={}] enter Start
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 94 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=75/76 n=5 ec=53/34 lis/c=75/75 les/c/f=76/76/0 sis=94 pruub=15.823608398s) [2] r=-1 lpr=94 pi=[75,94)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 258.754333496s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 94 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=75/76 n=5 ec=53/34 lis/c=75/75 les/c/f=76/76/0 sis=94 pruub=15.823608398s) [2] r=-1 lpr=94 pi=[75,94)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 258.754333496s@ mbc={}] exit Start 0.000005 0 0.000000
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 94 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=75/76 n=5 ec=53/34 lis/c=75/75 les/c/f=76/76/0 sis=94 pruub=15.823608398s) [2] r=-1 lpr=94 pi=[75,94)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 258.754333496s@ mbc={}] enter Started/Stray
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 94 handle_osd_map epochs [94,94], i have 94, src has [1,94]
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 94 handle_osd_map epochs [93,94], i have 94, src has [1,94]
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 94 heartbeat osd_stat(store_statfs(0x4fca7c000/0x0/0x4ffc00000, data 0x1122ac/0x19e000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x2fdf9c1), peers [1,2] op hist [])
Oct  9 10:05:05 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 9.a scrub starts
Oct  9 10:05:05 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 9.a scrub ok
Oct  9 10:05:05 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 80732160 unmapped: 4161536 heap: 84893696 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 94 handle_osd_map epochs [95,95], i have 94, src has [1,95]
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 94 handle_osd_map epochs [95,95], i have 95, src has [1,95]
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 95 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=75/76 n=5 ec=53/34 lis/c=75/75 les/c/f=76/76/0 sis=94) [2] r=-1 lpr=94 pi=[75,94)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.012433 3 0.000024
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 95 pg[6.f( v 41'42 (0'0,41'42] local-lis/les=61/62 n=3 ec=49/14 lis/c=61/61 les/c/f=62/62/0 sis=94) [1] r=-1 lpr=94 pi=[61,94)/1 crt=41'42 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.013088 6 0.000060
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 95 pg[6.f( v 41'42 (0'0,41'42] local-lis/les=61/62 n=3 ec=49/14 lis/c=61/61 les/c/f=62/62/0 sis=94) [1] r=-1 lpr=94 pi=[61,94)/1 crt=41'42 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ReplicaActive
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 95 pg[6.f( v 41'42 (0'0,41'42] local-lis/les=61/62 n=3 ec=49/14 lis/c=61/61 les/c/f=62/62/0 sis=94) [1] r=-1 lpr=94 pi=[61,94)/1 crt=41'42 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ReplicaActive/RepNotRecovering
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 95 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=75/76 n=5 ec=53/34 lis/c=75/75 les/c/f=76/76/0 sis=94) [2] r=-1 lpr=94 pi=[75,94)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.012624 0 0.000000
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 95 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=75/76 n=5 ec=53/34 lis/c=75/75 les/c/f=76/76/0 sis=94) [2] r=-1 lpr=94 pi=[75,94)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY mbc={}] enter Reset
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 95 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=75/76 n=5 ec=53/34 lis/c=75/75 les/c/f=76/76/0 sis=95) [2]/[0] r=0 lpr=95 pi=[75,95)/1 crt=40'1059 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 95 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=75/76 n=5 ec=53/34 lis/c=75/75 les/c/f=76/76/0 sis=95) [2]/[0] r=0 lpr=95 pi=[75,95)/1 crt=40'1059 mlcod 0'0 remapped mbc={}] exit Reset 0.000101 1 0.000301
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 95 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=75/76 n=5 ec=53/34 lis/c=75/75 les/c/f=76/76/0 sis=95) [2]/[0] r=0 lpr=95 pi=[75,95)/1 crt=40'1059 mlcod 0'0 remapped mbc={}] enter Started
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 95 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=75/76 n=5 ec=53/34 lis/c=75/75 les/c/f=76/76/0 sis=95) [2]/[0] r=0 lpr=95 pi=[75,95)/1 crt=40'1059 mlcod 0'0 remapped mbc={}] enter Start
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 95 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=75/76 n=5 ec=53/34 lis/c=75/75 les/c/f=76/76/0 sis=95) [2]/[0] r=0 lpr=95 pi=[75,95)/1 crt=40'1059 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 95 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=75/76 n=5 ec=53/34 lis/c=75/75 les/c/f=76/76/0 sis=95) [2]/[0] r=0 lpr=95 pi=[75,95)/1 crt=40'1059 mlcod 0'0 remapped mbc={}] exit Start 0.000057 0 0.000000
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 95 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=75/76 n=5 ec=53/34 lis/c=75/75 les/c/f=76/76/0 sis=95) [2]/[0] r=0 lpr=95 pi=[75,95)/1 crt=40'1059 mlcod 0'0 remapped mbc={}] enter Started/Primary
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 95 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=75/76 n=5 ec=53/34 lis/c=75/75 les/c/f=76/76/0 sis=95) [2]/[0] r=0 lpr=95 pi=[75,95)/1 crt=40'1059 mlcod 0'0 remapped mbc={}] enter Started/Primary/Peering
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 95 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=75/76 n=5 ec=53/34 lis/c=75/75 les/c/f=76/76/0 sis=95) [2]/[0] r=0 lpr=95 pi=[75,95)/1 crt=40'1059 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetInfo
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 95 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=75/76 n=5 ec=53/34 lis/c=75/75 les/c/f=76/76/0 sis=95) [2]/[0] r=0 lpr=95 pi=[75,95)/1 crt=40'1059 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000031 1 0.000229
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 95 pg[10.f( v 40'1059 (0'0,40'1059] local-lis/les=75/76 n=6 ec=53/34 lis/c=75/75 les/c/f=76/76/0 sis=94) [2] r=-1 lpr=94 pi=[75,94)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.013078 3 0.000034
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 95 pg[10.f( v 40'1059 (0'0,40'1059] local-lis/les=75/76 n=6 ec=53/34 lis/c=75/75 les/c/f=76/76/0 sis=94) [2] r=-1 lpr=94 pi=[75,94)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.013265 0 0.000000
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 95 pg[10.f( v 40'1059 (0'0,40'1059] local-lis/les=75/76 n=6 ec=53/34 lis/c=75/75 les/c/f=76/76/0 sis=94) [2] r=-1 lpr=94 pi=[75,94)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY mbc={}] enter Reset
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 95 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=75/76 n=5 ec=53/34 lis/c=75/75 les/c/f=76/76/0 sis=95) [2]/[0] r=0 lpr=95 pi=[75,95)/1 crt=40'1059 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetLog
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 95 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=75/76 n=5 ec=53/34 lis/c=75/75 les/c/f=76/76/0 sis=95) [2]/[0] async=[2] r=0 lpr=95 pi=[75,95)/1 crt=40'1059 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetLog 0.000063 0 0.000000
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 95 pg[10.f( v 40'1059 (0'0,40'1059] local-lis/les=75/76 n=6 ec=53/34 lis/c=75/75 les/c/f=76/76/0 sis=95) [2]/[0] r=0 lpr=95 pi=[75,95)/1 crt=40'1059 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 95 pg[10.f( v 40'1059 (0'0,40'1059] local-lis/les=75/76 n=6 ec=53/34 lis/c=75/75 les/c/f=76/76/0 sis=95) [2]/[0] r=0 lpr=95 pi=[75,95)/1 crt=40'1059 mlcod 0'0 remapped mbc={}] exit Reset 0.000127 1 0.000314
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 95 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=75/76 n=5 ec=53/34 lis/c=75/75 les/c/f=76/76/0 sis=95) [2]/[0] async=[2] r=0 lpr=95 pi=[75,95)/1 crt=40'1059 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetMissing
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 95 pg[10.f( v 40'1059 (0'0,40'1059] local-lis/les=75/76 n=6 ec=53/34 lis/c=75/75 les/c/f=76/76/0 sis=95) [2]/[0] r=0 lpr=95 pi=[75,95)/1 crt=40'1059 mlcod 0'0 remapped mbc={}] enter Started
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 95 pg[10.f( v 40'1059 (0'0,40'1059] local-lis/les=75/76 n=6 ec=53/34 lis/c=75/75 les/c/f=76/76/0 sis=95) [2]/[0] r=0 lpr=95 pi=[75,95)/1 crt=40'1059 mlcod 0'0 remapped mbc={}] enter Start
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 95 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=75/76 n=5 ec=53/34 lis/c=75/75 les/c/f=76/76/0 sis=95) [2]/[0] async=[2] r=0 lpr=95 pi=[75,95)/1 crt=40'1059 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000026 0 0.000000
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 95 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=75/76 n=5 ec=53/34 lis/c=75/75 les/c/f=76/76/0 sis=95) [2]/[0] async=[2] r=0 lpr=95 pi=[75,95)/1 crt=40'1059 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 95 pg[10.f( v 40'1059 (0'0,40'1059] local-lis/les=75/76 n=6 ec=53/34 lis/c=75/75 les/c/f=76/76/0 sis=95) [2]/[0] r=0 lpr=95 pi=[75,95)/1 crt=40'1059 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 95 pg[10.f( v 40'1059 (0'0,40'1059] local-lis/les=75/76 n=6 ec=53/34 lis/c=75/75 les/c/f=76/76/0 sis=95) [2]/[0] r=0 lpr=95 pi=[75,95)/1 crt=40'1059 mlcod 0'0 remapped mbc={}] exit Start 0.000122 0 0.000000
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 95 pg[10.f( v 40'1059 (0'0,40'1059] local-lis/les=75/76 n=6 ec=53/34 lis/c=75/75 les/c/f=76/76/0 sis=95) [2]/[0] r=0 lpr=95 pi=[75,95)/1 crt=40'1059 mlcod 0'0 remapped mbc={}] enter Started/Primary
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 95 pg[10.f( v 40'1059 (0'0,40'1059] local-lis/les=75/76 n=6 ec=53/34 lis/c=75/75 les/c/f=76/76/0 sis=95) [2]/[0] r=0 lpr=95 pi=[75,95)/1 crt=40'1059 mlcod 0'0 remapped mbc={}] enter Started/Primary/Peering
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 95 pg[10.f( v 40'1059 (0'0,40'1059] local-lis/les=75/76 n=6 ec=53/34 lis/c=75/75 les/c/f=76/76/0 sis=95) [2]/[0] r=0 lpr=95 pi=[75,95)/1 crt=40'1059 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetInfo
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 95 pg[10.f( v 40'1059 (0'0,40'1059] local-lis/les=75/76 n=6 ec=53/34 lis/c=75/75 les/c/f=76/76/0 sis=95) [2]/[0] r=0 lpr=95 pi=[75,95)/1 crt=40'1059 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000027 1 0.000394
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 95 pg[10.f( v 40'1059 (0'0,40'1059] local-lis/les=75/76 n=6 ec=53/34 lis/c=75/75 les/c/f=76/76/0 sis=95) [2]/[0] r=0 lpr=95 pi=[75,95)/1 crt=40'1059 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetLog
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 95 pg[10.f( v 40'1059 (0'0,40'1059] local-lis/les=75/76 n=6 ec=53/34 lis/c=75/75 les/c/f=76/76/0 sis=95) [2]/[0] async=[2] r=0 lpr=95 pi=[75,95)/1 crt=40'1059 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetLog 0.000023 0 0.000000
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 95 pg[10.f( v 40'1059 (0'0,40'1059] local-lis/les=75/76 n=6 ec=53/34 lis/c=75/75 les/c/f=76/76/0 sis=95) [2]/[0] async=[2] r=0 lpr=95 pi=[75,95)/1 crt=40'1059 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetMissing
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 95 pg[10.f( v 40'1059 (0'0,40'1059] local-lis/les=75/76 n=6 ec=53/34 lis/c=75/75 les/c/f=76/76/0 sis=95) [2]/[0] async=[2] r=0 lpr=95 pi=[75,95)/1 crt=40'1059 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000013 0 0.000000
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 95 pg[10.f( v 40'1059 (0'0,40'1059] local-lis/les=75/76 n=6 ec=53/34 lis/c=75/75 les/c/f=76/76/0 sis=95) [2]/[0] async=[2] r=0 lpr=95 pi=[75,95)/1 crt=40'1059 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 95 pg[6.f( v 41'42 (0'0,41'42] local-lis/les=61/62 n=3 ec=49/14 lis/c=61/61 les/c/f=62/62/0 sis=94) [1] r=-1 lpr=94 pi=[61,94)/1 luod=0'0 crt=41'42 mlcod 0'0 active mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.127902 3 0.000070
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 95 pg[6.f( v 41'42 (0'0,41'42] local-lis/les=61/62 n=3 ec=49/14 lis/c=61/61 les/c/f=62/62/0 sis=94) [1] r=-1 lpr=94 pi=[61,94)/1 luod=0'0 crt=41'42 mlcod 0'0 active mbc={}] exit Started/ReplicaActive 0.127937 0 0.000000
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 95 pg[6.f( v 41'42 (0'0,41'42] local-lis/les=61/62 n=3 ec=49/14 lis/c=61/61 les/c/f=62/62/0 sis=94) [1] r=-1 lpr=94 pi=[61,94)/1 luod=0'0 crt=41'42 mlcod 0'0 active mbc={}] enter Started/ToDelete
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 95 pg[6.f( v 41'42 (0'0,41'42] local-lis/les=61/62 n=3 ec=49/14 lis/c=61/61 les/c/f=62/62/0 sis=94) [1] r=-1 lpr=94 pi=[61,94)/1 luod=0'0 crt=41'42 mlcod 0'0 active mbc={}] enter Started/ToDelete/WaitDeleteReserved
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 95 pg[6.f( v 41'42 (0'0,41'42] local-lis/les=61/62 n=3 ec=49/14 lis/c=61/61 les/c/f=62/62/0 sis=94) [1] r=-1 lpr=94 pi=[61,94)/1 luod=0'0 crt=41'42 mlcod 0'0 active mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000044 1 0.000049
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 95 pg[6.f( v 41'42 (0'0,41'42] local-lis/les=61/62 n=3 ec=49/14 lis/c=61/61 les/c/f=62/62/0 sis=94) [1] r=-1 lpr=94 pi=[61,94)/1 luod=0'0 crt=41'42 mlcod 0'0 active mbc={}] enter Started/ToDelete/Deleting
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 95 pg[6.f( v 41'42 (0'0,41'42] lb MIN local-lis/les=61/62 n=3 ec=49/14 lis/c=61/61 les/c/f=62/62/0 sis=94) [1] r=-1 lpr=94 DELETING pi=[61,94)/1 luod=0'0 crt=41'42 mlcod 0'0 active mbc={}] exit Started/ToDelete/Deleting 0.024065 2 0.000119
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 95 pg[6.f( v 41'42 (0'0,41'42] lb MIN local-lis/les=61/62 n=3 ec=49/14 lis/c=61/61 les/c/f=62/62/0 sis=94) [1] r=-1 lpr=94 pi=[61,94)/1 luod=0'0 crt=41'42 mlcod 0'0 active mbc={}] exit Started/ToDelete 0.024180 0 0.000000
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 95 pg[6.f( v 41'42 (0'0,41'42] lb MIN local-lis/les=61/62 n=3 ec=49/14 lis/c=61/61 les/c/f=62/62/0 sis=94) [1] r=-1 lpr=94 pi=[61,94)/1 luod=0'0 crt=41'42 mlcod 0'0 active mbc={}] exit Started 1.165278 0 0.000000
Oct  9 10:05:05 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 9.d deep-scrub starts
Oct  9 10:05:05 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 9.d deep-scrub ok
Oct  9 10:05:05 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 80863232 unmapped: 4030464 heap: 84893696 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 95 handle_osd_map epochs [95,96], i have 95, src has [1,96]
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 95 handle_osd_map epochs [96,96], i have 96, src has [1,96]
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 96 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=75/76 n=5 ec=53/34 lis/c=75/75 les/c/f=76/76/0 sis=95) [2]/[0] async=[2] r=0 lpr=95 pi=[75,95)/1 crt=40'1059 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.003209 4 0.000309
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 96 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=75/76 n=5 ec=53/34 lis/c=75/75 les/c/f=76/76/0 sis=95) [2]/[0] async=[2] r=0 lpr=95 pi=[75,95)/1 crt=40'1059 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering 1.003660 0 0.000000
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 96 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=75/76 n=5 ec=53/34 lis/c=75/75 les/c/f=76/76/0 sis=95) [2]/[0] async=[2] r=0 lpr=95 pi=[75,95)/1 crt=40'1059 mlcod 0'0 remapped mbc={}] enter Started/Primary/Active
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 96 pg[10.f( v 40'1059 (0'0,40'1059] local-lis/les=75/76 n=6 ec=53/34 lis/c=75/75 les/c/f=76/76/0 sis=95) [2]/[0] async=[2] r=0 lpr=95 pi=[75,95)/1 crt=40'1059 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.003036 4 0.000106
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 96 pg[10.f( v 40'1059 (0'0,40'1059] local-lis/les=75/76 n=6 ec=53/34 lis/c=75/75 les/c/f=76/76/0 sis=95) [2]/[0] async=[2] r=0 lpr=95 pi=[75,95)/1 crt=40'1059 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering 1.003718 0 0.000000
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 96 pg[10.f( v 40'1059 (0'0,40'1059] local-lis/les=75/76 n=6 ec=53/34 lis/c=75/75 les/c/f=76/76/0 sis=95) [2]/[0] async=[2] r=0 lpr=95 pi=[75,95)/1 crt=40'1059 mlcod 0'0 remapped mbc={}] enter Started/Primary/Active
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 96 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=95/96 n=5 ec=53/34 lis/c=75/75 les/c/f=76/76/0 sis=95) [2]/[0] async=[2] r=0 lpr=95 pi=[75,95)/1 crt=40'1059 mlcod 0'0 activating+remapped mbc={255={(0+1)=5}}] enter Started/Primary/Active/Activating
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 96 pg[10.f( v 40'1059 (0'0,40'1059] local-lis/les=95/96 n=6 ec=53/34 lis/c=75/75 les/c/f=76/76/0 sis=95) [2]/[0] async=[2] r=0 lpr=95 pi=[75,95)/1 crt=40'1059 mlcod 0'0 activating+remapped mbc={255={(0+1)=7}}] enter Started/Primary/Active/Activating
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 96 pg[10.f( v 40'1059 (0'0,40'1059] local-lis/les=95/96 n=6 ec=53/34 lis/c=75/75 les/c/f=76/76/0 sis=95) [2]/[0] async=[2] r=0 lpr=95 pi=[75,95)/1 crt=40'1059 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 96 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=95/96 n=5 ec=53/34 lis/c=75/75 les/c/f=76/76/0 sis=95) [2]/[0] async=[2] r=0 lpr=95 pi=[75,95)/1 crt=40'1059 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 96 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=95/96 n=5 ec=53/34 lis/c=95/75 les/c/f=96/76/0 sis=95) [2]/[0] async=[2] r=0 lpr=95 pi=[75,95)/1 crt=40'1059 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] exit Started/Primary/Active/Activating 0.002752 5 0.001184
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 96 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=95/96 n=5 ec=53/34 lis/c=95/75 les/c/f=96/76/0 sis=95) [2]/[0] async=[2] r=0 lpr=95 pi=[75,95)/1 crt=40'1059 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 96 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=95/96 n=5 ec=53/34 lis/c=95/75 les/c/f=96/76/0 sis=95) [2]/[0] async=[2] r=0 lpr=95 pi=[75,95)/1 crt=40'1059 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=5}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 0.000140 1 0.000035
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 96 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=95/96 n=5 ec=53/34 lis/c=95/75 les/c/f=96/76/0 sis=95) [2]/[0] async=[2] r=0 lpr=95 pi=[75,95)/1 crt=40'1059 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=5}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 96 pg[10.f( v 40'1059 (0'0,40'1059] local-lis/les=95/96 n=6 ec=53/34 lis/c=95/75 les/c/f=96/76/0 sis=95) [2]/[0] async=[2] r=0 lpr=95 pi=[75,95)/1 crt=40'1059 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] exit Started/Primary/Active/Activating 0.002922 5 0.001264
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 96 pg[10.f( v 40'1059 (0'0,40'1059] local-lis/les=95/96 n=6 ec=53/34 lis/c=95/75 les/c/f=96/76/0 sis=95) [2]/[0] async=[2] r=0 lpr=95 pi=[75,95)/1 crt=40'1059 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 96 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=95/96 n=5 ec=53/34 lis/c=95/75 les/c/f=96/76/0 sis=95) [2]/[0] async=[2] r=0 lpr=95 pi=[75,95)/1 crt=40'1059 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=5}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.000546 1 0.000085
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 96 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=95/96 n=5 ec=53/34 lis/c=95/75 les/c/f=96/76/0 sis=95) [2]/[0] async=[2] r=0 lpr=95 pi=[75,95)/1 crt=40'1059 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=5}}] enter Started/Primary/Active/Recovering
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 96 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=95/96 n=5 ec=53/34 lis/c=95/75 les/c/f=96/76/0 sis=95) [2]/[0] async=[2] r=0 lpr=95 pi=[75,95)/1 crt=40'1059 mlcod 40'1059 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovering 0.035475 2 0.000068
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 96 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=95/96 n=5 ec=53/34 lis/c=95/75 les/c/f=96/76/0 sis=95) [2]/[0] async=[2] r=0 lpr=95 pi=[75,95)/1 crt=40'1059 mlcod 40'1059 active+remapped mbc={255={}}] enter Started/Primary/Active/Recovered
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 96 pg[10.f( v 40'1059 (0'0,40'1059] local-lis/les=95/96 n=6 ec=53/34 lis/c=95/75 les/c/f=96/76/0 sis=95) [2]/[0] async=[2] r=0 lpr=95 pi=[75,95)/1 crt=40'1059 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=7}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 0.035972 1 0.000061
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 96 pg[10.f( v 40'1059 (0'0,40'1059] local-lis/les=95/96 n=6 ec=53/34 lis/c=95/75 les/c/f=96/76/0 sis=95) [2]/[0] async=[2] r=0 lpr=95 pi=[75,95)/1 crt=40'1059 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=7}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 96 pg[10.f( v 40'1059 (0'0,40'1059] local-lis/les=95/96 n=6 ec=53/34 lis/c=95/75 les/c/f=96/76/0 sis=95) [2]/[0] async=[2] r=0 lpr=95 pi=[75,95)/1 crt=40'1059 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=7}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.000411 1 0.000084
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 96 pg[10.f( v 40'1059 (0'0,40'1059] local-lis/les=95/96 n=6 ec=53/34 lis/c=95/75 les/c/f=96/76/0 sis=95) [2]/[0] async=[2] r=0 lpr=95 pi=[75,95)/1 crt=40'1059 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=7}}] enter Started/Primary/Active/Recovering
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 96 pg[10.f( v 40'1059 (0'0,40'1059] local-lis/les=95/96 n=6 ec=53/34 lis/c=95/75 les/c/f=96/76/0 sis=95) [2]/[0] async=[2] r=0 lpr=95 pi=[75,95)/1 crt=40'1059 mlcod 40'1059 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovering 0.052313 2 0.000074
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 96 pg[10.f( v 40'1059 (0'0,40'1059] local-lis/les=95/96 n=6 ec=53/34 lis/c=95/75 les/c/f=96/76/0 sis=95) [2]/[0] async=[2] r=0 lpr=95 pi=[75,95)/1 crt=40'1059 mlcod 40'1059 active+remapped mbc={255={}}] enter Started/Primary/Active/Recovered
Oct  9 10:05:05 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 5.1 scrub starts
Oct  9 10:05:05 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 5.1 scrub ok
Oct  9 10:05:05 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:05 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:05 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 80887808 unmapped: 4005888 heap: 84893696 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:05 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 835463 data_alloc: 218103808 data_used: 290816
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 96 handle_osd_map epochs [97,97], i have 96, src has [1,97]
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 97 pg[10.f( v 40'1059 (0'0,40'1059] local-lis/les=95/96 n=6 ec=53/34 lis/c=95/75 les/c/f=96/76/0 sis=95) [2]/[0] async=[2] r=0 lpr=95 pi=[75,95)/1 crt=40'1059 mlcod 40'1059 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovered 0.918601 1 0.000162
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 97 pg[10.f( v 40'1059 (0'0,40'1059] local-lis/les=95/96 n=6 ec=53/34 lis/c=95/75 les/c/f=96/76/0 sis=95) [2]/[0] async=[2] r=0 lpr=95 pi=[75,95)/1 crt=40'1059 mlcod 40'1059 active+remapped mbc={255={}}] exit Started/Primary/Active 1.010735 0 0.000000
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 97 pg[10.f( v 40'1059 (0'0,40'1059] local-lis/les=95/96 n=6 ec=53/34 lis/c=95/75 les/c/f=96/76/0 sis=95) [2]/[0] async=[2] r=0 lpr=95 pi=[75,95)/1 crt=40'1059 mlcod 40'1059 active+remapped mbc={255={}}] exit Started/Primary 2.014492 0 0.000000
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 97 pg[10.f( v 40'1059 (0'0,40'1059] local-lis/les=95/96 n=6 ec=53/34 lis/c=95/75 les/c/f=96/76/0 sis=95) [2]/[0] async=[2] r=0 lpr=95 pi=[75,95)/1 crt=40'1059 mlcod 40'1059 active+remapped mbc={255={}}] exit Started 2.014731 0 0.000000
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 97 pg[10.f( v 40'1059 (0'0,40'1059] local-lis/les=95/96 n=6 ec=53/34 lis/c=95/75 les/c/f=96/76/0 sis=95) [2]/[0] async=[2] r=0 lpr=95 pi=[75,95)/1 crt=40'1059 mlcod 40'1059 active+remapped mbc={255={}}] enter Reset
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 97 pg[10.f( v 40'1059 (0'0,40'1059] local-lis/les=95/96 n=6 ec=53/34 lis/c=95/75 les/c/f=96/76/0 sis=97 pruub=14.991911888s) [2] async=[2] r=-1 lpr=97 pi=[75,97)/1 crt=40'1059 mlcod 40'1059 active pruub 260.950622559s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 97 pg[10.f( v 40'1059 (0'0,40'1059] local-lis/les=95/96 n=6 ec=53/34 lis/c=95/75 les/c/f=96/76/0 sis=97 pruub=14.991852760s) [2] r=-1 lpr=97 pi=[75,97)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 260.950622559s@ mbc={}] exit Reset 0.000221 1 0.000192
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 97 pg[10.f( v 40'1059 (0'0,40'1059] local-lis/les=95/96 n=6 ec=53/34 lis/c=95/75 les/c/f=96/76/0 sis=97 pruub=14.991852760s) [2] r=-1 lpr=97 pi=[75,97)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 260.950622559s@ mbc={}] enter Started
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 97 pg[10.f( v 40'1059 (0'0,40'1059] local-lis/les=95/96 n=6 ec=53/34 lis/c=95/75 les/c/f=96/76/0 sis=97 pruub=14.991852760s) [2] r=-1 lpr=97 pi=[75,97)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 260.950622559s@ mbc={}] enter Start
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 97 pg[10.f( v 40'1059 (0'0,40'1059] local-lis/les=95/96 n=6 ec=53/34 lis/c=95/75 les/c/f=96/76/0 sis=97 pruub=14.991852760s) [2] r=-1 lpr=97 pi=[75,97)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 260.950622559s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 97 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=95/96 n=5 ec=53/34 lis/c=95/75 les/c/f=96/76/0 sis=95) [2]/[0] async=[2] r=0 lpr=95 pi=[75,95)/1 crt=40'1059 mlcod 40'1059 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovered 0.972112 1 0.000139
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 97 pg[10.f( v 40'1059 (0'0,40'1059] local-lis/les=95/96 n=6 ec=53/34 lis/c=95/75 les/c/f=96/76/0 sis=97 pruub=14.991852760s) [2] r=-1 lpr=97 pi=[75,97)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 260.950622559s@ mbc={}] exit Start 0.000012 0 0.000000
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 97 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=95/96 n=5 ec=53/34 lis/c=95/75 les/c/f=96/76/0 sis=95) [2]/[0] async=[2] r=0 lpr=95 pi=[75,95)/1 crt=40'1059 mlcod 40'1059 active+remapped mbc={255={}}] exit Started/Primary/Active 1.011702 0 0.000000
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 97 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=95/96 n=5 ec=53/34 lis/c=95/75 les/c/f=96/76/0 sis=95) [2]/[0] async=[2] r=0 lpr=95 pi=[75,95)/1 crt=40'1059 mlcod 40'1059 active+remapped mbc={255={}}] exit Started/Primary 2.015622 0 0.000000
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 97 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=95/96 n=5 ec=53/34 lis/c=95/75 les/c/f=96/76/0 sis=95) [2]/[0] async=[2] r=0 lpr=95 pi=[75,95)/1 crt=40'1059 mlcod 40'1059 active+remapped mbc={255={}}] exit Started 2.015817 0 0.000000
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 97 pg[10.f( v 40'1059 (0'0,40'1059] local-lis/les=95/96 n=6 ec=53/34 lis/c=95/75 les/c/f=96/76/0 sis=97 pruub=14.991852760s) [2] r=-1 lpr=97 pi=[75,97)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 260.950622559s@ mbc={}] enter Started/Stray
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 97 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=95/96 n=5 ec=53/34 lis/c=95/75 les/c/f=96/76/0 sis=95) [2]/[0] async=[2] r=0 lpr=95 pi=[75,95)/1 crt=40'1059 mlcod 40'1059 active+remapped mbc={255={}}] enter Reset
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 97 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=95/96 n=5 ec=53/34 lis/c=95/75 les/c/f=96/76/0 sis=97 pruub=14.991250038s) [2] async=[2] r=-1 lpr=97 pi=[75,97)/1 crt=40'1059 mlcod 40'1059 active pruub 260.950653076s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 97 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=95/96 n=5 ec=53/34 lis/c=95/75 les/c/f=96/76/0 sis=97 pruub=14.991156578s) [2] r=-1 lpr=97 pi=[75,97)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 260.950653076s@ mbc={}] exit Reset 0.000146 1 0.000333
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 97 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=95/96 n=5 ec=53/34 lis/c=95/75 les/c/f=96/76/0 sis=97 pruub=14.991156578s) [2] r=-1 lpr=97 pi=[75,97)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 260.950653076s@ mbc={}] enter Started
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 97 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=95/96 n=5 ec=53/34 lis/c=95/75 les/c/f=96/76/0 sis=97 pruub=14.991156578s) [2] r=-1 lpr=97 pi=[75,97)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 260.950653076s@ mbc={}] enter Start
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 97 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=95/96 n=5 ec=53/34 lis/c=95/75 les/c/f=96/76/0 sis=97 pruub=14.991156578s) [2] r=-1 lpr=97 pi=[75,97)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 260.950653076s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 97 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=95/96 n=5 ec=53/34 lis/c=95/75 les/c/f=96/76/0 sis=97 pruub=14.991156578s) [2] r=-1 lpr=97 pi=[75,97)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 260.950653076s@ mbc={}] exit Start 0.000048 0 0.000000
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 97 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=95/96 n=5 ec=53/34 lis/c=95/75 les/c/f=96/76/0 sis=97 pruub=14.991156578s) [2] r=-1 lpr=97 pi=[75,97)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 260.950653076s@ mbc={}] enter Started/Stray
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 97 handle_osd_map epochs [97,97], i have 97, src has [1,97]
Oct  9 10:05:05 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 3.5 scrub starts
Oct  9 10:05:05 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 3.5 scrub ok
Oct  9 10:05:05 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 80904192 unmapped: 3989504 heap: 84893696 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 97 handle_osd_map epochs [97,98], i have 97, src has [1,98]
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 98 pg[10.f( v 40'1059 (0'0,40'1059] local-lis/les=95/96 n=6 ec=53/34 lis/c=95/75 les/c/f=96/76/0 sis=97) [2] r=-1 lpr=97 pi=[75,97)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.009114 7 0.000607
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 98 pg[10.f( v 40'1059 (0'0,40'1059] local-lis/les=95/96 n=6 ec=53/34 lis/c=95/75 les/c/f=96/76/0 sis=97) [2] r=-1 lpr=97 pi=[75,97)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 98 pg[10.f( v 40'1059 (0'0,40'1059] local-lis/les=95/96 n=6 ec=53/34 lis/c=95/75 les/c/f=96/76/0 sis=97) [2] r=-1 lpr=97 pi=[75,97)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 98 pg[10.f( v 40'1059 (0'0,40'1059] local-lis/les=95/96 n=6 ec=53/34 lis/c=95/75 les/c/f=96/76/0 sis=97) [2] r=-1 lpr=97 pi=[75,97)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000059 1 0.000099
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 98 pg[10.f( v 40'1059 (0'0,40'1059] local-lis/les=95/96 n=6 ec=53/34 lis/c=95/75 les/c/f=96/76/0 sis=97) [2] r=-1 lpr=97 pi=[75,97)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 98 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=95/96 n=5 ec=53/34 lis/c=95/75 les/c/f=96/76/0 sis=97) [2] r=-1 lpr=97 pi=[75,97)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.009441 7 0.000567
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 98 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=95/96 n=5 ec=53/34 lis/c=95/75 les/c/f=96/76/0 sis=97) [2] r=-1 lpr=97 pi=[75,97)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 98 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=95/96 n=5 ec=53/34 lis/c=95/75 les/c/f=96/76/0 sis=97) [2] r=-1 lpr=97 pi=[75,97)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 98 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=95/96 n=5 ec=53/34 lis/c=95/75 les/c/f=96/76/0 sis=97) [2] r=-1 lpr=97 pi=[75,97)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000045 1 0.000065
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 98 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=95/96 n=5 ec=53/34 lis/c=95/75 les/c/f=96/76/0 sis=97) [2] r=-1 lpr=97 pi=[75,97)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 98 pg[10.f( v 40'1059 (0'0,40'1059] lb MIN local-lis/les=95/96 n=6 ec=53/34 lis/c=95/75 les/c/f=96/76/0 sis=97) [2] r=-1 lpr=97 DELETING pi=[75,97)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.060747 2 0.000135
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 98 pg[10.f( v 40'1059 (0'0,40'1059] lb MIN local-lis/les=95/96 n=6 ec=53/34 lis/c=95/75 les/c/f=96/76/0 sis=97) [2] r=-1 lpr=97 pi=[75,97)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.060862 0 0.000000
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 98 pg[10.f( v 40'1059 (0'0,40'1059] lb MIN local-lis/les=95/96 n=6 ec=53/34 lis/c=95/75 les/c/f=96/76/0 sis=97) [2] r=-1 lpr=97 pi=[75,97)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.070389 0 0.000000
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 98 pg[10.1f( v 40'1059 (0'0,40'1059] lb MIN local-lis/les=95/96 n=5 ec=53/34 lis/c=95/75 les/c/f=96/76/0 sis=97) [2] r=-1 lpr=97 DELETING pi=[75,97)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.097205 2 0.000120
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 98 pg[10.1f( v 40'1059 (0'0,40'1059] lb MIN local-lis/les=95/96 n=5 ec=53/34 lis/c=95/75 les/c/f=96/76/0 sis=97) [2] r=-1 lpr=97 pi=[75,97)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.097316 0 0.000000
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 98 pg[10.1f( v 40'1059 (0'0,40'1059] lb MIN local-lis/les=95/96 n=5 ec=53/34 lis/c=95/75 les/c/f=96/76/0 sis=97) [2] r=-1 lpr=97 pi=[75,97)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.106874 0 0.000000
Oct  9 10:05:05 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 3.d scrub starts
Oct  9 10:05:05 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 3.d scrub ok
Oct  9 10:05:05 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 80945152 unmapped: 3948544 heap: 84893696 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 98 heartbeat osd_stat(store_statfs(0x4fca72000/0x0/0x4ffc00000, data 0x11c141/0x1a9000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x2fdf9c1), peers [1,2] op hist [])
Oct  9 10:05:05 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 11.7 scrub starts
Oct  9 10:05:05 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 11.7 scrub ok
Oct  9 10:05:05 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 80945152 unmapped: 3948544 heap: 84893696 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 98 heartbeat osd_stat(store_statfs(0x4fca72000/0x0/0x4ffc00000, data 0x11c141/0x1a9000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x2fdf9c1), peers [1,2] op hist [])
Oct  9 10:05:05 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 11.4 scrub starts
Oct  9 10:05:05 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 11.4 scrub ok
Oct  9 10:05:05 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 80863232 unmapped: 4030464 heap: 84893696 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:05 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 5.9 scrub starts
Oct  9 10:05:05 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 5.9 scrub ok
Oct  9 10:05:05 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:05 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:05 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 80863232 unmapped: 4030464 heap: 84893696 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:05 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 824002 data_alloc: 218103808 data_used: 282624
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 98 handle_osd_map epochs [98,99], i have 98, src has [1,99]
Oct  9 10:05:05 compute-1 ceph-osd[7514]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.433236122s of 10.488536835s, submitted: 65
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 99 pg[10.10( v 40'1059 (0'0,40'1059] local-lis/les=53/55 n=2 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=53) [0] r=0 lpr=53 crt=40'1059 lcod 0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 61.435893 137 0.001785
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 99 pg[10.10( v 40'1059 (0'0,40'1059] local-lis/les=53/55 n=2 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=53) [0] r=0 lpr=53 crt=40'1059 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 61.440356 0 0.000000
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 99 pg[10.10( v 40'1059 (0'0,40'1059] local-lis/les=53/55 n=2 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=53) [0] r=0 lpr=53 crt=40'1059 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary 62.443238 0 0.000000
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 99 pg[10.10( v 40'1059 (0'0,40'1059] local-lis/les=53/55 n=2 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=53) [0] r=0 lpr=53 crt=40'1059 lcod 0'0 mlcod 0'0 active mbc={}] exit Started 62.443279 0 0.000000
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 99 pg[10.10( v 40'1059 (0'0,40'1059] local-lis/les=53/55 n=2 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=53) [0] r=0 lpr=53 crt=40'1059 lcod 0'0 mlcod 0'0 active mbc={}] enter Reset
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 99 pg[10.10( v 40'1059 (0'0,40'1059] local-lis/les=53/55 n=2 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=99 pruub=10.564367294s) [2] r=-1 lpr=99 pi=[53,99)/1 crt=40'1059 lcod 0'0 mlcod 0'0 active pruub 261.555847168s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 99 pg[10.10( v 40'1059 (0'0,40'1059] local-lis/les=53/55 n=2 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=99 pruub=10.564089775s) [2] r=-1 lpr=99 pi=[53,99)/1 crt=40'1059 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 261.555847168s@ mbc={}] exit Reset 0.000320 1 0.000596
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 99 pg[10.10( v 40'1059 (0'0,40'1059] local-lis/les=53/55 n=2 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=99 pruub=10.564089775s) [2] r=-1 lpr=99 pi=[53,99)/1 crt=40'1059 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 261.555847168s@ mbc={}] enter Started
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 99 pg[10.10( v 40'1059 (0'0,40'1059] local-lis/les=53/55 n=2 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=99 pruub=10.564089775s) [2] r=-1 lpr=99 pi=[53,99)/1 crt=40'1059 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 261.555847168s@ mbc={}] enter Start
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 99 pg[10.10( v 40'1059 (0'0,40'1059] local-lis/les=53/55 n=2 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=99 pruub=10.564089775s) [2] r=-1 lpr=99 pi=[53,99)/1 crt=40'1059 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 261.555847168s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 99 pg[10.10( v 40'1059 (0'0,40'1059] local-lis/les=53/55 n=2 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=99 pruub=10.564089775s) [2] r=-1 lpr=99 pi=[53,99)/1 crt=40'1059 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 261.555847168s@ mbc={}] exit Start 0.000127 0 0.000000
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 99 pg[10.10( v 40'1059 (0'0,40'1059] local-lis/les=53/55 n=2 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=99 pruub=10.564089775s) [2] r=-1 lpr=99 pi=[53,99)/1 crt=40'1059 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 261.555847168s@ mbc={}] enter Started/Stray
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 99 handle_osd_map epochs [99,99], i have 99, src has [1,99]
Oct  9 10:05:05 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 11.1 scrub starts
Oct  9 10:05:05 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 11.1 scrub ok
Oct  9 10:05:05 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 80871424 unmapped: 4022272 heap: 84893696 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 99 handle_osd_map epochs [100,100], i have 99, src has [1,100]
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 100 pg[10.10( v 40'1059 (0'0,40'1059] local-lis/les=53/55 n=2 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=99) [2] r=-1 lpr=99 pi=[53,99)/1 crt=40'1059 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.023387 3 0.000577
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 100 pg[10.10( v 40'1059 (0'0,40'1059] local-lis/les=53/55 n=2 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=99) [2] r=-1 lpr=99 pi=[53,99)/1 crt=40'1059 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.023914 0 0.000000
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 100 pg[10.10( v 40'1059 (0'0,40'1059] local-lis/les=53/55 n=2 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=99) [2] r=-1 lpr=99 pi=[53,99)/1 crt=40'1059 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Reset
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 100 pg[10.10( v 40'1059 (0'0,40'1059] local-lis/les=53/55 n=2 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=100) [2]/[0] r=0 lpr=100 pi=[53,100)/1 crt=40'1059 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 100 pg[10.10( v 40'1059 (0'0,40'1059] local-lis/les=53/55 n=2 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=100) [2]/[0] r=0 lpr=100 pi=[53,100)/1 crt=40'1059 lcod 0'0 mlcod 0'0 remapped mbc={}] exit Reset 0.000043 1 0.000068
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 100 pg[10.10( v 40'1059 (0'0,40'1059] local-lis/les=53/55 n=2 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=100) [2]/[0] r=0 lpr=100 pi=[53,100)/1 crt=40'1059 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 100 pg[10.10( v 40'1059 (0'0,40'1059] local-lis/les=53/55 n=2 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=100) [2]/[0] r=0 lpr=100 pi=[53,100)/1 crt=40'1059 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Start
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 100 pg[10.10( v 40'1059 (0'0,40'1059] local-lis/les=53/55 n=2 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=100) [2]/[0] r=0 lpr=100 pi=[53,100)/1 crt=40'1059 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 100 pg[10.10( v 40'1059 (0'0,40'1059] local-lis/les=53/55 n=2 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=100) [2]/[0] r=0 lpr=100 pi=[53,100)/1 crt=40'1059 lcod 0'0 mlcod 0'0 remapped mbc={}] exit Start 0.000005 0 0.000000
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 100 pg[10.10( v 40'1059 (0'0,40'1059] local-lis/les=53/55 n=2 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=100) [2]/[0] r=0 lpr=100 pi=[53,100)/1 crt=40'1059 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started/Primary
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 100 pg[10.10( v 40'1059 (0'0,40'1059] local-lis/les=53/55 n=2 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=100) [2]/[0] r=0 lpr=100 pi=[53,100)/1 crt=40'1059 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started/Primary/Peering
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 100 pg[10.10( v 40'1059 (0'0,40'1059] local-lis/les=53/55 n=2 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=100) [2]/[0] r=0 lpr=100 pi=[53,100)/1 crt=40'1059 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetInfo
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 100 pg[10.10( v 40'1059 (0'0,40'1059] local-lis/les=53/55 n=2 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=100) [2]/[0] r=0 lpr=100 pi=[53,100)/1 crt=40'1059 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000027 1 0.000032
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 100 pg[10.10( v 40'1059 (0'0,40'1059] local-lis/les=53/55 n=2 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=100) [2]/[0] r=0 lpr=100 pi=[53,100)/1 crt=40'1059 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetLog
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 100 pg[10.10( v 40'1059 (0'0,40'1059] local-lis/les=53/55 n=2 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=100) [2]/[0] async=[2] r=0 lpr=100 pi=[53,100)/1 crt=40'1059 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetLog 0.000019 0 0.000000
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 100 pg[10.10( v 40'1059 (0'0,40'1059] local-lis/les=53/55 n=2 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=100) [2]/[0] async=[2] r=0 lpr=100 pi=[53,100)/1 crt=40'1059 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetMissing
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 100 pg[10.10( v 40'1059 (0'0,40'1059] local-lis/les=53/55 n=2 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=100) [2]/[0] async=[2] r=0 lpr=100 pi=[53,100)/1 crt=40'1059 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000004 0 0.000000
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 100 pg[10.10( v 40'1059 (0'0,40'1059] local-lis/les=53/55 n=2 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=100) [2]/[0] async=[2] r=0 lpr=100 pi=[53,100)/1 crt=40'1059 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Oct  9 10:05:05 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 3.3 scrub starts
Oct  9 10:05:05 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 3.3 scrub ok
Oct  9 10:05:05 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 80896000 unmapped: 3997696 heap: 84893696 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 100 handle_osd_map epochs [100,101], i have 100, src has [1,101]
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 101 pg[10.10( v 40'1059 (0'0,40'1059] local-lis/les=53/55 n=2 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=100) [2]/[0] async=[2] r=0 lpr=100 pi=[53,100)/1 crt=40'1059 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.012636 4 0.000046
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 101 pg[10.10( v 40'1059 (0'0,40'1059] local-lis/les=53/55 n=2 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=100) [2]/[0] async=[2] r=0 lpr=100 pi=[53,100)/1 crt=40'1059 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering 1.012719 0 0.000000
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 101 pg[10.10( v 40'1059 (0'0,40'1059] local-lis/les=53/55 n=2 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=100) [2]/[0] async=[2] r=0 lpr=100 pi=[53,100)/1 crt=40'1059 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started/Primary/Active
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 101 pg[10.10( v 40'1059 (0'0,40'1059] local-lis/les=100/101 n=2 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=100) [2]/[0] async=[2] r=0 lpr=100 pi=[53,100)/1 crt=40'1059 lcod 0'0 mlcod 0'0 activating+remapped mbc={255={(0+1)=2}}] enter Started/Primary/Active/Activating
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 101 handle_osd_map epochs [100,101], i have 101, src has [1,101]
Oct  9 10:05:05 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 9.e scrub starts
Oct  9 10:05:05 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 9.e scrub ok
Oct  9 10:05:05 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 80912384 unmapped: 3981312 heap: 84893696 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 101 heartbeat osd_stat(store_statfs(0x4fca69000/0x0/0x4ffc00000, data 0x12230e/0x1b2000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x2fdf9c1), peers [1,2] op hist [])
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 101 pg[10.10( v 40'1059 (0'0,40'1059] local-lis/les=100/101 n=2 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=100) [2]/[0] async=[2] r=0 lpr=100 pi=[53,100)/1 crt=40'1059 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 101 pg[10.10( v 40'1059 (0'0,40'1059] local-lis/les=100/101 n=2 ec=53/34 lis/c=100/53 les/c/f=101/55/0 sis=100) [2]/[0] async=[2] r=0 lpr=100 pi=[53,100)/1 crt=40'1059 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=2}}] exit Started/Primary/Active/Activating 0.612325 5 0.000236
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 101 pg[10.10( v 40'1059 (0'0,40'1059] local-lis/les=100/101 n=2 ec=53/34 lis/c=100/53 les/c/f=101/55/0 sis=100) [2]/[0] async=[2] r=0 lpr=100 pi=[53,100)/1 crt=40'1059 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=2}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 101 pg[10.10( v 40'1059 (0'0,40'1059] local-lis/les=100/101 n=2 ec=53/34 lis/c=100/53 les/c/f=101/55/0 sis=100) [2]/[0] async=[2] r=0 lpr=100 pi=[53,100)/1 crt=40'1059 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=2}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 0.000090 1 0.000088
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 101 pg[10.10( v 40'1059 (0'0,40'1059] local-lis/les=100/101 n=2 ec=53/34 lis/c=100/53 les/c/f=101/55/0 sis=100) [2]/[0] async=[2] r=0 lpr=100 pi=[53,100)/1 crt=40'1059 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=2}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 101 pg[10.10( v 40'1059 (0'0,40'1059] local-lis/les=100/101 n=2 ec=53/34 lis/c=100/53 les/c/f=101/55/0 sis=100) [2]/[0] async=[2] r=0 lpr=100 pi=[53,100)/1 crt=40'1059 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=2}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.000535 1 0.000054
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 101 pg[10.10( v 40'1059 (0'0,40'1059] local-lis/les=100/101 n=2 ec=53/34 lis/c=100/53 les/c/f=101/55/0 sis=100) [2]/[0] async=[2] r=0 lpr=100 pi=[53,100)/1 crt=40'1059 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=2}}] enter Started/Primary/Active/Recovering
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 101 pg[10.10( v 40'1059 (0'0,40'1059] local-lis/les=100/101 n=2 ec=53/34 lis/c=100/53 les/c/f=101/55/0 sis=100) [2]/[0] async=[2] r=0 lpr=100 pi=[53,100)/1 crt=40'1059 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovering 0.014216 2 0.000036
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 101 pg[10.10( v 40'1059 (0'0,40'1059] local-lis/les=100/101 n=2 ec=53/34 lis/c=100/53 les/c/f=101/55/0 sis=100) [2]/[0] async=[2] r=0 lpr=100 pi=[53,100)/1 crt=40'1059 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] enter Started/Primary/Active/Recovered
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 101 handle_osd_map epochs [102,102], i have 101, src has [1,102]
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 102 pg[10.10( v 40'1059 (0'0,40'1059] local-lis/les=100/101 n=2 ec=53/34 lis/c=100/53 les/c/f=101/55/0 sis=100) [2]/[0] async=[2] r=0 lpr=100 pi=[53,100)/1 crt=40'1059 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovered 0.378852 1 0.000045
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 102 pg[10.10( v 40'1059 (0'0,40'1059] local-lis/les=100/101 n=2 ec=53/34 lis/c=100/53 les/c/f=101/55/0 sis=100) [2]/[0] async=[2] r=0 lpr=100 pi=[53,100)/1 crt=40'1059 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary/Active 1.006177 0 0.000000
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 102 pg[10.10( v 40'1059 (0'0,40'1059] local-lis/les=100/101 n=2 ec=53/34 lis/c=100/53 les/c/f=101/55/0 sis=100) [2]/[0] async=[2] r=0 lpr=100 pi=[53,100)/1 crt=40'1059 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary 2.018921 0 0.000000
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 102 pg[10.10( v 40'1059 (0'0,40'1059] local-lis/les=100/101 n=2 ec=53/34 lis/c=100/53 les/c/f=101/55/0 sis=100) [2]/[0] async=[2] r=0 lpr=100 pi=[53,100)/1 crt=40'1059 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started 2.018951 0 0.000000
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 102 pg[10.10( v 40'1059 (0'0,40'1059] local-lis/les=100/101 n=2 ec=53/34 lis/c=100/53 les/c/f=101/55/0 sis=100) [2]/[0] async=[2] r=0 lpr=100 pi=[53,100)/1 crt=40'1059 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] enter Reset
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 102 pg[10.10( v 40'1059 (0'0,40'1059] local-lis/les=100/101 n=2 ec=53/34 lis/c=100/53 les/c/f=101/55/0 sis=102 pruub=15.605948448s) [2] async=[2] r=-1 lpr=102 pi=[53,102)/1 crt=40'1059 lcod 0'0 mlcod 0'0 active pruub 269.640716553s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 102 pg[10.10( v 40'1059 (0'0,40'1059] local-lis/les=100/101 n=2 ec=53/34 lis/c=100/53 les/c/f=101/55/0 sis=102 pruub=15.605783463s) [2] r=-1 lpr=102 pi=[53,102)/1 crt=40'1059 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 269.640716553s@ mbc={}] exit Reset 0.000212 1 0.000290
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 102 pg[10.10( v 40'1059 (0'0,40'1059] local-lis/les=100/101 n=2 ec=53/34 lis/c=100/53 les/c/f=101/55/0 sis=102 pruub=15.605783463s) [2] r=-1 lpr=102 pi=[53,102)/1 crt=40'1059 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 269.640716553s@ mbc={}] enter Started
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 102 pg[10.10( v 40'1059 (0'0,40'1059] local-lis/les=100/101 n=2 ec=53/34 lis/c=100/53 les/c/f=101/55/0 sis=102 pruub=15.605783463s) [2] r=-1 lpr=102 pi=[53,102)/1 crt=40'1059 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 269.640716553s@ mbc={}] enter Start
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 102 pg[10.10( v 40'1059 (0'0,40'1059] local-lis/les=100/101 n=2 ec=53/34 lis/c=100/53 les/c/f=101/55/0 sis=102 pruub=15.605783463s) [2] r=-1 lpr=102 pi=[53,102)/1 crt=40'1059 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 269.640716553s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 102 pg[10.10( v 40'1059 (0'0,40'1059] local-lis/les=100/101 n=2 ec=53/34 lis/c=100/53 les/c/f=101/55/0 sis=102 pruub=15.605783463s) [2] r=-1 lpr=102 pi=[53,102)/1 crt=40'1059 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 269.640716553s@ mbc={}] exit Start 0.000104 0 0.000000
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 102 pg[10.10( v 40'1059 (0'0,40'1059] local-lis/les=100/101 n=2 ec=53/34 lis/c=100/53 les/c/f=101/55/0 sis=102 pruub=15.605783463s) [2] r=-1 lpr=102 pi=[53,102)/1 crt=40'1059 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 269.640716553s@ mbc={}] enter Started/Stray
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 102 handle_osd_map epochs [102,102], i have 102, src has [1,102]
Oct  9 10:05:05 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 5.2 scrub starts
Oct  9 10:05:05 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 5.2 scrub ok
Oct  9 10:05:05 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 80928768 unmapped: 3964928 heap: 84893696 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 102 heartbeat osd_stat(store_statfs(0x4fca65000/0x0/0x4ffc00000, data 0x1242c5/0x1b5000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x2fdf9c1), peers [1,2] op hist [])
Oct  9 10:05:05 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 11.f scrub starts
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 102 handle_osd_map epochs [103,103], i have 102, src has [1,103]
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 103 pg[10.10( v 40'1059 (0'0,40'1059] local-lis/les=100/101 n=2 ec=53/34 lis/c=100/53 les/c/f=101/55/0 sis=102) [2] r=-1 lpr=102 pi=[53,102)/1 crt=40'1059 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.338445 6 0.000245
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 103 pg[10.10( v 40'1059 (0'0,40'1059] local-lis/les=100/101 n=2 ec=53/34 lis/c=100/53 les/c/f=101/55/0 sis=102) [2] r=-1 lpr=102 pi=[53,102)/1 crt=40'1059 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 103 pg[10.10( v 40'1059 (0'0,40'1059] local-lis/les=100/101 n=2 ec=53/34 lis/c=100/53 les/c/f=101/55/0 sis=102) [2] r=-1 lpr=102 pi=[53,102)/1 crt=40'1059 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 103 pg[10.10( v 40'1059 (0'0,40'1059] local-lis/les=100/101 n=2 ec=53/34 lis/c=100/53 les/c/f=101/55/0 sis=102) [2] r=-1 lpr=102 pi=[53,102)/1 crt=40'1059 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.001112 2 0.000055
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 103 pg[10.10( v 40'1059 (0'0,40'1059] local-lis/les=100/101 n=2 ec=53/34 lis/c=100/53 les/c/f=101/55/0 sis=102) [2] r=-1 lpr=102 pi=[53,102)/1 crt=40'1059 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Oct  9 10:05:05 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 11.f scrub ok
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 103 pg[10.10( v 40'1059 (0'0,40'1059] lb MIN local-lis/les=100/101 n=2 ec=53/34 lis/c=100/53 les/c/f=101/55/0 sis=102) [2] r=-1 lpr=102 DELETING pi=[53,102)/1 crt=40'1059 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.029484 2 0.000098
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 103 pg[10.10( v 40'1059 (0'0,40'1059] lb MIN local-lis/les=100/101 n=2 ec=53/34 lis/c=100/53 les/c/f=101/55/0 sis=102) [2] r=-1 lpr=102 pi=[53,102)/1 crt=40'1059 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.030635 0 0.000000
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 103 pg[10.10( v 40'1059 (0'0,40'1059] lb MIN local-lis/les=100/101 n=2 ec=53/34 lis/c=100/53 les/c/f=101/55/0 sis=102) [2] r=-1 lpr=102 pi=[53,102)/1 crt=40'1059 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.369253 0 0.000000
Oct  9 10:05:05 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:05 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:05 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 80936960 unmapped: 3956736 heap: 84893696 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:05 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 844352 data_alloc: 218103808 data_used: 282624
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 103 heartbeat osd_stat(store_statfs(0x4fca65000/0x0/0x4ffc00000, data 0x1242c5/0x1b5000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x2fdf9c1), peers [1,2] op hist [])
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 103 handle_osd_map epochs [104,104], i have 103, src has [1,104]
Oct  9 10:05:05 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 5.7 deep-scrub starts
Oct  9 10:05:05 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 5.7 deep-scrub ok
Oct  9 10:05:05 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 80936960 unmapped: 5005312 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 104 handle_osd_map epochs [104,105], i have 104, src has [1,105]
Oct  9 10:05:05 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 5.16 scrub starts
Oct  9 10:05:05 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 5.16 scrub ok
Oct  9 10:05:05 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 80945152 unmapped: 4997120 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 105 handle_osd_map epochs [105,106], i have 105, src has [1,106]
Oct  9 10:05:05 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 8.8 scrub starts
Oct  9 10:05:05 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 8.8 scrub ok
Oct  9 10:05:05 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 80961536 unmapped: 4980736 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 106 heartbeat osd_stat(store_statfs(0x4fca58000/0x0/0x4ffc00000, data 0x12c483/0x1c1000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x2fdf9c1), peers [1,2] op hist [])
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 106 handle_osd_map epochs [107,107], i have 106, src has [1,107]
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 106 handle_osd_map epochs [107,107], i have 107, src has [1,107]
Oct  9 10:05:05 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 8.4 deep-scrub starts
Oct  9 10:05:05 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 8.4 deep-scrub ok
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 107 handle_osd_map epochs [107,108], i have 107, src has [1,108]
Oct  9 10:05:05 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 80969728 unmapped: 4972544 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:05 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 11.1d scrub starts
Oct  9 10:05:05 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 11.1d scrub ok
Oct  9 10:05:05 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:05 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:05 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 80977920 unmapped: 4964352 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:05 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 863956 data_alloc: 218103808 data_used: 282624
Oct  9 10:05:05 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 3.10 scrub starts
Oct  9 10:05:05 compute-1 ceph-osd[7514]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.360844612s of 10.417983055s, submitted: 97
Oct  9 10:05:05 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 3.10 scrub ok
Oct  9 10:05:05 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 80986112 unmapped: 4956160 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 108 heartbeat osd_stat(store_statfs(0x4fca53000/0x0/0x4ffc00000, data 0x130581/0x1c7000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x2fdf9c1), peers [1,2] op hist [])
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 108 handle_osd_map epochs [109,110], i have 108, src has [1,110]
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 108 handle_osd_map epochs [109,110], i have 110, src has [1,110]
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 110 heartbeat osd_stat(store_statfs(0x4fca53000/0x0/0x4ffc00000, data 0x130581/0x1c7000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x2fdf9c1), peers [1,2] op hist [])
Oct  9 10:05:05 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 5.f deep-scrub starts
Oct  9 10:05:05 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 5.f deep-scrub ok
Oct  9 10:05:05 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81059840 unmapped: 4882432 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 110 handle_osd_map epochs [111,111], i have 110, src has [1,111]
Oct  9 10:05:05 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 5.10 scrub starts
Oct  9 10:05:05 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 5.10 scrub ok
Oct  9 10:05:05 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81059840 unmapped: 4882432 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 111 heartbeat osd_stat(store_statfs(0x4fca4a000/0x0/0x4ffc00000, data 0x13649d/0x1d0000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x2fdf9c1), peers [1,2] op hist [])
Oct  9 10:05:05 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 4.e scrub starts
Oct  9 10:05:05 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 4.e scrub ok
Oct  9 10:05:05 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81068032 unmapped: 4874240 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 111 heartbeat osd_stat(store_statfs(0x4fca4a000/0x0/0x4ffc00000, data 0x13649d/0x1d0000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x2fdf9c1), peers [1,2] op hist [])
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 111 ms_handle_reset con 0x560c9c8a0800 session 0x560c9b5625a0
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 111 ms_handle_reset con 0x560c9c8a0000 session 0x560c9dae41e0
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 111 heartbeat osd_stat(store_statfs(0x4fca4a000/0x0/0x4ffc00000, data 0x13649d/0x1d0000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x2fdf9c1), peers [1,2] op hist [])
Oct  9 10:05:05 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 4.5 scrub starts
Oct  9 10:05:05 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 4.5 scrub ok
Oct  9 10:05:05 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:05 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:05 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81117184 unmapped: 4825088 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:05 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 876276 data_alloc: 218103808 data_used: 282624
Oct  9 10:05:05 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 9.11 scrub starts
Oct  9 10:05:05 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 9.11 scrub ok
Oct  9 10:05:05 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81117184 unmapped: 4825088 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:05 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 5.11 scrub starts
Oct  9 10:05:05 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 5.11 scrub ok
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 111 handle_osd_map epochs [112,113], i have 111, src has [1,113]
Oct  9 10:05:05 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81166336 unmapped: 4775936 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:05 compute-1 ceph-osd[7514]: osd.0 113 heartbeat osd_stat(store_statfs(0x4fca4c000/0x0/0x4ffc00000, data 0x13649d/0x1d0000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x2fdf9c1), peers [1,2] op hist [])
Oct  9 10:05:06 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 11.1e deep-scrub starts
Oct  9 10:05:06 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 11.1e deep-scrub ok
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81166336 unmapped: 4775936 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 113 heartbeat osd_stat(store_statfs(0x4fca45000/0x0/0x4ffc00000, data 0x13a675/0x1d6000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x2fdf9c1), peers [1,2] op hist [])
Oct  9 10:05:06 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 9.12 scrub starts
Oct  9 10:05:06 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 9.12 scrub ok
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81182720 unmapped: 4759552 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 8.12 deep-scrub starts
Oct  9 10:05:06 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 8.12 deep-scrub ok
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81190912 unmapped: 4751360 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 887737 data_alloc: 218103808 data_used: 282624
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 113 handle_osd_map epochs [114,114], i have 113, src has [1,114]
Oct  9 10:05:06 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 11.14 scrub starts
Oct  9 10:05:06 compute-1 ceph-osd[7514]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.160198212s of 10.195916176s, submitted: 38
Oct  9 10:05:06 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 11.14 scrub ok
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81199104 unmapped: 4743168 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 8.18 scrub starts
Oct  9 10:05:06 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 8.18 scrub ok
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81207296 unmapped: 4734976 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 9.6 scrub starts
Oct  9 10:05:06 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 9.6 scrub ok
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 114 handle_osd_map epochs [115,116], i have 114, src has [1,116]
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 116 pg[10.19( v 40'1059 (0'0,40'1059] local-lis/les=79/80 n=5 ec=53/34 lis/c=79/79 les/c/f=80/80/0 sis=79) [0] r=0 lpr=79 crt=40'1059 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 51.111732 107 0.000551
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 116 pg[10.19( v 40'1059 (0'0,40'1059] local-lis/les=79/80 n=5 ec=53/34 lis/c=79/79 les/c/f=80/80/0 sis=79) [0] r=0 lpr=79 crt=40'1059 mlcod 0'0 active mbc={}] exit Started/Primary/Active 51.112677 0 0.000000
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 116 pg[10.19( v 40'1059 (0'0,40'1059] local-lis/les=79/80 n=5 ec=53/34 lis/c=79/79 les/c/f=80/80/0 sis=79) [0] r=0 lpr=79 crt=40'1059 mlcod 0'0 active mbc={}] exit Started/Primary 52.116691 0 0.000000
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 116 pg[10.19( v 40'1059 (0'0,40'1059] local-lis/les=79/80 n=5 ec=53/34 lis/c=79/79 les/c/f=80/80/0 sis=79) [0] r=0 lpr=79 crt=40'1059 mlcod 0'0 active mbc={}] exit Started 52.116835 0 0.000000
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 116 pg[10.19( v 40'1059 (0'0,40'1059] local-lis/les=79/80 n=5 ec=53/34 lis/c=79/79 les/c/f=80/80/0 sis=79) [0] r=0 lpr=79 crt=40'1059 mlcod 0'0 active mbc={}] enter Reset
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 116 pg[10.19( v 40'1059 (0'0,40'1059] local-lis/les=79/80 n=5 ec=53/34 lis/c=79/79 les/c/f=80/80/0 sis=116 pruub=12.888633728s) [1] r=-1 lpr=116 pi=[79,116)/1 crt=40'1059 mlcod 0'0 active pruub 286.476196289s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 116 pg[10.19( v 40'1059 (0'0,40'1059] local-lis/les=79/80 n=5 ec=53/34 lis/c=79/79 les/c/f=80/80/0 sis=116 pruub=12.888607979s) [1] r=-1 lpr=116 pi=[79,116)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 286.476196289s@ mbc={}] exit Reset 0.000051 1 0.000093
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 116 pg[10.19( v 40'1059 (0'0,40'1059] local-lis/les=79/80 n=5 ec=53/34 lis/c=79/79 les/c/f=80/80/0 sis=116 pruub=12.888607979s) [1] r=-1 lpr=116 pi=[79,116)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 286.476196289s@ mbc={}] enter Started
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 116 pg[10.19( v 40'1059 (0'0,40'1059] local-lis/les=79/80 n=5 ec=53/34 lis/c=79/79 les/c/f=80/80/0 sis=116 pruub=12.888607979s) [1] r=-1 lpr=116 pi=[79,116)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 286.476196289s@ mbc={}] enter Start
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 116 pg[10.19( v 40'1059 (0'0,40'1059] local-lis/les=79/80 n=5 ec=53/34 lis/c=79/79 les/c/f=80/80/0 sis=116 pruub=12.888607979s) [1] r=-1 lpr=116 pi=[79,116)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 286.476196289s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 116 pg[10.19( v 40'1059 (0'0,40'1059] local-lis/les=79/80 n=5 ec=53/34 lis/c=79/79 les/c/f=80/80/0 sis=116 pruub=12.888607979s) [1] r=-1 lpr=116 pi=[79,116)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 286.476196289s@ mbc={}] exit Start 0.000006 0 0.000000
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 116 pg[10.19( v 40'1059 (0'0,40'1059] local-lis/les=79/80 n=5 ec=53/34 lis/c=79/79 les/c/f=80/80/0 sis=116 pruub=12.888607979s) [1] r=-1 lpr=116 pi=[79,116)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 286.476196289s@ mbc={}] enter Started/Stray
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 116 handle_osd_map epochs [115,116], i have 116, src has [1,116]
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81223680 unmapped: 4718592 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 116 heartbeat osd_stat(store_statfs(0x4fc632000/0x0/0x4ffc00000, data 0x13c761/0x1d9000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 116 handle_osd_map epochs [116,117], i have 116, src has [1,117]
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 117 pg[10.19( v 40'1059 (0'0,40'1059] local-lis/les=79/80 n=5 ec=53/34 lis/c=79/79 les/c/f=80/80/0 sis=116) [1] r=-1 lpr=116 pi=[79,116)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 0.885412 3 0.000032
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 117 pg[10.19( v 40'1059 (0'0,40'1059] local-lis/les=79/80 n=5 ec=53/34 lis/c=79/79 les/c/f=80/80/0 sis=116) [1] r=-1 lpr=116 pi=[79,116)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 0.885452 0 0.000000
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 117 pg[10.19( v 40'1059 (0'0,40'1059] local-lis/les=79/80 n=5 ec=53/34 lis/c=79/79 les/c/f=80/80/0 sis=116) [1] r=-1 lpr=116 pi=[79,116)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY mbc={}] enter Reset
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 117 pg[10.19( v 40'1059 (0'0,40'1059] local-lis/les=79/80 n=5 ec=53/34 lis/c=79/79 les/c/f=80/80/0 sis=117) [1]/[0] r=0 lpr=117 pi=[79,117)/1 crt=40'1059 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 117 pg[10.19( v 40'1059 (0'0,40'1059] local-lis/les=79/80 n=5 ec=53/34 lis/c=79/79 les/c/f=80/80/0 sis=117) [1]/[0] r=0 lpr=117 pi=[79,117)/1 crt=40'1059 mlcod 0'0 remapped mbc={}] exit Reset 0.000058 1 0.000091
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 117 pg[10.19( v 40'1059 (0'0,40'1059] local-lis/les=79/80 n=5 ec=53/34 lis/c=79/79 les/c/f=80/80/0 sis=117) [1]/[0] r=0 lpr=117 pi=[79,117)/1 crt=40'1059 mlcod 0'0 remapped mbc={}] enter Started
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 117 pg[10.19( v 40'1059 (0'0,40'1059] local-lis/les=79/80 n=5 ec=53/34 lis/c=79/79 les/c/f=80/80/0 sis=117) [1]/[0] r=0 lpr=117 pi=[79,117)/1 crt=40'1059 mlcod 0'0 remapped mbc={}] enter Start
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 117 pg[10.19( v 40'1059 (0'0,40'1059] local-lis/les=79/80 n=5 ec=53/34 lis/c=79/79 les/c/f=80/80/0 sis=117) [1]/[0] r=0 lpr=117 pi=[79,117)/1 crt=40'1059 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 117 pg[10.19( v 40'1059 (0'0,40'1059] local-lis/les=79/80 n=5 ec=53/34 lis/c=79/79 les/c/f=80/80/0 sis=117) [1]/[0] r=0 lpr=117 pi=[79,117)/1 crt=40'1059 mlcod 0'0 remapped mbc={}] exit Start 0.000005 0 0.000000
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 117 pg[10.19( v 40'1059 (0'0,40'1059] local-lis/les=79/80 n=5 ec=53/34 lis/c=79/79 les/c/f=80/80/0 sis=117) [1]/[0] r=0 lpr=117 pi=[79,117)/1 crt=40'1059 mlcod 0'0 remapped mbc={}] enter Started/Primary
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 117 pg[10.19( v 40'1059 (0'0,40'1059] local-lis/les=79/80 n=5 ec=53/34 lis/c=79/79 les/c/f=80/80/0 sis=117) [1]/[0] r=0 lpr=117 pi=[79,117)/1 crt=40'1059 mlcod 0'0 remapped mbc={}] enter Started/Primary/Peering
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 117 pg[10.19( v 40'1059 (0'0,40'1059] local-lis/les=79/80 n=5 ec=53/34 lis/c=79/79 les/c/f=80/80/0 sis=117) [1]/[0] r=0 lpr=117 pi=[79,117)/1 crt=40'1059 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetInfo
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 117 handle_osd_map epochs [117,117], i have 117, src has [1,117]
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 117 pg[10.19( v 40'1059 (0'0,40'1059] local-lis/les=79/80 n=5 ec=53/34 lis/c=79/79 les/c/f=80/80/0 sis=117) [1]/[0] r=0 lpr=117 pi=[79,117)/1 crt=40'1059 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetInfo 0.001993 2 0.000042
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 117 pg[10.19( v 40'1059 (0'0,40'1059] local-lis/les=79/80 n=5 ec=53/34 lis/c=79/79 les/c/f=80/80/0 sis=117) [1]/[0] r=0 lpr=117 pi=[79,117)/1 crt=40'1059 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetLog
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 117 pg[10.19( v 40'1059 (0'0,40'1059] local-lis/les=79/80 n=5 ec=53/34 lis/c=79/79 les/c/f=80/80/0 sis=117) [1]/[0] async=[1] r=0 lpr=117 pi=[79,117)/1 crt=40'1059 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetLog 0.000025 0 0.000000
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 117 pg[10.19( v 40'1059 (0'0,40'1059] local-lis/les=79/80 n=5 ec=53/34 lis/c=79/79 les/c/f=80/80/0 sis=117) [1]/[0] async=[1] r=0 lpr=117 pi=[79,117)/1 crt=40'1059 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetMissing
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 117 pg[10.19( v 40'1059 (0'0,40'1059] local-lis/les=79/80 n=5 ec=53/34 lis/c=79/79 les/c/f=80/80/0 sis=117) [1]/[0] async=[1] r=0 lpr=117 pi=[79,117)/1 crt=40'1059 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000004 0 0.000000
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 117 pg[10.19( v 40'1059 (0'0,40'1059] local-lis/les=79/80 n=5 ec=53/34 lis/c=79/79 les/c/f=80/80/0 sis=117) [1]/[0] async=[1] r=0 lpr=117 pi=[79,117)/1 crt=40'1059 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Oct  9 10:05:06 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 8.17 scrub starts
Oct  9 10:05:06 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 8.17 scrub ok
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81264640 unmapped: 4677632 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 117 handle_osd_map epochs [118,118], i have 117, src has [1,118]
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 117 handle_osd_map epochs [117,118], i have 118, src has [1,118]
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 118 pg[10.19( v 40'1059 (0'0,40'1059] local-lis/les=79/80 n=5 ec=53/34 lis/c=79/79 les/c/f=80/80/0 sis=117) [1]/[0] async=[1] r=0 lpr=117 pi=[79,117)/1 crt=40'1059 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.003893 3 0.000109
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 118 pg[10.19( v 40'1059 (0'0,40'1059] local-lis/les=79/80 n=5 ec=53/34 lis/c=79/79 les/c/f=80/80/0 sis=117) [1]/[0] async=[1] r=0 lpr=117 pi=[79,117)/1 crt=40'1059 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering 1.006004 0 0.000000
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 118 pg[10.19( v 40'1059 (0'0,40'1059] local-lis/les=79/80 n=5 ec=53/34 lis/c=79/79 les/c/f=80/80/0 sis=117) [1]/[0] async=[1] r=0 lpr=117 pi=[79,117)/1 crt=40'1059 mlcod 0'0 remapped mbc={}] enter Started/Primary/Active
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 118 pg[10.19( v 40'1059 (0'0,40'1059] local-lis/les=117/118 n=5 ec=53/34 lis/c=79/79 les/c/f=80/80/0 sis=117) [1]/[0] async=[1] r=0 lpr=117 pi=[79,117)/1 crt=40'1059 mlcod 0'0 activating+remapped mbc={255={(0+1)=7}}] enter Started/Primary/Active/Activating
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 118 pg[10.19( v 40'1059 (0'0,40'1059] local-lis/les=117/118 n=5 ec=53/34 lis/c=79/79 les/c/f=80/80/0 sis=117) [1]/[0] async=[1] r=0 lpr=117 pi=[79,117)/1 crt=40'1059 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 118 pg[10.19( v 40'1059 (0'0,40'1059] local-lis/les=117/118 n=5 ec=53/34 lis/c=117/79 les/c/f=118/80/0 sis=117) [1]/[0] async=[1] r=0 lpr=117 pi=[79,117)/1 crt=40'1059 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] exit Started/Primary/Active/Activating 0.001670 5 0.000246
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 118 pg[10.19( v 40'1059 (0'0,40'1059] local-lis/les=117/118 n=5 ec=53/34 lis/c=117/79 les/c/f=118/80/0 sis=117) [1]/[0] async=[1] r=0 lpr=117 pi=[79,117)/1 crt=40'1059 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 118 pg[10.19( v 40'1059 (0'0,40'1059] local-lis/les=117/118 n=5 ec=53/34 lis/c=117/79 les/c/f=118/80/0 sis=117) [1]/[0] async=[1] r=0 lpr=117 pi=[79,117)/1 crt=40'1059 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=7}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 0.000093 1 0.000076
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 118 pg[10.19( v 40'1059 (0'0,40'1059] local-lis/les=117/118 n=5 ec=53/34 lis/c=117/79 les/c/f=118/80/0 sis=117) [1]/[0] async=[1] r=0 lpr=117 pi=[79,117)/1 crt=40'1059 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=7}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 118 pg[10.19( v 40'1059 (0'0,40'1059] local-lis/les=117/118 n=5 ec=53/34 lis/c=117/79 les/c/f=118/80/0 sis=117) [1]/[0] async=[1] r=0 lpr=117 pi=[79,117)/1 crt=40'1059 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=7}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.000426 1 0.000032
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 118 pg[10.19( v 40'1059 (0'0,40'1059] local-lis/les=117/118 n=5 ec=53/34 lis/c=117/79 les/c/f=118/80/0 sis=117) [1]/[0] async=[1] r=0 lpr=117 pi=[79,117)/1 crt=40'1059 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=7}}] enter Started/Primary/Active/Recovering
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 118 pg[10.19( v 40'1059 (0'0,40'1059] local-lis/les=117/118 n=5 ec=53/34 lis/c=117/79 les/c/f=118/80/0 sis=117) [1]/[0] async=[1] r=0 lpr=117 pi=[79,117)/1 crt=40'1059 mlcod 40'1059 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovering 0.049484 2 0.000092
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 118 pg[10.19( v 40'1059 (0'0,40'1059] local-lis/les=117/118 n=5 ec=53/34 lis/c=117/79 les/c/f=118/80/0 sis=117) [1]/[0] async=[1] r=0 lpr=117 pi=[79,117)/1 crt=40'1059 mlcod 40'1059 active+remapped mbc={255={}}] enter Started/Primary/Active/Recovered
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81281024 unmapped: 4661248 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 908847 data_alloc: 218103808 data_used: 286720
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 118 handle_osd_map epochs [119,119], i have 118, src has [1,119]
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 119 pg[10.19( v 40'1059 (0'0,40'1059] local-lis/les=117/118 n=5 ec=53/34 lis/c=117/79 les/c/f=118/80/0 sis=117) [1]/[0] async=[1] r=0 lpr=117 pi=[79,117)/1 crt=40'1059 mlcod 40'1059 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovered 0.952902 1 0.000067
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 119 pg[10.19( v 40'1059 (0'0,40'1059] local-lis/les=117/118 n=5 ec=53/34 lis/c=117/79 les/c/f=118/80/0 sis=117) [1]/[0] async=[1] r=0 lpr=117 pi=[79,117)/1 crt=40'1059 mlcod 40'1059 active+remapped mbc={255={}}] exit Started/Primary/Active 1.004903 0 0.000000
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 119 pg[10.19( v 40'1059 (0'0,40'1059] local-lis/les=117/118 n=5 ec=53/34 lis/c=117/79 les/c/f=118/80/0 sis=117) [1]/[0] async=[1] r=0 lpr=117 pi=[79,117)/1 crt=40'1059 mlcod 40'1059 active+remapped mbc={255={}}] exit Started/Primary 2.010952 0 0.000000
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 119 pg[10.19( v 40'1059 (0'0,40'1059] local-lis/les=117/118 n=5 ec=53/34 lis/c=117/79 les/c/f=118/80/0 sis=117) [1]/[0] async=[1] r=0 lpr=117 pi=[79,117)/1 crt=40'1059 mlcod 40'1059 active+remapped mbc={255={}}] exit Started 2.010983 0 0.000000
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 119 pg[10.19( v 40'1059 (0'0,40'1059] local-lis/les=117/118 n=5 ec=53/34 lis/c=117/79 les/c/f=118/80/0 sis=117) [1]/[0] async=[1] r=0 lpr=117 pi=[79,117)/1 crt=40'1059 mlcod 40'1059 active+remapped mbc={255={}}] enter Reset
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 119 pg[10.19( v 40'1059 (0'0,40'1059] local-lis/les=117/118 n=5 ec=53/34 lis/c=117/79 les/c/f=118/80/0 sis=119 pruub=14.996621132s) [1] async=[1] r=-1 lpr=119 pi=[79,119)/1 crt=40'1059 mlcod 40'1059 active pruub 291.480834961s@ mbc={255={}}] PeeringState::start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 119 pg[10.19( v 40'1059 (0'0,40'1059] local-lis/les=117/118 n=5 ec=53/34 lis/c=117/79 les/c/f=118/80/0 sis=119 pruub=14.996500015s) [1] r=-1 lpr=119 pi=[79,119)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 291.480834961s@ mbc={}] exit Reset 0.000164 1 0.000388
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 119 pg[10.19( v 40'1059 (0'0,40'1059] local-lis/les=117/118 n=5 ec=53/34 lis/c=117/79 les/c/f=118/80/0 sis=119 pruub=14.996500015s) [1] r=-1 lpr=119 pi=[79,119)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 291.480834961s@ mbc={}] enter Started
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 119 pg[10.19( v 40'1059 (0'0,40'1059] local-lis/les=117/118 n=5 ec=53/34 lis/c=117/79 les/c/f=118/80/0 sis=119 pruub=14.996500015s) [1] r=-1 lpr=119 pi=[79,119)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 291.480834961s@ mbc={}] enter Start
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 119 pg[10.19( v 40'1059 (0'0,40'1059] local-lis/les=117/118 n=5 ec=53/34 lis/c=117/79 les/c/f=118/80/0 sis=119 pruub=14.996500015s) [1] r=-1 lpr=119 pi=[79,119)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 291.480834961s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 119 pg[10.19( v 40'1059 (0'0,40'1059] local-lis/les=117/118 n=5 ec=53/34 lis/c=117/79 les/c/f=118/80/0 sis=119 pruub=14.996500015s) [1] r=-1 lpr=119 pi=[79,119)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 291.480834961s@ mbc={}] exit Start 0.000043 0 0.000000
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 119 pg[10.19( v 40'1059 (0'0,40'1059] local-lis/les=117/118 n=5 ec=53/34 lis/c=117/79 les/c/f=118/80/0 sis=119 pruub=14.996500015s) [1] r=-1 lpr=119 pi=[79,119)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 291.480834961s@ mbc={}] enter Started/Stray
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 119 handle_osd_map epochs [119,119], i have 119, src has [1,119]
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81289216 unmapped: 4653056 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 119 heartbeat osd_stat(store_statfs(0x4fc621000/0x0/0x4ffc00000, data 0x14687f/0x1e8000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 119 handle_osd_map epochs [120,120], i have 119, src has [1,120]
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 120 pg[10.19( v 40'1059 (0'0,40'1059] local-lis/les=117/118 n=5 ec=53/34 lis/c=117/79 les/c/f=118/80/0 sis=119) [1] r=-1 lpr=119 pi=[79,119)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.010565 6 0.000260
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 120 pg[10.19( v 40'1059 (0'0,40'1059] local-lis/les=117/118 n=5 ec=53/34 lis/c=117/79 les/c/f=118/80/0 sis=119) [1] r=-1 lpr=119 pi=[79,119)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 120 pg[10.19( v 40'1059 (0'0,40'1059] local-lis/les=117/118 n=5 ec=53/34 lis/c=117/79 les/c/f=118/80/0 sis=119) [1] r=-1 lpr=119 pi=[79,119)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 120 pg[10.19( v 40'1059 (0'0,40'1059] local-lis/les=117/118 n=5 ec=53/34 lis/c=117/79 les/c/f=118/80/0 sis=119) [1] r=-1 lpr=119 pi=[79,119)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000504 2 0.000625
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 120 pg[10.19( v 40'1059 (0'0,40'1059] local-lis/les=117/118 n=5 ec=53/34 lis/c=117/79 les/c/f=118/80/0 sis=119) [1] r=-1 lpr=119 pi=[79,119)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 120 pg[10.19( v 40'1059 (0'0,40'1059] lb MIN local-lis/les=117/118 n=5 ec=53/34 lis/c=117/79 les/c/f=118/80/0 sis=119) [1] r=-1 lpr=119 DELETING pi=[79,119)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.053072 2 0.000214
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 120 pg[10.19( v 40'1059 (0'0,40'1059] lb MIN local-lis/les=117/118 n=5 ec=53/34 lis/c=117/79 les/c/f=118/80/0 sis=119) [1] r=-1 lpr=119 pi=[79,119)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.053887 0 0.000000
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 120 pg[10.19( v 40'1059 (0'0,40'1059] lb MIN local-lis/les=117/118 n=5 ec=53/34 lis/c=117/79 les/c/f=118/80/0 sis=119) [1] r=-1 lpr=119 pi=[79,119)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.064819 0 0.000000
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81338368 unmapped: 4603904 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81338368 unmapped: 4603904 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81346560 unmapped: 4595712 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 8.1b scrub starts
Oct  9 10:05:06 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 8.1b scrub ok
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81346560 unmapped: 4595712 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 902006 data_alloc: 218103808 data_used: 282624
Oct  9 10:05:06 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 8.10 scrub starts
Oct  9 10:05:06 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 8.10 scrub ok
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81354752 unmapped: 4587520 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 120 handle_osd_map epochs [121,121], i have 120, src has [1,121]
Oct  9 10:05:06 compute-1 ceph-osd[7514]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.481028557s of 10.509338379s, submitted: 33
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 121 heartbeat osd_stat(store_statfs(0x4fc622000/0x0/0x4ffc00000, data 0x1487b3/0x1ea000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct  9 10:05:06 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 9.f scrub starts
Oct  9 10:05:06 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 9.f scrub ok
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81362944 unmapped: 4579328 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 6.e scrub starts
Oct  9 10:05:06 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 6.e scrub ok
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81362944 unmapped: 4579328 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 121 heartbeat osd_stat(store_statfs(0x4fc61e000/0x0/0x4ffc00000, data 0x14a89f/0x1ed000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 121 handle_osd_map epochs [122,122], i have 121, src has [1,122]
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 122 pg[10.1b( v 40'1059 (0'0,40'1059] local-lis/les=84/85 n=5 ec=53/34 lis/c=84/84 les/c/f=85/85/0 sis=84) [0] r=0 lpr=84 crt=40'1059 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 56.544703 110 0.000890
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 122 pg[10.1b( v 40'1059 (0'0,40'1059] local-lis/les=84/85 n=5 ec=53/34 lis/c=84/84 les/c/f=85/85/0 sis=84) [0] r=0 lpr=84 crt=40'1059 mlcod 0'0 active mbc={}] exit Started/Primary/Active 56.546599 0 0.000000
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 122 pg[10.1b( v 40'1059 (0'0,40'1059] local-lis/les=84/85 n=5 ec=53/34 lis/c=84/84 les/c/f=85/85/0 sis=84) [0] r=0 lpr=84 crt=40'1059 mlcod 0'0 active mbc={}] exit Started/Primary 57.552659 0 0.000000
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 122 pg[10.1b( v 40'1059 (0'0,40'1059] local-lis/les=84/85 n=5 ec=53/34 lis/c=84/84 les/c/f=85/85/0 sis=84) [0] r=0 lpr=84 crt=40'1059 mlcod 0'0 active mbc={}] exit Started 57.552698 0 0.000000
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 122 pg[10.1b( v 40'1059 (0'0,40'1059] local-lis/les=84/85 n=5 ec=53/34 lis/c=84/84 les/c/f=85/85/0 sis=84) [0] r=0 lpr=84 crt=40'1059 mlcod 0'0 active mbc={}] enter Reset
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 122 pg[10.1b( v 40'1059 (0'0,40'1059] local-lis/les=84/85 n=5 ec=53/34 lis/c=84/84 les/c/f=85/85/0 sis=122 pruub=15.455636024s) [1] r=-1 lpr=122 pi=[84,122)/1 crt=40'1059 mlcod 0'0 active pruub 299.496215820s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 122 pg[10.1b( v 40'1059 (0'0,40'1059] local-lis/les=84/85 n=5 ec=53/34 lis/c=84/84 les/c/f=85/85/0 sis=122 pruub=15.455393791s) [1] r=-1 lpr=122 pi=[84,122)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 299.496215820s@ mbc={}] exit Reset 0.000286 1 0.000376
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 122 pg[10.1b( v 40'1059 (0'0,40'1059] local-lis/les=84/85 n=5 ec=53/34 lis/c=84/84 les/c/f=85/85/0 sis=122 pruub=15.455393791s) [1] r=-1 lpr=122 pi=[84,122)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 299.496215820s@ mbc={}] enter Started
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 122 pg[10.1b( v 40'1059 (0'0,40'1059] local-lis/les=84/85 n=5 ec=53/34 lis/c=84/84 les/c/f=85/85/0 sis=122 pruub=15.455393791s) [1] r=-1 lpr=122 pi=[84,122)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 299.496215820s@ mbc={}] enter Start
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 122 pg[10.1b( v 40'1059 (0'0,40'1059] local-lis/les=84/85 n=5 ec=53/34 lis/c=84/84 les/c/f=85/85/0 sis=122 pruub=15.455393791s) [1] r=-1 lpr=122 pi=[84,122)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 299.496215820s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 122 pg[10.1b( v 40'1059 (0'0,40'1059] local-lis/les=84/85 n=5 ec=53/34 lis/c=84/84 les/c/f=85/85/0 sis=122 pruub=15.455393791s) [1] r=-1 lpr=122 pi=[84,122)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 299.496215820s@ mbc={}] exit Start 0.000124 0 0.000000
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 122 pg[10.1b( v 40'1059 (0'0,40'1059] local-lis/les=84/85 n=5 ec=53/34 lis/c=84/84 les/c/f=85/85/0 sis=122 pruub=15.455393791s) [1] r=-1 lpr=122 pi=[84,122)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 299.496215820s@ mbc={}] enter Started/Stray
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 122 handle_osd_map epochs [122,122], i have 122, src has [1,122]
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 122 handle_osd_map epochs [123,123], i have 122, src has [1,123]
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 123 pg[10.1b( v 40'1059 (0'0,40'1059] local-lis/les=84/85 n=5 ec=53/34 lis/c=84/84 les/c/f=85/85/0 sis=122) [1] r=-1 lpr=122 pi=[84,122)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 0.437780 3 0.000259
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 123 pg[10.1b( v 40'1059 (0'0,40'1059] local-lis/les=84/85 n=5 ec=53/34 lis/c=84/84 les/c/f=85/85/0 sis=122) [1] r=-1 lpr=122 pi=[84,122)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 0.437976 0 0.000000
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 123 pg[10.1b( v 40'1059 (0'0,40'1059] local-lis/les=84/85 n=5 ec=53/34 lis/c=84/84 les/c/f=85/85/0 sis=122) [1] r=-1 lpr=122 pi=[84,122)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY mbc={}] enter Reset
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 123 pg[10.1b( v 40'1059 (0'0,40'1059] local-lis/les=84/85 n=5 ec=53/34 lis/c=84/84 les/c/f=85/85/0 sis=123) [1]/[0] r=0 lpr=123 pi=[84,123)/1 crt=40'1059 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 123 pg[10.1b( v 40'1059 (0'0,40'1059] local-lis/les=84/85 n=5 ec=53/34 lis/c=84/84 les/c/f=85/85/0 sis=123) [1]/[0] r=0 lpr=123 pi=[84,123)/1 crt=40'1059 mlcod 0'0 remapped mbc={}] exit Reset 0.000065 1 0.000114
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 123 pg[10.1b( v 40'1059 (0'0,40'1059] local-lis/les=84/85 n=5 ec=53/34 lis/c=84/84 les/c/f=85/85/0 sis=123) [1]/[0] r=0 lpr=123 pi=[84,123)/1 crt=40'1059 mlcod 0'0 remapped mbc={}] enter Started
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 123 pg[10.1b( v 40'1059 (0'0,40'1059] local-lis/les=84/85 n=5 ec=53/34 lis/c=84/84 les/c/f=85/85/0 sis=123) [1]/[0] r=0 lpr=123 pi=[84,123)/1 crt=40'1059 mlcod 0'0 remapped mbc={}] enter Start
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 123 pg[10.1b( v 40'1059 (0'0,40'1059] local-lis/les=84/85 n=5 ec=53/34 lis/c=84/84 les/c/f=85/85/0 sis=123) [1]/[0] r=0 lpr=123 pi=[84,123)/1 crt=40'1059 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 123 pg[10.1b( v 40'1059 (0'0,40'1059] local-lis/les=84/85 n=5 ec=53/34 lis/c=84/84 les/c/f=85/85/0 sis=123) [1]/[0] r=0 lpr=123 pi=[84,123)/1 crt=40'1059 mlcod 0'0 remapped mbc={}] exit Start 0.000005 0 0.000000
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 123 pg[10.1b( v 40'1059 (0'0,40'1059] local-lis/les=84/85 n=5 ec=53/34 lis/c=84/84 les/c/f=85/85/0 sis=123) [1]/[0] r=0 lpr=123 pi=[84,123)/1 crt=40'1059 mlcod 0'0 remapped mbc={}] enter Started/Primary
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 123 pg[10.1b( v 40'1059 (0'0,40'1059] local-lis/les=84/85 n=5 ec=53/34 lis/c=84/84 les/c/f=85/85/0 sis=123) [1]/[0] r=0 lpr=123 pi=[84,123)/1 crt=40'1059 mlcod 0'0 remapped mbc={}] enter Started/Primary/Peering
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 123 pg[10.1b( v 40'1059 (0'0,40'1059] local-lis/les=84/85 n=5 ec=53/34 lis/c=84/84 les/c/f=85/85/0 sis=123) [1]/[0] r=0 lpr=123 pi=[84,123)/1 crt=40'1059 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetInfo
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 123 pg[10.1b( v 40'1059 (0'0,40'1059] local-lis/les=84/85 n=5 ec=53/34 lis/c=84/84 les/c/f=85/85/0 sis=123) [1]/[0] r=0 lpr=123 pi=[84,123)/1 crt=40'1059 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000032 1 0.000049
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 123 pg[10.1b( v 40'1059 (0'0,40'1059] local-lis/les=84/85 n=5 ec=53/34 lis/c=84/84 les/c/f=85/85/0 sis=123) [1]/[0] r=0 lpr=123 pi=[84,123)/1 crt=40'1059 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetLog
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 123 pg[10.1b( v 40'1059 (0'0,40'1059] local-lis/les=84/85 n=5 ec=53/34 lis/c=84/84 les/c/f=85/85/0 sis=123) [1]/[0] async=[1] r=0 lpr=123 pi=[84,123)/1 crt=40'1059 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetLog 0.000024 0 0.000000
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 123 pg[10.1b( v 40'1059 (0'0,40'1059] local-lis/les=84/85 n=5 ec=53/34 lis/c=84/84 les/c/f=85/85/0 sis=123) [1]/[0] async=[1] r=0 lpr=123 pi=[84,123)/1 crt=40'1059 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetMissing
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 123 pg[10.1b( v 40'1059 (0'0,40'1059] local-lis/les=84/85 n=5 ec=53/34 lis/c=84/84 les/c/f=85/85/0 sis=123) [1]/[0] async=[1] r=0 lpr=123 pi=[84,123)/1 crt=40'1059 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000004 0 0.000000
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 123 pg[10.1b( v 40'1059 (0'0,40'1059] local-lis/les=84/85 n=5 ec=53/34 lis/c=84/84 les/c/f=85/85/0 sis=123) [1]/[0] async=[1] r=0 lpr=123 pi=[84,123)/1 crt=40'1059 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Oct  9 10:05:06 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 6.5 scrub starts
Oct  9 10:05:06 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 6.5 scrub ok
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81379328 unmapped: 4562944 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 123 handle_osd_map epochs [123,124], i have 123, src has [1,124]
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 124 handle_osd_map epochs [123,124], i have 124, src has [1,124]
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 124 pg[10.1b( v 40'1059 (0'0,40'1059] local-lis/les=84/85 n=5 ec=53/34 lis/c=84/84 les/c/f=85/85/0 sis=123) [1]/[0] async=[1] r=0 lpr=123 pi=[84,123)/1 crt=40'1059 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.000506 4 0.000053
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 124 pg[10.1b( v 40'1059 (0'0,40'1059] local-lis/les=84/85 n=5 ec=53/34 lis/c=84/84 les/c/f=85/85/0 sis=123) [1]/[0] async=[1] r=0 lpr=123 pi=[84,123)/1 crt=40'1059 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering 1.000616 0 0.000000
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 124 pg[10.1b( v 40'1059 (0'0,40'1059] local-lis/les=84/85 n=5 ec=53/34 lis/c=84/84 les/c/f=85/85/0 sis=123) [1]/[0] async=[1] r=0 lpr=123 pi=[84,123)/1 crt=40'1059 mlcod 0'0 remapped mbc={}] enter Started/Primary/Active
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 124 pg[10.1b( v 40'1059 (0'0,40'1059] local-lis/les=123/124 n=5 ec=53/34 lis/c=84/84 les/c/f=85/85/0 sis=123) [1]/[0] async=[1] r=0 lpr=123 pi=[84,123)/1 crt=40'1059 mlcod 0'0 activating+remapped mbc={255={(0+1)=2}}] enter Started/Primary/Active/Activating
Oct  9 10:05:06 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 6.2 scrub starts
Oct  9 10:05:06 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 6.2 scrub ok
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81387520 unmapped: 4554752 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 920632 data_alloc: 218103808 data_used: 282624
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 124 handle_osd_map epochs [124,124], i have 124, src has [1,124]
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 124 pg[10.1b( v 40'1059 (0'0,40'1059] local-lis/les=123/124 n=5 ec=53/34 lis/c=84/84 les/c/f=85/85/0 sis=123) [1]/[0] async=[1] r=0 lpr=123 pi=[84,123)/1 crt=40'1059 mlcod 0'0 active+remapped mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 124 pg[10.1b( v 40'1059 (0'0,40'1059] local-lis/les=123/124 n=5 ec=53/34 lis/c=123/84 les/c/f=124/85/0 sis=123) [1]/[0] async=[1] r=0 lpr=123 pi=[84,123)/1 crt=40'1059 mlcod 0'0 active+remapped mbc={255={(0+1)=2}}] exit Started/Primary/Active/Activating 0.929512 5 0.000230
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 124 pg[10.1b( v 40'1059 (0'0,40'1059] local-lis/les=123/124 n=5 ec=53/34 lis/c=123/84 les/c/f=124/85/0 sis=123) [1]/[0] async=[1] r=0 lpr=123 pi=[84,123)/1 crt=40'1059 mlcod 0'0 active+remapped mbc={255={(0+1)=2}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 124 pg[10.1b( v 40'1059 (0'0,40'1059] local-lis/les=123/124 n=5 ec=53/34 lis/c=123/84 les/c/f=124/85/0 sis=123) [1]/[0] async=[1] r=0 lpr=123 pi=[84,123)/1 crt=40'1059 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=2}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 0.000052 1 0.000056
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 124 pg[10.1b( v 40'1059 (0'0,40'1059] local-lis/les=123/124 n=5 ec=53/34 lis/c=123/84 les/c/f=124/85/0 sis=123) [1]/[0] async=[1] r=0 lpr=123 pi=[84,123)/1 crt=40'1059 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=2}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 124 pg[10.1b( v 40'1059 (0'0,40'1059] local-lis/les=123/124 n=5 ec=53/34 lis/c=123/84 les/c/f=124/85/0 sis=123) [1]/[0] async=[1] r=0 lpr=123 pi=[84,123)/1 crt=40'1059 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=2}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.000620 1 0.000022
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 124 pg[10.1b( v 40'1059 (0'0,40'1059] local-lis/les=123/124 n=5 ec=53/34 lis/c=123/84 les/c/f=124/85/0 sis=123) [1]/[0] async=[1] r=0 lpr=123 pi=[84,123)/1 crt=40'1059 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=2}}] enter Started/Primary/Active/Recovering
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 124 pg[10.1b( v 40'1059 (0'0,40'1059] local-lis/les=123/124 n=5 ec=53/34 lis/c=123/84 les/c/f=124/85/0 sis=123) [1]/[0] async=[1] r=0 lpr=123 pi=[84,123)/1 crt=40'1059 mlcod 40'1059 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovering 0.014229 2 0.000090
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 124 pg[10.1b( v 40'1059 (0'0,40'1059] local-lis/les=123/124 n=5 ec=53/34 lis/c=123/84 les/c/f=124/85/0 sis=123) [1]/[0] async=[1] r=0 lpr=123 pi=[84,123)/1 crt=40'1059 mlcod 40'1059 active+remapped mbc={255={}}] enter Started/Primary/Active/Recovered
Oct  9 10:05:06 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 6.a scrub starts
Oct  9 10:05:06 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 6.a scrub ok
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 124 handle_osd_map epochs [125,125], i have 124, src has [1,125]
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 125 pg[10.1b( v 40'1059 (0'0,40'1059] local-lis/les=123/124 n=5 ec=53/34 lis/c=123/84 les/c/f=124/85/0 sis=123) [1]/[0] async=[1] r=0 lpr=123 pi=[84,123)/1 crt=40'1059 mlcod 40'1059 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovered 0.254469 1 0.000142
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 125 pg[10.1b( v 40'1059 (0'0,40'1059] local-lis/les=123/124 n=5 ec=53/34 lis/c=123/84 les/c/f=124/85/0 sis=123) [1]/[0] async=[1] r=0 lpr=123 pi=[84,123)/1 crt=40'1059 mlcod 40'1059 active+remapped mbc={255={}}] exit Started/Primary/Active 1.199120 0 0.000000
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 125 pg[10.1b( v 40'1059 (0'0,40'1059] local-lis/les=123/124 n=5 ec=53/34 lis/c=123/84 les/c/f=124/85/0 sis=123) [1]/[0] async=[1] r=0 lpr=123 pi=[84,123)/1 crt=40'1059 mlcod 40'1059 active+remapped mbc={255={}}] exit Started/Primary 2.199756 0 0.000000
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 125 pg[10.1b( v 40'1059 (0'0,40'1059] local-lis/les=123/124 n=5 ec=53/34 lis/c=123/84 les/c/f=124/85/0 sis=123) [1]/[0] async=[1] r=0 lpr=123 pi=[84,123)/1 crt=40'1059 mlcod 40'1059 active+remapped mbc={255={}}] exit Started 2.199779 0 0.000000
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 125 pg[10.1b( v 40'1059 (0'0,40'1059] local-lis/les=123/124 n=5 ec=53/34 lis/c=123/84 les/c/f=124/85/0 sis=123) [1]/[0] async=[1] r=0 lpr=123 pi=[84,123)/1 crt=40'1059 mlcod 40'1059 active+remapped mbc={255={}}] enter Reset
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 125 pg[10.1b( v 40'1059 (0'0,40'1059] local-lis/les=123/124 n=5 ec=53/34 lis/c=123/84 les/c/f=124/85/0 sis=125 pruub=15.730270386s) [1] async=[1] r=-1 lpr=125 pi=[84,125)/1 crt=40'1059 mlcod 40'1059 active pruub 302.409027100s@ mbc={255={}}] PeeringState::start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 125 pg[10.1b( v 40'1059 (0'0,40'1059] local-lis/les=123/124 n=5 ec=53/34 lis/c=123/84 les/c/f=124/85/0 sis=125 pruub=15.730219841s) [1] r=-1 lpr=125 pi=[84,125)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 302.409027100s@ mbc={}] exit Reset 0.000089 1 0.000138
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 125 pg[10.1b( v 40'1059 (0'0,40'1059] local-lis/les=123/124 n=5 ec=53/34 lis/c=123/84 les/c/f=124/85/0 sis=125 pruub=15.730219841s) [1] r=-1 lpr=125 pi=[84,125)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 302.409027100s@ mbc={}] enter Started
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 125 pg[10.1b( v 40'1059 (0'0,40'1059] local-lis/les=123/124 n=5 ec=53/34 lis/c=123/84 les/c/f=124/85/0 sis=125 pruub=15.730219841s) [1] r=-1 lpr=125 pi=[84,125)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 302.409027100s@ mbc={}] enter Start
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 125 pg[10.1b( v 40'1059 (0'0,40'1059] local-lis/les=123/124 n=5 ec=53/34 lis/c=123/84 les/c/f=124/85/0 sis=125 pruub=15.730219841s) [1] r=-1 lpr=125 pi=[84,125)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 302.409027100s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 125 pg[10.1b( v 40'1059 (0'0,40'1059] local-lis/les=123/124 n=5 ec=53/34 lis/c=123/84 les/c/f=124/85/0 sis=125 pruub=15.730219841s) [1] r=-1 lpr=125 pi=[84,125)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 302.409027100s@ mbc={}] exit Start 0.000006 0 0.000000
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 125 pg[10.1b( v 40'1059 (0'0,40'1059] local-lis/les=123/124 n=5 ec=53/34 lis/c=123/84 les/c/f=124/85/0 sis=125 pruub=15.730219841s) [1] r=-1 lpr=125 pi=[84,125)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 302.409027100s@ mbc={}] enter Started/Stray
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81420288 unmapped: 4521984 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 125 handle_osd_map epochs [125,125], i have 125, src has [1,125]
Oct  9 10:05:06 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 6.3 scrub starts
Oct  9 10:05:06 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 6.3 scrub ok
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81420288 unmapped: 4521984 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 10.1a scrub starts
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 125 handle_osd_map epochs [126,126], i have 125, src has [1,126]
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 126 pg[10.1b( v 40'1059 (0'0,40'1059] local-lis/les=123/124 n=5 ec=53/34 lis/c=123/84 les/c/f=124/85/0 sis=125) [1] r=-1 lpr=125 pi=[84,125)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.861375 6 0.000071
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 126 pg[10.1b( v 40'1059 (0'0,40'1059] local-lis/les=123/124 n=5 ec=53/34 lis/c=123/84 les/c/f=124/85/0 sis=125) [1] r=-1 lpr=125 pi=[84,125)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 126 pg[10.1b( v 40'1059 (0'0,40'1059] local-lis/les=123/124 n=5 ec=53/34 lis/c=123/84 les/c/f=124/85/0 sis=125) [1] r=-1 lpr=125 pi=[84,125)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 126 pg[10.1b( v 40'1059 (0'0,40'1059] local-lis/les=123/124 n=5 ec=53/34 lis/c=123/84 les/c/f=124/85/0 sis=125) [1] r=-1 lpr=125 pi=[84,125)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000940 2 0.000043
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 126 pg[10.1b( v 40'1059 (0'0,40'1059] local-lis/les=123/124 n=5 ec=53/34 lis/c=123/84 les/c/f=124/85/0 sis=125) [1] r=-1 lpr=125 pi=[84,125)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 126 pg[10.1b( v 40'1059 (0'0,40'1059] lb MIN local-lis/les=123/124 n=5 ec=53/34 lis/c=123/84 les/c/f=124/85/0 sis=125) [1] r=-1 lpr=125 DELETING pi=[84,125)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.039841 2 0.000114
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 126 pg[10.1b( v 40'1059 (0'0,40'1059] lb MIN local-lis/les=123/124 n=5 ec=53/34 lis/c=123/84 les/c/f=124/85/0 sis=125) [1] r=-1 lpr=125 pi=[84,125)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.040842 0 0.000000
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 126 pg[10.1b( v 40'1059 (0'0,40'1059] lb MIN local-lis/les=123/124 n=5 ec=53/34 lis/c=123/84 les/c/f=124/85/0 sis=125) [1] r=-1 lpr=125 pi=[84,125)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.902274 0 0.000000
Oct  9 10:05:06 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 10.1a scrub ok
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81428480 unmapped: 4513792 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 10.1d scrub starts
Oct  9 10:05:06 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 10.1d scrub ok
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81428480 unmapped: 4513792 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 126 heartbeat osd_stat(store_statfs(0x4fc60f000/0x0/0x4ffc00000, data 0x154aa9/0x1fb000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 126 handle_osd_map epochs [127,127], i have 126, src has [1,127]
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 127 pg[10.1e( v 40'1059 (0'0,40'1059] local-lis/les=70/71 n=5 ec=53/34 lis/c=70/70 les/c/f=71/71/0 sis=70) [0] r=0 lpr=70 crt=40'1059 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 77.464063 170 0.000532
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 127 pg[10.1e( v 40'1059 (0'0,40'1059] local-lis/les=70/71 n=5 ec=53/34 lis/c=70/70 les/c/f=71/71/0 sis=70) [0] r=0 lpr=70 crt=40'1059 mlcod 0'0 active mbc={}] exit Started/Primary/Active 77.465815 0 0.000000
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 127 pg[10.1e( v 40'1059 (0'0,40'1059] local-lis/les=70/71 n=5 ec=53/34 lis/c=70/70 les/c/f=71/71/0 sis=70) [0] r=0 lpr=70 crt=40'1059 mlcod 0'0 active mbc={}] exit Started/Primary 78.470504 0 0.000000
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 127 pg[10.1e( v 40'1059 (0'0,40'1059] local-lis/les=70/71 n=5 ec=53/34 lis/c=70/70 les/c/f=71/71/0 sis=70) [0] r=0 lpr=70 crt=40'1059 mlcod 0'0 active mbc={}] exit Started 78.470528 0 0.000000
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 127 pg[10.1e( v 40'1059 (0'0,40'1059] local-lis/les=70/71 n=5 ec=53/34 lis/c=70/70 les/c/f=71/71/0 sis=70) [0] r=0 lpr=70 crt=40'1059 mlcod 0'0 active mbc={}] enter Reset
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 127 pg[10.1e( v 40'1059 (0'0,40'1059] local-lis/les=70/71 n=5 ec=53/34 lis/c=70/70 les/c/f=71/71/0 sis=127 pruub=10.538806915s) [2] r=-1 lpr=127 pi=[70,127)/1 crt=40'1059 mlcod 0'0 active pruub 300.480133057s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 127 pg[10.1e( v 40'1059 (0'0,40'1059] local-lis/les=70/71 n=5 ec=53/34 lis/c=70/70 les/c/f=71/71/0 sis=127 pruub=10.538764954s) [2] r=-1 lpr=127 pi=[70,127)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 300.480133057s@ mbc={}] exit Reset 0.000078 1 0.000123
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 127 pg[10.1e( v 40'1059 (0'0,40'1059] local-lis/les=70/71 n=5 ec=53/34 lis/c=70/70 les/c/f=71/71/0 sis=127 pruub=10.538764954s) [2] r=-1 lpr=127 pi=[70,127)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 300.480133057s@ mbc={}] enter Started
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 127 pg[10.1e( v 40'1059 (0'0,40'1059] local-lis/les=70/71 n=5 ec=53/34 lis/c=70/70 les/c/f=71/71/0 sis=127 pruub=10.538764954s) [2] r=-1 lpr=127 pi=[70,127)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 300.480133057s@ mbc={}] enter Start
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 127 pg[10.1e( v 40'1059 (0'0,40'1059] local-lis/les=70/71 n=5 ec=53/34 lis/c=70/70 les/c/f=71/71/0 sis=127 pruub=10.538764954s) [2] r=-1 lpr=127 pi=[70,127)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 300.480133057s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 127 pg[10.1e( v 40'1059 (0'0,40'1059] local-lis/les=70/71 n=5 ec=53/34 lis/c=70/70 les/c/f=71/71/0 sis=127 pruub=10.538764954s) [2] r=-1 lpr=127 pi=[70,127)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 300.480133057s@ mbc={}] exit Start 0.000006 0 0.000000
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 127 pg[10.1e( v 40'1059 (0'0,40'1059] local-lis/les=70/71 n=5 ec=53/34 lis/c=70/70 les/c/f=71/71/0 sis=127 pruub=10.538764954s) [2] r=-1 lpr=127 pi=[70,127)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 300.480133057s@ mbc={}] enter Started/Stray
Oct  9 10:05:06 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 10.9 scrub starts
Oct  9 10:05:06 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 10.9 scrub ok
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81436672 unmapped: 4505600 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 929125 data_alloc: 218103808 data_used: 282624
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 127 handle_osd_map epochs [128,128], i have 127, src has [1,128]
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 128 pg[10.1e( v 40'1059 (0'0,40'1059] local-lis/les=70/71 n=5 ec=53/34 lis/c=70/70 les/c/f=71/71/0 sis=127) [2] r=-1 lpr=127 pi=[70,127)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 0.770187 3 0.000226
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 128 pg[10.1e( v 40'1059 (0'0,40'1059] local-lis/les=70/71 n=5 ec=53/34 lis/c=70/70 les/c/f=71/71/0 sis=127) [2] r=-1 lpr=127 pi=[70,127)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 0.770216 0 0.000000
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 128 pg[10.1e( v 40'1059 (0'0,40'1059] local-lis/les=70/71 n=5 ec=53/34 lis/c=70/70 les/c/f=71/71/0 sis=127) [2] r=-1 lpr=127 pi=[70,127)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY mbc={}] enter Reset
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 128 pg[10.1e( v 40'1059 (0'0,40'1059] local-lis/les=70/71 n=5 ec=53/34 lis/c=70/70 les/c/f=71/71/0 sis=128) [2]/[0] r=0 lpr=128 pi=[70,128)/1 crt=40'1059 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 128 pg[10.1e( v 40'1059 (0'0,40'1059] local-lis/les=70/71 n=5 ec=53/34 lis/c=70/70 les/c/f=71/71/0 sis=128) [2]/[0] r=0 lpr=128 pi=[70,128)/1 crt=40'1059 mlcod 0'0 remapped mbc={}] exit Reset 0.000058 1 0.000081
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 128 pg[10.1e( v 40'1059 (0'0,40'1059] local-lis/les=70/71 n=5 ec=53/34 lis/c=70/70 les/c/f=71/71/0 sis=128) [2]/[0] r=0 lpr=128 pi=[70,128)/1 crt=40'1059 mlcod 0'0 remapped mbc={}] enter Started
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 128 pg[10.1e( v 40'1059 (0'0,40'1059] local-lis/les=70/71 n=5 ec=53/34 lis/c=70/70 les/c/f=71/71/0 sis=128) [2]/[0] r=0 lpr=128 pi=[70,128)/1 crt=40'1059 mlcod 0'0 remapped mbc={}] enter Start
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 128 pg[10.1e( v 40'1059 (0'0,40'1059] local-lis/les=70/71 n=5 ec=53/34 lis/c=70/70 les/c/f=71/71/0 sis=128) [2]/[0] r=0 lpr=128 pi=[70,128)/1 crt=40'1059 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 128 pg[10.1e( v 40'1059 (0'0,40'1059] local-lis/les=70/71 n=5 ec=53/34 lis/c=70/70 les/c/f=71/71/0 sis=128) [2]/[0] r=0 lpr=128 pi=[70,128)/1 crt=40'1059 mlcod 0'0 remapped mbc={}] exit Start 0.000005 0 0.000000
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 128 pg[10.1e( v 40'1059 (0'0,40'1059] local-lis/les=70/71 n=5 ec=53/34 lis/c=70/70 les/c/f=71/71/0 sis=128) [2]/[0] r=0 lpr=128 pi=[70,128)/1 crt=40'1059 mlcod 0'0 remapped mbc={}] enter Started/Primary
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 128 pg[10.1e( v 40'1059 (0'0,40'1059] local-lis/les=70/71 n=5 ec=53/34 lis/c=70/70 les/c/f=71/71/0 sis=128) [2]/[0] r=0 lpr=128 pi=[70,128)/1 crt=40'1059 mlcod 0'0 remapped mbc={}] enter Started/Primary/Peering
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 128 pg[10.1e( v 40'1059 (0'0,40'1059] local-lis/les=70/71 n=5 ec=53/34 lis/c=70/70 les/c/f=71/71/0 sis=128) [2]/[0] r=0 lpr=128 pi=[70,128)/1 crt=40'1059 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetInfo
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 128 pg[10.1e( v 40'1059 (0'0,40'1059] local-lis/les=70/71 n=5 ec=53/34 lis/c=70/70 les/c/f=71/71/0 sis=128) [2]/[0] r=0 lpr=128 pi=[70,128)/1 crt=40'1059 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000030 1 0.000035
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 128 pg[10.1e( v 40'1059 (0'0,40'1059] local-lis/les=70/71 n=5 ec=53/34 lis/c=70/70 les/c/f=71/71/0 sis=128) [2]/[0] r=0 lpr=128 pi=[70,128)/1 crt=40'1059 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetLog
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 128 pg[10.1e( v 40'1059 (0'0,40'1059] local-lis/les=70/71 n=5 ec=53/34 lis/c=70/70 les/c/f=71/71/0 sis=128) [2]/[0] async=[2] r=0 lpr=128 pi=[70,128)/1 crt=40'1059 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetLog 0.000020 0 0.000000
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 128 pg[10.1e( v 40'1059 (0'0,40'1059] local-lis/les=70/71 n=5 ec=53/34 lis/c=70/70 les/c/f=71/71/0 sis=128) [2]/[0] async=[2] r=0 lpr=128 pi=[70,128)/1 crt=40'1059 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetMissing
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 128 pg[10.1e( v 40'1059 (0'0,40'1059] local-lis/les=70/71 n=5 ec=53/34 lis/c=70/70 les/c/f=71/71/0 sis=128) [2]/[0] async=[2] r=0 lpr=128 pi=[70,128)/1 crt=40'1059 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000004 0 0.000000
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 128 pg[10.1e( v 40'1059 (0'0,40'1059] local-lis/les=70/71 n=5 ec=53/34 lis/c=70/70 les/c/f=71/71/0 sis=128) [2]/[0] async=[2] r=0 lpr=128 pi=[70,128)/1 crt=40'1059 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Oct  9 10:05:06 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 10.c scrub starts
Oct  9 10:05:06 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 10.c scrub ok
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81444864 unmapped: 4497408 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 128 handle_osd_map epochs [128,129], i have 128, src has [1,129]
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 128 handle_osd_map epochs [128,129], i have 129, src has [1,129]
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 129 pg[10.1e( v 40'1059 (0'0,40'1059] local-lis/les=70/71 n=5 ec=53/34 lis/c=70/70 les/c/f=71/71/0 sis=128) [2]/[0] async=[2] r=0 lpr=128 pi=[70,128)/1 crt=40'1059 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.002967 4 0.000048
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 129 pg[10.1e( v 40'1059 (0'0,40'1059] local-lis/les=70/71 n=5 ec=53/34 lis/c=70/70 les/c/f=71/71/0 sis=128) [2]/[0] async=[2] r=0 lpr=128 pi=[70,128)/1 crt=40'1059 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering 1.003062 0 0.000000
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 129 pg[10.1e( v 40'1059 (0'0,40'1059] local-lis/les=70/71 n=5 ec=53/34 lis/c=70/70 les/c/f=71/71/0 sis=128) [2]/[0] async=[2] r=0 lpr=128 pi=[70,128)/1 crt=40'1059 mlcod 0'0 remapped mbc={}] enter Started/Primary/Active
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 129 pg[10.1e( v 40'1059 (0'0,40'1059] local-lis/les=128/129 n=5 ec=53/34 lis/c=70/70 les/c/f=71/71/0 sis=128) [2]/[0] async=[2] r=0 lpr=128 pi=[70,128)/1 crt=40'1059 mlcod 0'0 activating+remapped mbc={255={(0+1)=5}}] enter Started/Primary/Active/Activating
Oct  9 10:05:06 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 10.6 deep-scrub starts
Oct  9 10:05:06 compute-1 ceph-osd[7514]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.287339211s of 10.341490746s, submitted: 63
Oct  9 10:05:06 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 10.6 deep-scrub ok
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 129 handle_osd_map epochs [129,129], i have 129, src has [1,129]
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 129 pg[10.1f(unlocked)] enter Initial
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 129 pg[10.1f( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=97/97 les/c/f=98/98/0 sis=129) [0] r=0 lpr=0 pi=[97,129)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000038 0 0.000000
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 129 pg[10.1f( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=97/97 les/c/f=98/98/0 sis=129) [0] r=0 lpr=0 pi=[97,129)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 129 pg[10.1f( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=97/97 les/c/f=98/98/0 sis=129) [0] r=0 lpr=129 pi=[97,129)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000010 1 0.000024
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 129 pg[10.1f( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=97/97 les/c/f=98/98/0 sis=129) [0] r=0 lpr=129 pi=[97,129)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 129 pg[10.1f( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=97/97 les/c/f=98/98/0 sis=129) [0] r=0 lpr=129 pi=[97,129)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 129 pg[10.1f( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=97/97 les/c/f=98/98/0 sis=129) [0] r=0 lpr=129 pi=[97,129)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 129 pg[10.1f( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=97/97 les/c/f=98/98/0 sis=129) [0] r=0 lpr=129 pi=[97,129)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000005 0 0.000000
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 129 pg[10.1f( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=97/97 les/c/f=98/98/0 sis=129) [0] r=0 lpr=129 pi=[97,129)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 129 pg[10.1f( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=97/97 les/c/f=98/98/0 sis=129) [0] r=0 lpr=129 pi=[97,129)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 129 pg[10.1f( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=97/97 les/c/f=98/98/0 sis=129) [0] r=0 lpr=129 pi=[97,129)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 129 pg[10.1f( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=97/97 les/c/f=98/98/0 sis=129) [0] r=0 lpr=129 pi=[97,129)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000106 1 0.000033
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 129 pg[10.1f( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=97/97 les/c/f=98/98/0 sis=129) [0] r=0 lpr=129 pi=[97,129)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 129 pg[10.1f( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=97/97 les/c/f=98/98/0 sis=129) [0] r=0 lpr=129 pi=[97,129)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000026 0 0.000000
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 129 pg[10.1f( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=97/97 les/c/f=98/98/0 sis=129) [0] r=0 lpr=129 pi=[97,129)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.000143 0 0.000000
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 129 pg[10.1f( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=97/97 les/c/f=98/98/0 sis=129) [0] r=0 lpr=129 pi=[97,129)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/WaitActingChange
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 129 pg[10.1e( v 40'1059 (0'0,40'1059] local-lis/les=128/129 n=5 ec=53/34 lis/c=70/70 les/c/f=71/71/0 sis=128) [2]/[0] async=[2] r=0 lpr=128 pi=[70,128)/1 crt=40'1059 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 129 pg[10.1e( v 40'1059 (0'0,40'1059] local-lis/les=128/129 n=5 ec=53/34 lis/c=128/70 les/c/f=129/71/0 sis=128) [2]/[0] async=[2] r=0 lpr=128 pi=[70,128)/1 crt=40'1059 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] exit Started/Primary/Active/Activating 0.917700 5 0.000274
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 129 pg[10.1e( v 40'1059 (0'0,40'1059] local-lis/les=128/129 n=5 ec=53/34 lis/c=128/70 les/c/f=129/71/0 sis=128) [2]/[0] async=[2] r=0 lpr=128 pi=[70,128)/1 crt=40'1059 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 129 pg[10.1e( v 40'1059 (0'0,40'1059] local-lis/les=128/129 n=5 ec=53/34 lis/c=128/70 les/c/f=129/71/0 sis=128) [2]/[0] async=[2] r=0 lpr=128 pi=[70,128)/1 crt=40'1059 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=5}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 0.000080 1 0.000041
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 129 pg[10.1e( v 40'1059 (0'0,40'1059] local-lis/les=128/129 n=5 ec=53/34 lis/c=128/70 les/c/f=129/71/0 sis=128) [2]/[0] async=[2] r=0 lpr=128 pi=[70,128)/1 crt=40'1059 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=5}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 129 pg[10.1e( v 40'1059 (0'0,40'1059] local-lis/les=128/129 n=5 ec=53/34 lis/c=128/70 les/c/f=129/71/0 sis=128) [2]/[0] async=[2] r=0 lpr=128 pi=[70,128)/1 crt=40'1059 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=5}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.000345 1 0.000023
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 129 pg[10.1e( v 40'1059 (0'0,40'1059] local-lis/les=128/129 n=5 ec=53/34 lis/c=128/70 les/c/f=129/71/0 sis=128) [2]/[0] async=[2] r=0 lpr=128 pi=[70,128)/1 crt=40'1059 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=5}}] enter Started/Primary/Active/Recovering
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 129 pg[10.1e( v 40'1059 (0'0,40'1059] local-lis/les=128/129 n=5 ec=53/34 lis/c=128/70 les/c/f=129/71/0 sis=128) [2]/[0] async=[2] r=0 lpr=128 pi=[70,128)/1 crt=40'1059 mlcod 40'1059 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovering 0.035394 2 0.000101
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 129 pg[10.1e( v 40'1059 (0'0,40'1059] local-lis/les=128/129 n=5 ec=53/34 lis/c=128/70 les/c/f=129/71/0 sis=128) [2]/[0] async=[2] r=0 lpr=128 pi=[70,128)/1 crt=40'1059 mlcod 40'1059 active+remapped mbc={255={}}] enter Started/Primary/Active/Recovered
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81444864 unmapped: 4497408 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 129 handle_osd_map epochs [129,130], i have 129, src has [1,130]
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 129 handle_osd_map epochs [130,130], i have 130, src has [1,130]
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 130 pg[10.1f( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=97/97 les/c/f=98/98/0 sis=129) [0] r=0 lpr=129 pi=[97,129)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary/WaitActingChange 0.100369 2 0.000045
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 130 pg[10.1f( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=97/97 les/c/f=98/98/0 sis=129) [0] r=0 lpr=129 pi=[97,129)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary 0.100600 0 0.000000
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 130 pg[10.1e( v 40'1059 (0'0,40'1059] local-lis/les=128/129 n=5 ec=53/34 lis/c=128/70 les/c/f=129/71/0 sis=128) [2]/[0] async=[2] r=0 lpr=128 pi=[70,128)/1 crt=40'1059 mlcod 40'1059 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovered 0.064024 1 0.000046
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 130 pg[10.1e( v 40'1059 (0'0,40'1059] local-lis/les=128/129 n=5 ec=53/34 lis/c=128/70 les/c/f=129/71/0 sis=128) [2]/[0] async=[2] r=0 lpr=128 pi=[70,128)/1 crt=40'1059 mlcod 40'1059 active+remapped mbc={255={}}] exit Started/Primary/Active 1.017850 0 0.000000
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 130 pg[10.1e( v 40'1059 (0'0,40'1059] local-lis/les=128/129 n=5 ec=53/34 lis/c=128/70 les/c/f=129/71/0 sis=128) [2]/[0] async=[2] r=0 lpr=128 pi=[70,128)/1 crt=40'1059 mlcod 40'1059 active+remapped mbc={255={}}] exit Started/Primary 2.020940 0 0.000000
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 130 pg[10.1e( v 40'1059 (0'0,40'1059] local-lis/les=128/129 n=5 ec=53/34 lis/c=128/70 les/c/f=129/71/0 sis=128) [2]/[0] async=[2] r=0 lpr=128 pi=[70,128)/1 crt=40'1059 mlcod 40'1059 active+remapped mbc={255={}}] exit Started 2.020967 0 0.000000
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 130 pg[10.1e( v 40'1059 (0'0,40'1059] local-lis/les=128/129 n=5 ec=53/34 lis/c=128/70 les/c/f=129/71/0 sis=128) [2]/[0] async=[2] r=0 lpr=128 pi=[70,128)/1 crt=40'1059 mlcod 40'1059 active+remapped mbc={255={}}] enter Reset
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 130 pg[10.1f( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=97/97 les/c/f=98/98/0 sis=129) [0] r=0 lpr=129 pi=[97,129)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started 0.100759 0 0.000000
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 130 pg[10.1f( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=97/97 les/c/f=98/98/0 sis=129) [0] r=0 lpr=129 pi=[97,129)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 130 pg[10.1e( v 40'1059 (0'0,40'1059] local-lis/les=128/129 n=5 ec=53/34 lis/c=128/70 les/c/f=129/71/0 sis=130 pruub=15.899759293s) [2] async=[2] r=-1 lpr=130 pi=[70,130)/1 crt=40'1059 mlcod 40'1059 active pruub 308.632507324s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 130 pg[10.1f( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=97/97 les/c/f=98/98/0 sis=130) [0]/[2] r=-1 lpr=130 pi=[97,130)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 130 pg[10.1f( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=97/97 les/c/f=98/98/0 sis=130) [0]/[2] r=-1 lpr=130 pi=[97,130)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] exit Reset 0.000458 1 0.000730
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 130 pg[10.1f( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=97/97 les/c/f=98/98/0 sis=130) [0]/[2] r=-1 lpr=130 pi=[97,130)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Started
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 130 pg[10.1f( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=97/97 les/c/f=98/98/0 sis=130) [0]/[2] r=-1 lpr=130 pi=[97,130)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Start
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 130 pg[10.1f( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=97/97 les/c/f=98/98/0 sis=130) [0]/[2] r=-1 lpr=130 pi=[97,130)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 130 pg[10.1f( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=97/97 les/c/f=98/98/0 sis=130) [0]/[2] r=-1 lpr=130 pi=[97,130)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] exit Start 0.000102 0 0.000000
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 130 pg[10.1f( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=97/97 les/c/f=98/98/0 sis=130) [0]/[2] r=-1 lpr=130 pi=[97,130)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Started/Stray
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 130 pg[10.1e( v 40'1059 (0'0,40'1059] local-lis/les=128/129 n=5 ec=53/34 lis/c=128/70 les/c/f=129/71/0 sis=130 pruub=15.895454407s) [2] r=-1 lpr=130 pi=[70,130)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 308.632507324s@ mbc={}] exit Reset 0.004411 1 0.004537
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 130 pg[10.1e( v 40'1059 (0'0,40'1059] local-lis/les=128/129 n=5 ec=53/34 lis/c=128/70 les/c/f=129/71/0 sis=130 pruub=15.895454407s) [2] r=-1 lpr=130 pi=[70,130)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 308.632507324s@ mbc={}] enter Started
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 130 pg[10.1e( v 40'1059 (0'0,40'1059] local-lis/les=128/129 n=5 ec=53/34 lis/c=128/70 les/c/f=129/71/0 sis=130 pruub=15.895454407s) [2] r=-1 lpr=130 pi=[70,130)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 308.632507324s@ mbc={}] enter Start
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 130 pg[10.1e( v 40'1059 (0'0,40'1059] local-lis/les=128/129 n=5 ec=53/34 lis/c=128/70 les/c/f=129/71/0 sis=130 pruub=15.895454407s) [2] r=-1 lpr=130 pi=[70,130)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 308.632507324s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 130 pg[10.1e( v 40'1059 (0'0,40'1059] local-lis/les=128/129 n=5 ec=53/34 lis/c=128/70 les/c/f=129/71/0 sis=130 pruub=15.895454407s) [2] r=-1 lpr=130 pi=[70,130)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 308.632507324s@ mbc={}] exit Start 0.000009 0 0.000000
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 130 pg[10.1e( v 40'1059 (0'0,40'1059] local-lis/les=128/129 n=5 ec=53/34 lis/c=128/70 les/c/f=129/71/0 sis=130 pruub=15.895454407s) [2] r=-1 lpr=130 pi=[70,130)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 308.632507324s@ mbc={}] enter Started/Stray
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 130 handle_osd_map epochs [130,130], i have 130, src has [1,130]
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 130 heartbeat osd_stat(store_statfs(0x4fc607000/0x0/0x4ffc00000, data 0x15ac76/0x204000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct  9 10:05:06 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 10.a scrub starts
Oct  9 10:05:06 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 10.a scrub ok
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81453056 unmapped: 4489216 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 130 heartbeat osd_stat(store_statfs(0x4fc603000/0x0/0x4ffc00000, data 0x15cc5f/0x207000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 130 handle_osd_map epochs [131,131], i have 130, src has [1,131]
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 130 handle_osd_map epochs [131,131], i have 131, src has [1,131]
Oct  9 10:05:06 compute-1 ceph-osd[7514]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Oct  9 10:05:06 compute-1 ceph-osd[7514]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 131 pg[10.1f( v 40'1059 lc 0'0 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=97/97 les/c/f=98/98/0 sis=130) [0]/[2] r=-1 lpr=130 pi=[97,130)/1 crt=40'1059 mlcod 0'0 remapped NOTIFY m=5 mbc={}] exit Started/Stray 1.207355 5 0.000520
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 131 pg[10.1f( v 40'1059 lc 0'0 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=97/97 les/c/f=98/98/0 sis=130) [0]/[2] r=-1 lpr=130 pi=[97,130)/1 crt=40'1059 mlcod 0'0 remapped NOTIFY m=5 mbc={}] enter Started/ReplicaActive
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 131 pg[10.1f( v 40'1059 lc 0'0 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=97/97 les/c/f=98/98/0 sis=130) [0]/[2] r=-1 lpr=130 pi=[97,130)/1 crt=40'1059 mlcod 0'0 remapped NOTIFY m=5 mbc={}] enter Started/ReplicaActive/RepNotRecovering
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 131 pg[10.1e( v 40'1059 (0'0,40'1059] local-lis/les=128/129 n=5 ec=53/34 lis/c=128/70 les/c/f=129/71/0 sis=130) [2] r=-1 lpr=130 pi=[70,130)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.203806 6 0.000178
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 131 pg[10.1e( v 40'1059 (0'0,40'1059] local-lis/les=128/129 n=5 ec=53/34 lis/c=128/70 les/c/f=129/71/0 sis=130) [2] r=-1 lpr=130 pi=[70,130)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 131 pg[10.1e( v 40'1059 (0'0,40'1059] local-lis/les=128/129 n=5 ec=53/34 lis/c=128/70 les/c/f=129/71/0 sis=130) [2] r=-1 lpr=130 pi=[70,130)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 131 pg[10.1e( v 40'1059 (0'0,40'1059] local-lis/les=128/129 n=5 ec=53/34 lis/c=128/70 les/c/f=129/71/0 sis=130) [2] r=-1 lpr=130 pi=[70,130)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.001910 2 0.000149
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 131 pg[10.1e( v 40'1059 (0'0,40'1059] local-lis/les=128/129 n=5 ec=53/34 lis/c=128/70 les/c/f=129/71/0 sis=130) [2] r=-1 lpr=130 pi=[70,130)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 131 pg[10.1f( v 40'1059 lc 40'495 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=130/97 les/c/f=131/98/0 sis=130) [0]/[2] r=-1 lpr=130 pi=[97,130)/1 luod=0'0 crt=40'1059 lcod 0'0 mlcod 0'0 active+remapped m=5 mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.003054 4 0.000130
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 131 pg[10.1f( v 40'1059 lc 40'495 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=130/97 les/c/f=131/98/0 sis=130) [0]/[2] r=-1 lpr=130 pi=[97,130)/1 luod=0'0 crt=40'1059 lcod 0'0 mlcod 0'0 active+remapped m=5 mbc={}] enter Started/ReplicaActive/RepWaitRecoveryReserved
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 131 pg[10.1f( v 40'1059 lc 40'495 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=130/97 les/c/f=131/98/0 sis=130) [0]/[2] r=-1 lpr=130 pi=[97,130)/1 luod=0'0 crt=40'1059 lcod 0'0 mlcod 0'0 active+remapped m=5 mbc={}] exit Started/ReplicaActive/RepWaitRecoveryReserved 0.000064 1 0.000036
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 131 pg[10.1f( v 40'1059 lc 40'495 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=130/97 les/c/f=131/98/0 sis=130) [0]/[2] r=-1 lpr=130 pi=[97,130)/1 luod=0'0 crt=40'1059 lcod 0'0 mlcod 0'0 active+remapped m=5 mbc={}] enter Started/ReplicaActive/RepRecovering
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 131 pg[10.1e( v 40'1059 (0'0,40'1059] lb MIN local-lis/les=128/129 n=5 ec=53/34 lis/c=128/70 les/c/f=129/71/0 sis=130) [2] r=-1 lpr=130 DELETING pi=[70,130)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.042146 2 0.000241
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 131 pg[10.1e( v 40'1059 (0'0,40'1059] lb MIN local-lis/les=128/129 n=5 ec=53/34 lis/c=128/70 les/c/f=129/71/0 sis=130) [2] r=-1 lpr=130 pi=[70,130)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.044125 0 0.000000
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 131 pg[10.1e( v 40'1059 (0'0,40'1059] lb MIN local-lis/les=128/129 n=5 ec=53/34 lis/c=128/70 les/c/f=129/71/0 sis=130) [2] r=-1 lpr=130 pi=[70,130)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.248037 0 0.000000
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 131 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=130/97 les/c/f=131/98/0 sis=130) [0]/[2] r=-1 lpr=130 pi=[97,130)/1 luod=0'0 crt=40'1059 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive/RepRecovering 0.070374 1 0.000064
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 131 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=130/97 les/c/f=131/98/0 sis=130) [0]/[2] r=-1 lpr=130 pi=[97,130)/1 luod=0'0 crt=40'1059 mlcod 0'0 active+remapped mbc={}] enter Started/ReplicaActive/RepNotRecovering
Oct  9 10:05:06 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 10.0 scrub starts
Oct  9 10:05:06 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 10.0 scrub ok
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 131 handle_osd_map epochs [131,132], i have 131, src has [1,132]
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 132 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=130/97 les/c/f=131/98/0 sis=130) [0]/[2] r=-1 lpr=130 pi=[97,130)/1 luod=0'0 crt=40'1059 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.466371 1 0.000036
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 132 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=130/97 les/c/f=131/98/0 sis=130) [0]/[2] r=-1 lpr=130 pi=[97,130)/1 luod=0'0 crt=40'1059 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive 0.539968 0 0.000000
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 132 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=130/97 les/c/f=131/98/0 sis=130) [0]/[2] r=-1 lpr=130 pi=[97,130)/1 luod=0'0 crt=40'1059 mlcod 0'0 active+remapped mbc={}] exit Started 1.747725 0 0.000000
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 132 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=130/97 les/c/f=131/98/0 sis=130) [0]/[2] r=-1 lpr=130 pi=[97,130)/1 luod=0'0 crt=40'1059 mlcod 0'0 active+remapped mbc={}] enter Reset
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 132 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=130/97 les/c/f=131/98/0 sis=132) [0] r=0 lpr=132 pi=[97,132)/1 luod=0'0 crt=40'1059 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 132 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=130/97 les/c/f=131/98/0 sis=132) [0] r=0 lpr=132 pi=[97,132)/1 crt=40'1059 mlcod 0'0 unknown mbc={}] exit Reset 0.000078 1 0.000113
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 132 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=130/97 les/c/f=131/98/0 sis=132) [0] r=0 lpr=132 pi=[97,132)/1 crt=40'1059 mlcod 0'0 unknown mbc={}] enter Started
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 132 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=130/97 les/c/f=131/98/0 sis=132) [0] r=0 lpr=132 pi=[97,132)/1 crt=40'1059 mlcod 0'0 unknown mbc={}] enter Start
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 132 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=130/97 les/c/f=131/98/0 sis=132) [0] r=0 lpr=132 pi=[97,132)/1 crt=40'1059 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 132 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=130/97 les/c/f=131/98/0 sis=132) [0] r=0 lpr=132 pi=[97,132)/1 crt=40'1059 mlcod 0'0 unknown mbc={}] exit Start 0.000005 0 0.000000
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 132 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=130/97 les/c/f=131/98/0 sis=132) [0] r=0 lpr=132 pi=[97,132)/1 crt=40'1059 mlcod 0'0 unknown mbc={}] enter Started/Primary
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 132 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=130/97 les/c/f=131/98/0 sis=132) [0] r=0 lpr=132 pi=[97,132)/1 crt=40'1059 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 132 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=130/97 les/c/f=131/98/0 sis=132) [0] r=0 lpr=132 pi=[97,132)/1 crt=40'1059 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 132 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=130/97 les/c/f=131/98/0 sis=132) [0] r=0 lpr=132 pi=[97,132)/1 crt=40'1059 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000469 2 0.000032
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 132 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=130/97 les/c/f=131/98/0 sis=132) [0] r=0 lpr=132 pi=[97,132)/1 crt=40'1059 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 132 handle_osd_map epochs [132,132], i have 132, src has [1,132]
Oct  9 10:05:06 compute-1 ceph-osd[7514]: merge_log_dups log.dups.size()=0olog.dups.size()=32
Oct  9 10:05:06 compute-1 ceph-osd[7514]: end of merge_log_dups changed=1 log.dups.size()=0 olog.dups.size()=32
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 132 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=130/131 n=5 ec=53/34 lis/c=130/97 les/c/f=131/98/0 sis=132) [0] r=0 lpr=132 pi=[97,132)/1 crt=40'1059 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.001068 2 0.000055
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 132 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=130/131 n=5 ec=53/34 lis/c=130/97 les/c/f=131/98/0 sis=132) [0] r=0 lpr=132 pi=[97,132)/1 crt=40'1059 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 132 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=130/131 n=5 ec=53/34 lis/c=130/97 les/c/f=131/98/0 sis=132) [0] r=0 lpr=132 pi=[97,132)/1 crt=40'1059 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000015 0 0.000000
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 132 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=130/131 n=5 ec=53/34 lis/c=130/97 les/c/f=131/98/0 sis=132) [0] r=0 lpr=132 pi=[97,132)/1 crt=40'1059 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81502208 unmapped: 4440064 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 132 ms_handle_reset con 0x560c9c8a1800 session 0x560c9d630d20
Oct  9 10:05:06 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 10.d scrub starts
Oct  9 10:05:06 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 10.d scrub ok
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 132 handle_osd_map epochs [133,133], i have 132, src has [1,133]
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 132 handle_osd_map epochs [132,133], i have 133, src has [1,133]
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 133 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=130/131 n=5 ec=53/34 lis/c=130/97 les/c/f=131/98/0 sis=132) [0] r=0 lpr=132 pi=[97,132)/1 crt=40'1059 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.002487 2 0.000117
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 133 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=130/131 n=5 ec=53/34 lis/c=130/97 les/c/f=131/98/0 sis=132) [0] r=0 lpr=132 pi=[97,132)/1 crt=40'1059 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.004414 0 0.000000
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 133 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=130/131 n=5 ec=53/34 lis/c=130/97 les/c/f=131/98/0 sis=132) [0] r=0 lpr=132 pi=[97,132)/1 crt=40'1059 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 133 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=132/133 n=5 ec=53/34 lis/c=130/97 les/c/f=131/98/0 sis=132) [0] r=0 lpr=132 pi=[97,132)/1 crt=40'1059 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 133 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=132/133 n=5 ec=53/34 lis/c=130/97 les/c/f=131/98/0 sis=132) [0] r=0 lpr=132 pi=[97,132)/1 crt=40'1059 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 133 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=132/133 n=5 ec=53/34 lis/c=132/97 les/c/f=133/98/0 sis=132) [0] r=0 lpr=132 pi=[97,132)/1 crt=40'1059 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.002016 4 0.001053
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 133 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=132/133 n=5 ec=53/34 lis/c=132/97 les/c/f=133/98/0 sis=132) [0] r=0 lpr=132 pi=[97,132)/1 crt=40'1059 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 133 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=132/133 n=5 ec=53/34 lis/c=132/97 les/c/f=133/98/0 sis=132) [0] r=0 lpr=132 pi=[97,132)/1 crt=40'1059 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000011 0 0.000000
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 133 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=132/133 n=5 ec=53/34 lis/c=132/97 les/c/f=133/98/0 sis=132) [0] r=0 lpr=132 pi=[97,132)/1 crt=40'1059 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81518592 unmapped: 4423680 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 952396 data_alloc: 218103808 data_used: 282624
Oct  9 10:05:06 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 10.b scrub starts
Oct  9 10:05:06 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 10.b scrub ok
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81518592 unmapped: 4423680 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81518592 unmapped: 4423680 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fa000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81526784 unmapped: 4415488 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81534976 unmapped: 4407296 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81543168 unmapped: 4399104 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 953544 data_alloc: 218103808 data_used: 282624
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fa000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81543168 unmapped: 4399104 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81551360 unmapped: 4390912 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81551360 unmapped: 4390912 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81551360 unmapped: 4390912 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 13.116673470s of 13.153404236s, submitted: 45
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fa000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81559552 unmapped: 4382720 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 953676 data_alloc: 218103808 data_used: 282624
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81559552 unmapped: 4382720 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81567744 unmapped: 4374528 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81469440 unmapped: 4472832 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fa000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81469440 unmapped: 4472832 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81477632 unmapped: 4464640 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 954348 data_alloc: 218103808 data_used: 282624
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 133 ms_handle_reset con 0x560c9c8a1000 session 0x560c9d2dc5a0
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 133 ms_handle_reset con 0x560c9b7d1800 session 0x560c9d8512c0
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81477632 unmapped: 4464640 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 133 ms_handle_reset con 0x560c9cbe2c00 session 0x560c9c5994a0
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 133 ms_handle_reset con 0x560c9cf88000 session 0x560c9d20c960
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81485824 unmapped: 4456448 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81485824 unmapped: 4456448 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81494016 unmapped: 4448256 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81494016 unmapped: 4448256 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 954348 data_alloc: 218103808 data_used: 282624
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81502208 unmapped: 4440064 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81502208 unmapped: 4440064 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81502208 unmapped: 4440064 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81518592 unmapped: 4423680 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 14.990660667s of 14.992744446s, submitted: 2
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81526784 unmapped: 4415488 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 954216 data_alloc: 218103808 data_used: 282624
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81534976 unmapped: 4407296 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81534976 unmapped: 4407296 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81543168 unmapped: 4399104 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81543168 unmapped: 4399104 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81543168 unmapped: 4399104 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 954480 data_alloc: 218103808 data_used: 282624
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81551360 unmapped: 4390912 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81551360 unmapped: 4390912 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81559552 unmapped: 4382720 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81575936 unmapped: 4366336 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81575936 unmapped: 4366336 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 955992 data_alloc: 218103808 data_used: 282624
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81584128 unmapped: 4358144 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81584128 unmapped: 4358144 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 133 ms_handle_reset con 0x560c9a066000 session 0x560c9d208d20
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81592320 unmapped: 4349952 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 13.551061630s of 13.556247711s, submitted: 4
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81592320 unmapped: 4349952 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81600512 unmapped: 4341760 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 955401 data_alloc: 218103808 data_used: 282624
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81600512 unmapped: 4341760 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81600512 unmapped: 4341760 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81608704 unmapped: 4333568 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81608704 unmapped: 4333568 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81616896 unmapped: 4325376 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 955137 data_alloc: 218103808 data_used: 282624
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81616896 unmapped: 4325376 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81616896 unmapped: 4325376 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81625088 unmapped: 4317184 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81633280 unmapped: 4308992 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81641472 unmapped: 4300800 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 955137 data_alloc: 218103808 data_used: 282624
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81641472 unmapped: 4300800 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81641472 unmapped: 4300800 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81649664 unmapped: 4292608 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81649664 unmapped: 4292608 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81657856 unmapped: 4284416 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 955137 data_alloc: 218103808 data_used: 282624
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81657856 unmapped: 4284416 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81657856 unmapped: 4284416 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81666048 unmapped: 4276224 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81666048 unmapped: 4276224 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81674240 unmapped: 4268032 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 955137 data_alloc: 218103808 data_used: 282624
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81674240 unmapped: 4268032 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81682432 unmapped: 4259840 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81682432 unmapped: 4259840 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81682432 unmapped: 4259840 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct  9 10:05:06 compute-1 ceph-osd[7514]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 26.443452835s of 26.446563721s, submitted: 3
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81698816 unmapped: 4243456 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 955005 data_alloc: 218103808 data_used: 282624
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81698816 unmapped: 4243456 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81698816 unmapped: 4243456 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81707008 unmapped: 4235264 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81707008 unmapped: 4235264 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81715200 unmapped: 4227072 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 955005 data_alloc: 218103808 data_used: 282624
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81715200 unmapped: 4227072 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81723392 unmapped: 4218880 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81723392 unmapped: 4218880 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81723392 unmapped: 4218880 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81731584 unmapped: 4210688 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 955005 data_alloc: 218103808 data_used: 282624
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81731584 unmapped: 4210688 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81739776 unmapped: 4202496 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81739776 unmapped: 4202496 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81739776 unmapped: 4202496 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81731584 unmapped: 4210688 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 955005 data_alloc: 218103808 data_used: 282624
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81731584 unmapped: 4210688 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81731584 unmapped: 4210688 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81739776 unmapped: 4202496 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81739776 unmapped: 4202496 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81747968 unmapped: 4194304 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 955005 data_alloc: 218103808 data_used: 282624
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81747968 unmapped: 4194304 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81747968 unmapped: 4194304 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81756160 unmapped: 4186112 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81756160 unmapped: 4186112 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81764352 unmapped: 4177920 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 955005 data_alloc: 218103808 data_used: 282624
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81764352 unmapped: 4177920 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81764352 unmapped: 4177920 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81772544 unmapped: 4169728 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81772544 unmapped: 4169728 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81780736 unmapped: 4161536 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 955005 data_alloc: 218103808 data_used: 282624
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81780736 unmapped: 4161536 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81788928 unmapped: 4153344 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81788928 unmapped: 4153344 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81788928 unmapped: 4153344 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81797120 unmapped: 4145152 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 955005 data_alloc: 218103808 data_used: 282624
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81797120 unmapped: 4145152 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81797120 unmapped: 4145152 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81805312 unmapped: 4136960 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81805312 unmapped: 4136960 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 133 ms_handle_reset con 0x560c9dab7400 session 0x560c9d20f860
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 133 ms_handle_reset con 0x560c9c8a0800 session 0x560c9cf7a960
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81813504 unmapped: 4128768 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 955005 data_alloc: 218103808 data_used: 282624
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81813504 unmapped: 4128768 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81813504 unmapped: 4128768 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81821696 unmapped: 4120576 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81821696 unmapped: 4120576 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81829888 unmapped: 4112384 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 955005 data_alloc: 218103808 data_used: 282624
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81829888 unmapped: 4112384 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81838080 unmapped: 4104192 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81838080 unmapped: 4104192 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81838080 unmapped: 4104192 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81846272 unmapped: 4096000 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 955005 data_alloc: 218103808 data_used: 282624
Oct  9 10:05:06 compute-1 ceph-osd[7514]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 50.453056335s of 50.454822540s, submitted: 1
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81846272 unmapped: 4096000 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81854464 unmapped: 4087808 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81854464 unmapped: 4087808 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81862656 unmapped: 4079616 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81862656 unmapped: 4079616 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 955137 data_alloc: 218103808 data_used: 282624
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81862656 unmapped: 4079616 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81870848 unmapped: 4071424 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81870848 unmapped: 4071424 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81879040 unmapped: 4063232 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81879040 unmapped: 4063232 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956649 data_alloc: 218103808 data_used: 282624
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81887232 unmapped: 4055040 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81887232 unmapped: 4055040 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81887232 unmapped: 4055040 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81895424 unmapped: 4046848 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81895424 unmapped: 4046848 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956649 data_alloc: 218103808 data_used: 282624
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81903616 unmapped: 4038656 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81903616 unmapped: 4038656 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 17.000545502s of 17.003219604s, submitted: 2
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81911808 unmapped: 4030464 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81911808 unmapped: 4030464 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81911808 unmapped: 4030464 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81920000 unmapped: 4022272 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81920000 unmapped: 4022272 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81920000 unmapped: 4022272 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81928192 unmapped: 4014080 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81928192 unmapped: 4014080 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81936384 unmapped: 4005888 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81936384 unmapped: 4005888 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81944576 unmapped: 3997696 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81944576 unmapped: 3997696 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81952768 unmapped: 3989504 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81952768 unmapped: 3989504 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81960960 unmapped: 3981312 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81977344 unmapped: 3964928 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81977344 unmapped: 3964928 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81985536 unmapped: 3956736 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81985536 unmapped: 3956736 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81985536 unmapped: 3956736 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81993728 unmapped: 3948544 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81993728 unmapped: 3948544 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82001920 unmapped: 3940352 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 133 ms_handle_reset con 0x560c9c8a3000 session 0x560c9d2dd0e0
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 133 ms_handle_reset con 0x560c9c8a0000 session 0x560c9d20fa40
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82001920 unmapped: 3940352 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82010112 unmapped: 3932160 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82010112 unmapped: 3932160 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82010112 unmapped: 3932160 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82010112 unmapped: 3932160 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82010112 unmapped: 3932160 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82010112 unmapped: 3932160 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82018304 unmapped: 3923968 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82018304 unmapped: 3923968 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82026496 unmapped: 3915776 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82026496 unmapped: 3915776 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 34.123451233s of 34.124847412s, submitted: 1
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 3907584 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 3907584 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82042880 unmapped: 3899392 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82042880 unmapped: 3899392 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956649 data_alloc: 218103808 data_used: 282624
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82042880 unmapped: 3899392 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82051072 unmapped: 3891200 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82059264 unmapped: 3883008 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82059264 unmapped: 3883008 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82067456 unmapped: 3874816 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956649 data_alloc: 218103808 data_used: 282624
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82067456 unmapped: 3874816 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82075648 unmapped: 3866624 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82075648 unmapped: 3866624 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82075648 unmapped: 3866624 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82083840 unmapped: 3858432 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956649 data_alloc: 218103808 data_used: 282624
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82083840 unmapped: 3858432 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 14.923893929s of 14.924749374s, submitted: 1
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82092032 unmapped: 3850240 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82092032 unmapped: 3850240 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82092032 unmapped: 3850240 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82100224 unmapped: 3842048 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82100224 unmapped: 3842048 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82108416 unmapped: 3833856 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82108416 unmapped: 3833856 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82116608 unmapped: 3825664 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82116608 unmapped: 3825664 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82116608 unmapped: 3825664 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82124800 unmapped: 3817472 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82132992 unmapped: 3809280 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82141184 unmapped: 3801088 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82141184 unmapped: 3801088 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82141184 unmapped: 3801088 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82149376 unmapped: 3792896 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82149376 unmapped: 3792896 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82157568 unmapped: 3784704 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82157568 unmapped: 3784704 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82165760 unmapped: 3776512 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82165760 unmapped: 3776512 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82165760 unmapped: 3776512 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82173952 unmapped: 3768320 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82173952 unmapped: 3768320 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82182144 unmapped: 3760128 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82182144 unmapped: 3760128 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82182144 unmapped: 3760128 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82190336 unmapped: 3751936 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82190336 unmapped: 3751936 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82198528 unmapped: 3743744 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82198528 unmapped: 3743744 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82206720 unmapped: 3735552 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82206720 unmapped: 3735552 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82206720 unmapped: 3735552 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82214912 unmapped: 3727360 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82214912 unmapped: 3727360 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82231296 unmapped: 3710976 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82231296 unmapped: 3710976 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82231296 unmapped: 3710976 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82239488 unmapped: 3702784 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82239488 unmapped: 3702784 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82247680 unmapped: 3694592 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82247680 unmapped: 3694592 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82255872 unmapped: 3686400 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82255872 unmapped: 3686400 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82264064 unmapped: 3678208 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82264064 unmapped: 3678208 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82264064 unmapped: 3678208 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82272256 unmapped: 3670016 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82272256 unmapped: 3670016 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82272256 unmapped: 3670016 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82280448 unmapped: 3661824 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82280448 unmapped: 3661824 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82280448 unmapped: 3661824 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82288640 unmapped: 3653632 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82288640 unmapped: 3653632 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82305024 unmapped: 3637248 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82305024 unmapped: 3637248 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82313216 unmapped: 3629056 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82313216 unmapped: 3629056 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82321408 unmapped: 3620864 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82321408 unmapped: 3620864 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82321408 unmapped: 3620864 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82329600 unmapped: 3612672 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82329600 unmapped: 3612672 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82337792 unmapped: 3604480 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82337792 unmapped: 3604480 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82345984 unmapped: 3596288 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82345984 unmapped: 3596288 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82345984 unmapped: 3596288 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82354176 unmapped: 3588096 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82362368 unmapped: 3579904 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82362368 unmapped: 3579904 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82362368 unmapped: 3579904 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82362368 unmapped: 3579904 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82370560 unmapped: 3571712 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82370560 unmapped: 3571712 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82378752 unmapped: 3563520 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82378752 unmapped: 3563520 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82386944 unmapped: 3555328 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82386944 unmapped: 3555328 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82403328 unmapped: 3538944 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82403328 unmapped: 3538944 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82403328 unmapped: 3538944 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82411520 unmapped: 3530752 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82411520 unmapped: 3530752 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82419712 unmapped: 3522560 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82419712 unmapped: 3522560 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82427904 unmapped: 3514368 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82427904 unmapped: 3514368 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82427904 unmapped: 3514368 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82436096 unmapped: 3506176 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82436096 unmapped: 3506176 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82444288 unmapped: 3497984 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82444288 unmapped: 3497984 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82444288 unmapped: 3497984 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82452480 unmapped: 3489792 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82452480 unmapped: 3489792 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82460672 unmapped: 3481600 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82460672 unmapped: 3481600 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82460672 unmapped: 3481600 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82477056 unmapped: 3465216 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82477056 unmapped: 3465216 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82485248 unmapped: 3457024 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82485248 unmapped: 3457024 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82485248 unmapped: 3457024 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82493440 unmapped: 3448832 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82493440 unmapped: 3448832 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82493440 unmapped: 3448832 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82501632 unmapped: 3440640 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82501632 unmapped: 3440640 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82509824 unmapped: 3432448 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82509824 unmapped: 3432448 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82518016 unmapped: 3424256 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82526208 unmapped: 3416064 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82526208 unmapped: 3416064 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82534400 unmapped: 3407872 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82534400 unmapped: 3407872 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82542592 unmapped: 3399680 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82542592 unmapped: 3399680 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82550784 unmapped: 3391488 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82550784 unmapped: 3391488 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 600.0 total, 600.0 interval#012Cumulative writes: 8413 writes, 33K keys, 8413 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.04 MB/s#012Cumulative WAL: 8413 writes, 1875 syncs, 4.49 writes per sync, written: 0.02 GB, 0.04 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 8413 writes, 33K keys, 8413 commit groups, 1.0 writes per commit group, ingest: 21.18 MB, 0.04 MB/s#012Interval WAL: 8413 writes, 1875 syncs, 4.49 writes per sync, written: 0.02 GB, 0.04 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0#012 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 600.0 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x560c99273350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 2.3e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **#012#012** Compaction Stats [m-0] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-0] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 600.0 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x560c99273350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 2.3e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [m-0] **#012#012** Compaction Stats [m-1] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-1] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 600.0 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82616320 unmapped: 3325952 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82616320 unmapped: 3325952 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82616320 unmapped: 3325952 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82624512 unmapped: 3317760 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82632704 unmapped: 3309568 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82640896 unmapped: 3301376 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82640896 unmapped: 3301376 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82640896 unmapped: 3301376 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82649088 unmapped: 3293184 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82649088 unmapped: 3293184 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82657280 unmapped: 3284992 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82657280 unmapped: 3284992 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82657280 unmapped: 3284992 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82665472 unmapped: 3276800 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82665472 unmapped: 3276800 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82673664 unmapped: 3268608 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82673664 unmapped: 3268608 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82681856 unmapped: 3260416 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82681856 unmapped: 3260416 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82681856 unmapped: 3260416 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82690048 unmapped: 3252224 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82690048 unmapped: 3252224 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82698240 unmapped: 3244032 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82698240 unmapped: 3244032 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82706432 unmapped: 3235840 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82714624 unmapped: 3227648 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82714624 unmapped: 3227648 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82722816 unmapped: 3219456 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82722816 unmapped: 3219456 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82722816 unmapped: 3219456 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82731008 unmapped: 3211264 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82731008 unmapped: 3211264 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82731008 unmapped: 3211264 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82739200 unmapped: 3203072 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82739200 unmapped: 3203072 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82747392 unmapped: 3194880 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82747392 unmapped: 3194880 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82755584 unmapped: 3186688 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82755584 unmapped: 3186688 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82755584 unmapped: 3186688 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82763776 unmapped: 3178496 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82763776 unmapped: 3178496 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82771968 unmapped: 3170304 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82771968 unmapped: 3170304 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82780160 unmapped: 3162112 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82780160 unmapped: 3162112 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82780160 unmapped: 3162112 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82788352 unmapped: 3153920 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82788352 unmapped: 3153920 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82804736 unmapped: 3137536 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82804736 unmapped: 3137536 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82804736 unmapped: 3137536 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82812928 unmapped: 3129344 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82812928 unmapped: 3129344 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82821120 unmapped: 3121152 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82821120 unmapped: 3121152 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82821120 unmapped: 3121152 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82829312 unmapped: 3112960 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82829312 unmapped: 3112960 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82829312 unmapped: 3112960 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82837504 unmapped: 3104768 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82837504 unmapped: 3104768 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82845696 unmapped: 3096576 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82845696 unmapped: 3096576 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82853888 unmapped: 3088384 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82853888 unmapped: 3088384 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82853888 unmapped: 3088384 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82862080 unmapped: 3080192 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82862080 unmapped: 3080192 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82878464 unmapped: 3063808 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82878464 unmapped: 3063808 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82886656 unmapped: 3055616 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82886656 unmapped: 3055616 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82886656 unmapped: 3055616 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82894848 unmapped: 3047424 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82894848 unmapped: 3047424 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82903040 unmapped: 3039232 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82903040 unmapped: 3039232 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82903040 unmapped: 3039232 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82911232 unmapped: 3031040 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82911232 unmapped: 3031040 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82919424 unmapped: 3022848 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82919424 unmapped: 3022848 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82927616 unmapped: 3014656 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 206.934524536s of 206.935745239s, submitted: 1
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [1])
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83247104 unmapped: 2695168 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83361792 unmapped: 2580480 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83361792 unmapped: 2580480 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83361792 unmapped: 2580480 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83361792 unmapped: 2580480 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83361792 unmapped: 2580480 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83361792 unmapped: 2580480 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83361792 unmapped: 2580480 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83361792 unmapped: 2580480 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83361792 unmapped: 2580480 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83361792 unmapped: 2580480 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83361792 unmapped: 2580480 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83361792 unmapped: 2580480 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83361792 unmapped: 2580480 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83361792 unmapped: 2580480 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83361792 unmapped: 2580480 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83361792 unmapped: 2580480 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83361792 unmapped: 2580480 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83361792 unmapped: 2580480 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83361792 unmapped: 2580480 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83361792 unmapped: 2580480 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83361792 unmapped: 2580480 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83361792 unmapped: 2580480 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83361792 unmapped: 2580480 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83361792 unmapped: 2580480 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83361792 unmapped: 2580480 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83361792 unmapped: 2580480 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83361792 unmapped: 2580480 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83361792 unmapped: 2580480 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83361792 unmapped: 2580480 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83361792 unmapped: 2580480 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83361792 unmapped: 2580480 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83361792 unmapped: 2580480 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83361792 unmapped: 2580480 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83361792 unmapped: 2580480 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83361792 unmapped: 2580480 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83361792 unmapped: 2580480 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83361792 unmapped: 2580480 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83361792 unmapped: 2580480 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83361792 unmapped: 2580480 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83369984 unmapped: 2572288 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83369984 unmapped: 2572288 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83369984 unmapped: 2572288 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83369984 unmapped: 2572288 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83369984 unmapped: 2572288 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83369984 unmapped: 2572288 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83369984 unmapped: 2572288 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83369984 unmapped: 2572288 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83369984 unmapped: 2572288 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83369984 unmapped: 2572288 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83369984 unmapped: 2572288 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83369984 unmapped: 2572288 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83369984 unmapped: 2572288 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83378176 unmapped: 2564096 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83378176 unmapped: 2564096 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83378176 unmapped: 2564096 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83378176 unmapped: 2564096 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2555904 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2555904 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2555904 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2555904 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2555904 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2555904 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2555904 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2555904 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2555904 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2555904 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2555904 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83394560 unmapped: 2547712 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83394560 unmapped: 2547712 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83394560 unmapped: 2547712 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83394560 unmapped: 2547712 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83394560 unmapped: 2547712 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83394560 unmapped: 2547712 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83394560 unmapped: 2547712 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83394560 unmapped: 2547712 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83394560 unmapped: 2547712 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83394560 unmapped: 2547712 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83394560 unmapped: 2547712 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83394560 unmapped: 2547712 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83394560 unmapped: 2547712 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83394560 unmapped: 2547712 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83394560 unmapped: 2547712 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83402752 unmapped: 2539520 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83402752 unmapped: 2539520 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83402752 unmapped: 2539520 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83402752 unmapped: 2539520 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83402752 unmapped: 2539520 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83410944 unmapped: 2531328 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83410944 unmapped: 2531328 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83410944 unmapped: 2531328 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83410944 unmapped: 2531328 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83410944 unmapped: 2531328 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83410944 unmapped: 2531328 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83410944 unmapped: 2531328 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83410944 unmapped: 2531328 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83410944 unmapped: 2531328 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83410944 unmapped: 2531328 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83410944 unmapped: 2531328 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83410944 unmapped: 2531328 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83410944 unmapped: 2531328 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83410944 unmapped: 2531328 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83410944 unmapped: 2531328 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83410944 unmapped: 2531328 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83410944 unmapped: 2531328 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83410944 unmapped: 2531328 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83410944 unmapped: 2531328 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83410944 unmapped: 2531328 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83410944 unmapped: 2531328 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83410944 unmapped: 2531328 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83410944 unmapped: 2531328 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83410944 unmapped: 2531328 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83410944 unmapped: 2531328 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83410944 unmapped: 2531328 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83410944 unmapped: 2531328 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83410944 unmapped: 2531328 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83419136 unmapped: 2523136 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83419136 unmapped: 2523136 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83419136 unmapped: 2523136 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83419136 unmapped: 2523136 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83419136 unmapped: 2523136 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83419136 unmapped: 2523136 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83419136 unmapped: 2523136 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83419136 unmapped: 2523136 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83419136 unmapped: 2523136 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83419136 unmapped: 2523136 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83419136 unmapped: 2523136 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83419136 unmapped: 2523136 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83419136 unmapped: 2523136 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83419136 unmapped: 2523136 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83419136 unmapped: 2523136 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83419136 unmapped: 2523136 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83419136 unmapped: 2523136 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83419136 unmapped: 2523136 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83419136 unmapped: 2523136 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83419136 unmapped: 2523136 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83419136 unmapped: 2523136 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83419136 unmapped: 2523136 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83419136 unmapped: 2523136 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83419136 unmapped: 2523136 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83419136 unmapped: 2523136 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83419136 unmapped: 2523136 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83419136 unmapped: 2523136 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83419136 unmapped: 2523136 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83419136 unmapped: 2523136 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83419136 unmapped: 2523136 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83427328 unmapped: 2514944 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83427328 unmapped: 2514944 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83427328 unmapped: 2514944 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83427328 unmapped: 2514944 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83427328 unmapped: 2514944 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83427328 unmapped: 2514944 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83427328 unmapped: 2514944 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83427328 unmapped: 2514944 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83427328 unmapped: 2514944 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83427328 unmapped: 2514944 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83427328 unmapped: 2514944 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83427328 unmapped: 2514944 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83427328 unmapped: 2514944 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83427328 unmapped: 2514944 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83427328 unmapped: 2514944 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83427328 unmapped: 2514944 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83427328 unmapped: 2514944 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83427328 unmapped: 2514944 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83427328 unmapped: 2514944 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83427328 unmapped: 2514944 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83427328 unmapped: 2514944 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83427328 unmapped: 2514944 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83427328 unmapped: 2514944 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83427328 unmapped: 2514944 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83427328 unmapped: 2514944 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83427328 unmapped: 2514944 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83427328 unmapped: 2514944 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83427328 unmapped: 2514944 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83427328 unmapped: 2514944 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83427328 unmapped: 2514944 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83435520 unmapped: 2506752 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83435520 unmapped: 2506752 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83435520 unmapped: 2506752 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83435520 unmapped: 2506752 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83435520 unmapped: 2506752 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83435520 unmapped: 2506752 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83435520 unmapped: 2506752 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83435520 unmapped: 2506752 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83435520 unmapped: 2506752 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83435520 unmapped: 2506752 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83435520 unmapped: 2506752 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83435520 unmapped: 2506752 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83435520 unmapped: 2506752 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83435520 unmapped: 2506752 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83435520 unmapped: 2506752 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83435520 unmapped: 2506752 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83435520 unmapped: 2506752 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83435520 unmapped: 2506752 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83435520 unmapped: 2506752 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83435520 unmapped: 2506752 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83435520 unmapped: 2506752 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83435520 unmapped: 2506752 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83435520 unmapped: 2506752 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83435520 unmapped: 2506752 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83435520 unmapped: 2506752 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83435520 unmapped: 2506752 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83435520 unmapped: 2506752 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83435520 unmapped: 2506752 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83435520 unmapped: 2506752 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83435520 unmapped: 2506752 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83435520 unmapped: 2506752 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83435520 unmapped: 2506752 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83435520 unmapped: 2506752 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83435520 unmapped: 2506752 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83435520 unmapped: 2506752 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83443712 unmapped: 2498560 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83443712 unmapped: 2498560 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83443712 unmapped: 2498560 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83443712 unmapped: 2498560 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83443712 unmapped: 2498560 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83443712 unmapped: 2498560 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83443712 unmapped: 2498560 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83443712 unmapped: 2498560 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83443712 unmapped: 2498560 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83443712 unmapped: 2498560 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83443712 unmapped: 2498560 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83443712 unmapped: 2498560 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83443712 unmapped: 2498560 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83443712 unmapped: 2498560 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83443712 unmapped: 2498560 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83443712 unmapped: 2498560 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83443712 unmapped: 2498560 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83443712 unmapped: 2498560 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83443712 unmapped: 2498560 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83443712 unmapped: 2498560 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83443712 unmapped: 2498560 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83443712 unmapped: 2498560 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83443712 unmapped: 2498560 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83443712 unmapped: 2498560 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83443712 unmapped: 2498560 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83443712 unmapped: 2498560 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83451904 unmapped: 2490368 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83451904 unmapped: 2490368 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83451904 unmapped: 2490368 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83451904 unmapped: 2490368 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83451904 unmapped: 2490368 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83451904 unmapped: 2490368 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83451904 unmapped: 2490368 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83451904 unmapped: 2490368 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83451904 unmapped: 2490368 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83451904 unmapped: 2490368 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83451904 unmapped: 2490368 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83451904 unmapped: 2490368 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83451904 unmapped: 2490368 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83451904 unmapped: 2490368 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83451904 unmapped: 2490368 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83451904 unmapped: 2490368 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83451904 unmapped: 2490368 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83451904 unmapped: 2490368 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83451904 unmapped: 2490368 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83460096 unmapped: 2482176 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83460096 unmapped: 2482176 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83460096 unmapped: 2482176 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83460096 unmapped: 2482176 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83460096 unmapped: 2482176 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83460096 unmapped: 2482176 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83460096 unmapped: 2482176 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83460096 unmapped: 2482176 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83468288 unmapped: 2473984 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [1])
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83476480 unmapped: 2465792 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 133 ms_handle_reset con 0x560c9ade0c00 session 0x560c9b978780
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83484672 unmapped: 2457600 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83484672 unmapped: 2457600 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83484672 unmapped: 2457600 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83484672 unmapped: 2457600 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83484672 unmapped: 2457600 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83484672 unmapped: 2457600 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83492864 unmapped: 2449408 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83492864 unmapped: 2449408 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83492864 unmapped: 2449408 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83492864 unmapped: 2449408 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83492864 unmapped: 2449408 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83492864 unmapped: 2449408 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83492864 unmapped: 2449408 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83492864 unmapped: 2449408 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83492864 unmapped: 2449408 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83492864 unmapped: 2449408 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83492864 unmapped: 2449408 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83492864 unmapped: 2449408 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83492864 unmapped: 2449408 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83492864 unmapped: 2449408 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83492864 unmapped: 2449408 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83492864 unmapped: 2449408 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83492864 unmapped: 2449408 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83492864 unmapped: 2449408 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83492864 unmapped: 2449408 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83492864 unmapped: 2449408 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83492864 unmapped: 2449408 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83492864 unmapped: 2449408 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83501056 unmapped: 2441216 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83501056 unmapped: 2441216 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83501056 unmapped: 2441216 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83501056 unmapped: 2441216 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83501056 unmapped: 2441216 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83501056 unmapped: 2441216 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83501056 unmapped: 2441216 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83501056 unmapped: 2441216 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83501056 unmapped: 2441216 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83501056 unmapped: 2441216 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83501056 unmapped: 2441216 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83501056 unmapped: 2441216 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83501056 unmapped: 2441216 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83501056 unmapped: 2441216 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83501056 unmapped: 2441216 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83501056 unmapped: 2441216 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83501056 unmapped: 2441216 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83501056 unmapped: 2441216 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83501056 unmapped: 2441216 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83501056 unmapped: 2441216 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83501056 unmapped: 2441216 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83501056 unmapped: 2441216 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83501056 unmapped: 2441216 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83509248 unmapped: 2433024 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83509248 unmapped: 2433024 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83509248 unmapped: 2433024 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83509248 unmapped: 2433024 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83509248 unmapped: 2433024 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83509248 unmapped: 2433024 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83509248 unmapped: 2433024 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83509248 unmapped: 2433024 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83509248 unmapped: 2433024 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83509248 unmapped: 2433024 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83509248 unmapped: 2433024 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83509248 unmapped: 2433024 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83509248 unmapped: 2433024 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83509248 unmapped: 2433024 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83509248 unmapped: 2433024 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83509248 unmapped: 2433024 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83509248 unmapped: 2433024 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83517440 unmapped: 2424832 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83517440 unmapped: 2424832 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83517440 unmapped: 2424832 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83517440 unmapped: 2424832 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83517440 unmapped: 2424832 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83517440 unmapped: 2424832 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83517440 unmapped: 2424832 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83517440 unmapped: 2424832 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83517440 unmapped: 2424832 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83517440 unmapped: 2424832 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83517440 unmapped: 2424832 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83517440 unmapped: 2424832 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83517440 unmapped: 2424832 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83517440 unmapped: 2424832 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83517440 unmapped: 2424832 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83517440 unmapped: 2424832 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83517440 unmapped: 2424832 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83517440 unmapped: 2424832 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83517440 unmapped: 2424832 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83525632 unmapped: 2416640 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83525632 unmapped: 2416640 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83525632 unmapped: 2416640 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83525632 unmapped: 2416640 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83525632 unmapped: 2416640 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83525632 unmapped: 2416640 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83525632 unmapped: 2416640 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83525632 unmapped: 2416640 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83525632 unmapped: 2416640 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83525632 unmapped: 2416640 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83525632 unmapped: 2416640 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83525632 unmapped: 2416640 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83525632 unmapped: 2416640 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83525632 unmapped: 2416640 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83525632 unmapped: 2416640 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83525632 unmapped: 2416640 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83525632 unmapped: 2416640 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83525632 unmapped: 2416640 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83525632 unmapped: 2416640 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83525632 unmapped: 2416640 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83525632 unmapped: 2416640 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83525632 unmapped: 2416640 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83525632 unmapped: 2416640 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83525632 unmapped: 2416640 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83533824 unmapped: 2408448 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83533824 unmapped: 2408448 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83533824 unmapped: 2408448 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83533824 unmapped: 2408448 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83533824 unmapped: 2408448 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83533824 unmapped: 2408448 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83533824 unmapped: 2408448 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83533824 unmapped: 2408448 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83533824 unmapped: 2408448 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83533824 unmapped: 2408448 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83533824 unmapped: 2408448 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83533824 unmapped: 2408448 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83533824 unmapped: 2408448 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83533824 unmapped: 2408448 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83533824 unmapped: 2408448 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83533824 unmapped: 2408448 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83533824 unmapped: 2408448 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83533824 unmapped: 2408448 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83533824 unmapped: 2408448 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83533824 unmapped: 2408448 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83542016 unmapped: 2400256 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 133 handle_osd_map epochs [133,134], i have 133, src has [1,134]
Oct  9 10:05:06 compute-1 ceph-osd[7514]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 496.980529785s of 497.169708252s, submitted: 379
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 134 heartbeat osd_stat(store_statfs(0x4fc5f7000/0x0/0x4ffc00000, data 0x164ccd/0x214000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83542016 unmapped: 2400256 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 134 heartbeat osd_stat(store_statfs(0x4fc5f7000/0x0/0x4ffc00000, data 0x164ccd/0x214000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 134 handle_osd_map epochs [135,135], i have 134, src has [1,135]
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 84631552 unmapped: 1310720 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 968373 data_alloc: 218103808 data_used: 282624
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 135 handle_osd_map epochs [135,136], i have 135, src has [1,136]
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 136 ms_handle_reset con 0x560c9c8a3000 session 0x560c9b4752c0
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 84647936 unmapped: 1294336 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 136 handle_osd_map epochs [137,137], i have 136, src has [1,137]
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 137 ms_handle_reset con 0x560c9dab7400 session 0x560c9c598780
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 86056960 unmapped: 16670720 heap: 102727680 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 86106112 unmapped: 16621568 heap: 102727680 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 86106112 unmapped: 16621568 heap: 102727680 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fb179000/0x0/0x4ffc00000, data 0x15db086/0x1690000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 86138880 unmapped: 16588800 heap: 102727680 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1114638 data_alloc: 218103808 data_used: 282624
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 86138880 unmapped: 16588800 heap: 102727680 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 137 handle_osd_map epochs [137,138], i have 137, src has [1,138]
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 86188032 unmapped: 16539648 heap: 102727680 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 86188032 unmapped: 16539648 heap: 102727680 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 86188032 unmapped: 16539648 heap: 102727680 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fb178000/0x0/0x4ffc00000, data 0x15dd058/0x1693000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 86188032 unmapped: 16539648 heap: 102727680 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1115212 data_alloc: 218103808 data_used: 282624
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 86188032 unmapped: 16539648 heap: 102727680 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 86188032 unmapped: 16539648 heap: 102727680 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 86188032 unmapped: 16539648 heap: 102727680 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 86188032 unmapped: 16539648 heap: 102727680 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 86188032 unmapped: 16539648 heap: 102727680 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1115212 data_alloc: 218103808 data_used: 282624
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fb178000/0x0/0x4ffc00000, data 0x15dd058/0x1693000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 86188032 unmapped: 16539648 heap: 102727680 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 86188032 unmapped: 16539648 heap: 102727680 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 1200.0 total, 600.0 interval#012Cumulative writes: 9273 writes, 35K keys, 9273 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.02 MB/s#012Cumulative WAL: 9273 writes, 2281 syncs, 4.07 writes per sync, written: 0.02 GB, 0.02 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 860 writes, 1592 keys, 860 commit groups, 1.0 writes per commit group, ingest: 0.67 MB, 0.00 MB/s#012Interval WAL: 860 writes, 406 syncs, 2.12 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0#012 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) 
Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1200.0 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x560c99273350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.3e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **#012#012** Compaction Stats [m-0] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) 
Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-0] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1200.0 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x560c99273350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 
collections: 3 last_copies: 8 last_secs: 3.3e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [m-0] **#012#012** Compaction Stats [m-1] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-1] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1200.0 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 
0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 86188032 unmapped: 16539648 heap: 102727680 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 86188032 unmapped: 16539648 heap: 102727680 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 86188032 unmapped: 16539648 heap: 102727680 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1115212 data_alloc: 218103808 data_used: 282624
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 86188032 unmapped: 16539648 heap: 102727680 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fb178000/0x0/0x4ffc00000, data 0x15dd058/0x1693000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fb178000/0x0/0x4ffc00000, data 0x15dd058/0x1693000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 86188032 unmapped: 16539648 heap: 102727680 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 86188032 unmapped: 16539648 heap: 102727680 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 86188032 unmapped: 16539648 heap: 102727680 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 86188032 unmapped: 16539648 heap: 102727680 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1115212 data_alloc: 218103808 data_used: 282624
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 86188032 unmapped: 16539648 heap: 102727680 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fb178000/0x0/0x4ffc00000, data 0x15dd058/0x1693000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fb178000/0x0/0x4ffc00000, data 0x15dd058/0x1693000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 86188032 unmapped: 16539648 heap: 102727680 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 86188032 unmapped: 16539648 heap: 102727680 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 86188032 unmapped: 16539648 heap: 102727680 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 86188032 unmapped: 16539648 heap: 102727680 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1115212 data_alloc: 218103808 data_used: 282624
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fb178000/0x0/0x4ffc00000, data 0x15dd058/0x1693000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 86188032 unmapped: 16539648 heap: 102727680 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 86188032 unmapped: 16539648 heap: 102727680 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 86188032 unmapped: 16539648 heap: 102727680 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 86188032 unmapped: 16539648 heap: 102727680 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 86188032 unmapped: 16539648 heap: 102727680 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1115212 data_alloc: 218103808 data_used: 282624
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fb178000/0x0/0x4ffc00000, data 0x15dd058/0x1693000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 86188032 unmapped: 16539648 heap: 102727680 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 138 heartbeat osd_stat(store_statfs(0x4fb178000/0x0/0x4ffc00000, data 0x15dd058/0x1693000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 86188032 unmapped: 16539648 heap: 102727680 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 138 ms_handle_reset con 0x560c9aa9f000 session 0x560c9d851e00
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 138 ms_handle_reset con 0x560c9aa9f800 session 0x560c9d81d0e0
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 138 ms_handle_reset con 0x560c9cf88000 session 0x560c9d81cd20
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 86188032 unmapped: 16539648 heap: 102727680 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 40.565647125s of 40.628654480s, submitted: 75
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 138 ms_handle_reset con 0x560c9aa9f000 session 0x560c9d81cb40
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 86188032 unmapped: 16539648 heap: 102727680 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 138 ms_handle_reset con 0x560c9aa9f800 session 0x560c9d2912c0
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 138 ms_handle_reset con 0x560c9c8a3000 session 0x560c9b907e00
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 86196224 unmapped: 16531456 heap: 102727680 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1114524 data_alloc: 218103808 data_used: 286720
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 138 handle_osd_map epochs [139,139], i have 138, src has [1,139]
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 86196224 unmapped: 16531456 heap: 102727680 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 139 handle_osd_map epochs [140,140], i have 139, src has [1,140]
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 140 ms_handle_reset con 0x560c9dab7400 session 0x560c9df7c5a0
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 140 ms_handle_reset con 0x560c9daae000 session 0x560c9cb93c20
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 140 ms_handle_reset con 0x560c9aa9f000 session 0x560c9d815a40
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 140 ms_handle_reset con 0x560c9aa9f800 session 0x560c9d5b65a0
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 140 ms_handle_reset con 0x560c9c8a3000 session 0x560c9b9790e0
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 87015424 unmapped: 15712256 heap: 102727680 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 140 heartbeat osd_stat(store_statfs(0x4fa989000/0x0/0x4ffc00000, data 0x1dc82a7/0x1e82000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 87015424 unmapped: 15712256 heap: 102727680 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 140 ms_handle_reset con 0x560c9dab7400 session 0x560c9cecfe00
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 140 heartbeat osd_stat(store_statfs(0x4fa989000/0x0/0x4ffc00000, data 0x1dc82a7/0x1e82000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 87015424 unmapped: 15712256 heap: 102727680 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 140 ms_handle_reset con 0x560c9daad000 session 0x560c9d291c20
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 87015424 unmapped: 15712256 heap: 102727680 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1187648 data_alloc: 218103808 data_used: 286720
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 140 ms_handle_reset con 0x560c9aa9f000 session 0x560c9d20f860
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 140 ms_handle_reset con 0x560c9aa9f800 session 0x560c9b88a1e0
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 87351296 unmapped: 15376384 heap: 102727680 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 140 handle_osd_map epochs [140,141], i have 140, src has [1,141]
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 87449600 unmapped: 15278080 heap: 102727680 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 94830592 unmapped: 7897088 heap: 102727680 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 94830592 unmapped: 7897088 heap: 102727680 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fa961000/0x0/0x4ffc00000, data 0x1dee289/0x1eaa000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 94830592 unmapped: 7897088 heap: 102727680 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1251296 data_alloc: 218103808 data_used: 8495104
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 94830592 unmapped: 7897088 heap: 102727680 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fa961000/0x0/0x4ffc00000, data 0x1dee289/0x1eaa000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fa961000/0x0/0x4ffc00000, data 0x1dee289/0x1eaa000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 94830592 unmapped: 7897088 heap: 102727680 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fa961000/0x0/0x4ffc00000, data 0x1dee289/0x1eaa000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 94830592 unmapped: 7897088 heap: 102727680 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fa961000/0x0/0x4ffc00000, data 0x1dee289/0x1eaa000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 94830592 unmapped: 7897088 heap: 102727680 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 94830592 unmapped: 7897088 heap: 102727680 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1251296 data_alloc: 218103808 data_used: 8495104
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fa961000/0x0/0x4ffc00000, data 0x1dee289/0x1eaa000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 94830592 unmapped: 7897088 heap: 102727680 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 94830592 unmapped: 7897088 heap: 102727680 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 18.162794113s of 18.220115662s, submitted: 57
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #43. Immutable memtables: 0.
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 102268928 unmapped: 1507328 heap: 103776256 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 102711296 unmapped: 1064960 heap: 103776256 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 102711296 unmapped: 1064960 heap: 103776256 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1322664 data_alloc: 218103808 data_used: 9072640
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4f8fe8000/0x0/0x4ffc00000, data 0x25c8289/0x2684000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x458f9c1), peers [1,2] op hist [])
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 102711296 unmapped: 1064960 heap: 103776256 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 102809600 unmapped: 966656 heap: 103776256 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 102809600 unmapped: 966656 heap: 103776256 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 102809600 unmapped: 966656 heap: 103776256 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 102809600 unmapped: 966656 heap: 103776256 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1322664 data_alloc: 218103808 data_used: 9072640
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4f8fe8000/0x0/0x4ffc00000, data 0x25c8289/0x2684000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x458f9c1), peers [1,2] op hist [])
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 102842368 unmapped: 933888 heap: 103776256 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 102842368 unmapped: 933888 heap: 103776256 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 102842368 unmapped: 933888 heap: 103776256 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4f8fe8000/0x0/0x4ffc00000, data 0x25c8289/0x2684000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x458f9c1), peers [1,2] op hist [])
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 102875136 unmapped: 901120 heap: 103776256 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 102875136 unmapped: 901120 heap: 103776256 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1322664 data_alloc: 218103808 data_used: 9072640
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 102875136 unmapped: 901120 heap: 103776256 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 102875136 unmapped: 901120 heap: 103776256 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 102875136 unmapped: 901120 heap: 103776256 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4f8fe8000/0x0/0x4ffc00000, data 0x25c8289/0x2684000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x458f9c1), peers [1,2] op hist [])
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 102875136 unmapped: 901120 heap: 103776256 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 102875136 unmapped: 901120 heap: 103776256 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1322664 data_alloc: 218103808 data_used: 9072640
Oct  9 10:05:06 compute-1 ceph-osd[7514]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 18.730630875s of 18.770584106s, submitted: 77
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 ms_handle_reset con 0x560c9ab0e400 session 0x560c9d815680
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 101392384 unmapped: 2383872 heap: 103776256 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 ms_handle_reset con 0x560c9d58c000 session 0x560c9b9781e0
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 ms_handle_reset con 0x560c9d58d800 session 0x560c9b8872c0
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 ms_handle_reset con 0x560c9d58d800 session 0x560c9cb93a40
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 ms_handle_reset con 0x560c9aa9f000 session 0x560c9df7dc20
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 ms_handle_reset con 0x560c9aa9f800 session 0x560c9aaf50e0
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 ms_handle_reset con 0x560c9ab0e400 session 0x560c9df7da40
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 101343232 unmapped: 13983744 heap: 115326976 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4f8575000/0x0/0x4ffc00000, data 0x303b289/0x30f7000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x458f9c1), peers [1,2] op hist [])
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 101343232 unmapped: 13983744 heap: 115326976 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4f8575000/0x0/0x4ffc00000, data 0x303b289/0x30f7000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x458f9c1), peers [1,2] op hist [])
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4f8575000/0x0/0x4ffc00000, data 0x303b289/0x30f7000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x458f9c1), peers [1,2] op hist [])
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 101343232 unmapped: 13983744 heap: 115326976 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 ms_handle_reset con 0x560c9d58c000 session 0x560c9d20d0e0
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 101343232 unmapped: 13983744 heap: 115326976 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1393552 data_alloc: 218103808 data_used: 9076736
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 101343232 unmapped: 13983744 heap: 115326976 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 ms_handle_reset con 0x560c9d58c000 session 0x560c9a89cb40
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 ms_handle_reset con 0x560c9aa9f000 session 0x560c9dae4f00
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 ms_handle_reset con 0x560c9aa9f800 session 0x560c9cd65a40
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 101515264 unmapped: 13811712 heap: 115326976 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4f8575000/0x0/0x4ffc00000, data 0x303b289/0x30f7000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x458f9c1), peers [1,2] op hist [])
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 101515264 unmapped: 13811712 heap: 115326976 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 105848832 unmapped: 9478144 heap: 115326976 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 110731264 unmapped: 4595712 heap: 115326976 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1464002 data_alloc: 234881024 data_used: 18825216
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4f8550000/0x0/0x4ffc00000, data 0x305f299/0x311c000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x458f9c1), peers [1,2] op hist [])
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 110731264 unmapped: 4595712 heap: 115326976 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 110731264 unmapped: 4595712 heap: 115326976 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 110731264 unmapped: 4595712 heap: 115326976 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.763611794s of 12.791978836s, submitted: 22
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 110821376 unmapped: 4505600 heap: 115326976 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 110821376 unmapped: 4505600 heap: 115326976 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1464474 data_alloc: 234881024 data_used: 18825216
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4f854e000/0x0/0x4ffc00000, data 0x3060299/0x311d000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x458f9c1), peers [1,2] op hist [])
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 110854144 unmapped: 4472832 heap: 115326976 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 110854144 unmapped: 4472832 heap: 115326976 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 110886912 unmapped: 4440064 heap: 115326976 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 116908032 unmapped: 2662400 heap: 119570432 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4f79cb000/0x0/0x4ffc00000, data 0x3be3299/0x3ca0000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x458f9c1), peers [1,2] op hist [])
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 117440512 unmapped: 2129920 heap: 119570432 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1566700 data_alloc: 234881024 data_used: 19333120
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 117440512 unmapped: 2129920 heap: 119570432 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 117440512 unmapped: 2129920 heap: 119570432 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4f79cb000/0x0/0x4ffc00000, data 0x3be3299/0x3ca0000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x458f9c1), peers [1,2] op hist [])
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 117440512 unmapped: 2129920 heap: 119570432 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 117440512 unmapped: 2129920 heap: 119570432 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 117440512 unmapped: 2129920 heap: 119570432 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1566700 data_alloc: 234881024 data_used: 19333120
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 117440512 unmapped: 2129920 heap: 119570432 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.381125450s of 12.448619843s, submitted: 102
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 115392512 unmapped: 5226496 heap: 120619008 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4f75bc000/0x0/0x4ffc00000, data 0x3be3299/0x3ca0000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x499f9c1), peers [1,2] op hist [])
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 115392512 unmapped: 5226496 heap: 120619008 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 ms_handle_reset con 0x560c9ab0e400 session 0x560c9d8150e0
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 ms_handle_reset con 0x560c9d58d800 session 0x560c9d8152c0
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 109666304 unmapped: 10952704 heap: 120619008 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 ms_handle_reset con 0x560c9aa9f000 session 0x560c9cb92000
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 109223936 unmapped: 11395072 heap: 120619008 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1330151 data_alloc: 218103808 data_used: 9076736
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 109223936 unmapped: 11395072 heap: 120619008 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4f8bd7000/0x0/0x4ffc00000, data 0x25c9289/0x2685000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x499f9c1), peers [1,2] op hist [])
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 109223936 unmapped: 11395072 heap: 120619008 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 109223936 unmapped: 11395072 heap: 120619008 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 109223936 unmapped: 11395072 heap: 120619008 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4f8bd7000/0x0/0x4ffc00000, data 0x25c9289/0x2685000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x499f9c1), peers [1,2] op hist [])
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 109223936 unmapped: 11395072 heap: 120619008 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1330151 data_alloc: 218103808 data_used: 9076736
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 109223936 unmapped: 11395072 heap: 120619008 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 109223936 unmapped: 11395072 heap: 120619008 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 ms_handle_reset con 0x560c9c8a3000 session 0x560c9d290780
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 ms_handle_reset con 0x560c9dab7400 session 0x560c9df7cd20
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4f8bd7000/0x0/0x4ffc00000, data 0x25c9289/0x2685000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x499f9c1), peers [1,2] op hist [])
Oct  9 10:05:06 compute-1 ceph-osd[7514]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.680329323s of 11.851483345s, submitted: 377
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 103219200 unmapped: 17399808 heap: 120619008 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 ms_handle_reset con 0x560c9aa9f800 session 0x560c9aaf41e0
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 103219200 unmapped: 17399808 heap: 120619008 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 103219200 unmapped: 17399808 heap: 120619008 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1150904 data_alloc: 218103808 data_used: 290816
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 103219200 unmapped: 17399808 heap: 120619008 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4f9bbf000/0x0/0x4ffc00000, data 0x15e3269/0x169d000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x499f9c1), peers [1,2] op hist [])
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 103219200 unmapped: 17399808 heap: 120619008 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 103219200 unmapped: 17399808 heap: 120619008 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 103219200 unmapped: 17399808 heap: 120619008 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 103219200 unmapped: 17399808 heap: 120619008 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1150904 data_alloc: 218103808 data_used: 290816
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 103219200 unmapped: 17399808 heap: 120619008 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4f9bbf000/0x0/0x4ffc00000, data 0x15e3269/0x169d000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x499f9c1), peers [1,2] op hist [])
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 103219200 unmapped: 17399808 heap: 120619008 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 103219200 unmapped: 17399808 heap: 120619008 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 103219200 unmapped: 17399808 heap: 120619008 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 103219200 unmapped: 17399808 heap: 120619008 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1150904 data_alloc: 218103808 data_used: 290816
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 103219200 unmapped: 17399808 heap: 120619008 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4f9bbf000/0x0/0x4ffc00000, data 0x15e3269/0x169d000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x499f9c1), peers [1,2] op hist [])
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 103219200 unmapped: 17399808 heap: 120619008 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 103219200 unmapped: 17399808 heap: 120619008 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 15.428812027s of 15.439086914s, submitted: 20
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 ms_handle_reset con 0x560c9ab0e400 session 0x560c9d645e00
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 ms_handle_reset con 0x560c9aa9f000 session 0x560c9dae61e0
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 ms_handle_reset con 0x560c9aa9f800 session 0x560c9d738b40
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 ms_handle_reset con 0x560c9c8a3000 session 0x560c9d5dc1e0
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 ms_handle_reset con 0x560c9dab7400 session 0x560c9ab55a40
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 103456768 unmapped: 24510464 heap: 127967232 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4f961e000/0x0/0x4ffc00000, data 0x1b84269/0x1c3e000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x499f9c1), peers [1,2] op hist [])
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 103456768 unmapped: 24510464 heap: 127967232 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1198552 data_alloc: 218103808 data_used: 290816
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 103456768 unmapped: 24510464 heap: 127967232 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 ms_handle_reset con 0x560c9d58c000 session 0x560c9d739a40
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 ms_handle_reset con 0x560c9aa9f000 session 0x560c9d645680
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 ms_handle_reset con 0x560c9aa9f800 session 0x560c9cf7a960
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 ms_handle_reset con 0x560c9c8a3000 session 0x560c9a89dc20
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 102645760 unmapped: 25321472 heap: 127967232 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 102662144 unmapped: 25305088 heap: 127967232 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 104022016 unmapped: 23945216 heap: 127967232 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4f961d000/0x0/0x4ffc00000, data 0x1b84279/0x1c3f000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x499f9c1), peers [1,2] op hist [])
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 104022016 unmapped: 23945216 heap: 127967232 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1240190 data_alloc: 218103808 data_used: 6066176
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 104022016 unmapped: 23945216 heap: 127967232 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 104022016 unmapped: 23945216 heap: 127967232 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 104022016 unmapped: 23945216 heap: 127967232 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 104022016 unmapped: 23945216 heap: 127967232 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4f961d000/0x0/0x4ffc00000, data 0x1b84279/0x1c3f000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x499f9c1), peers [1,2] op hist [])
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 104022016 unmapped: 23945216 heap: 127967232 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1240190 data_alloc: 218103808 data_used: 6066176
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 104022016 unmapped: 23945216 heap: 127967232 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 104030208 unmapped: 23937024 heap: 127967232 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 13.976077080s of 13.989899635s, submitted: 12
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4f961d000/0x0/0x4ffc00000, data 0x1b84279/0x1c3f000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x499f9c1), peers [1,2] op hist [])
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 107970560 unmapped: 19996672 heap: 127967232 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4f8d0a000/0x0/0x4ffc00000, data 0x2497279/0x2552000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x499f9c1), peers [1,2] op hist [])
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 106233856 unmapped: 21733376 heap: 127967232 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4f8cfc000/0x0/0x4ffc00000, data 0x24a5279/0x2560000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x499f9c1), peers [1,2] op hist [])
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 106233856 unmapped: 21733376 heap: 127967232 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1312780 data_alloc: 218103808 data_used: 6541312
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 106299392 unmapped: 21667840 heap: 127967232 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4f8cfc000/0x0/0x4ffc00000, data 0x24a5279/0x2560000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x499f9c1), peers [1,2] op hist [])
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 106299392 unmapped: 21667840 heap: 127967232 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 106299392 unmapped: 21667840 heap: 127967232 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4f8cfc000/0x0/0x4ffc00000, data 0x24a5279/0x2560000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x499f9c1), peers [1,2] op hist [])
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 106299392 unmapped: 21667840 heap: 127967232 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4f8cfc000/0x0/0x4ffc00000, data 0x24a5279/0x2560000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x499f9c1), peers [1,2] op hist [])
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 106299392 unmapped: 21667840 heap: 127967232 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1312780 data_alloc: 218103808 data_used: 6541312
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 106299392 unmapped: 21667840 heap: 127967232 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 106299392 unmapped: 21667840 heap: 127967232 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4f8cfc000/0x0/0x4ffc00000, data 0x24a5279/0x2560000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x499f9c1), peers [1,2] op hist [])
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 106299392 unmapped: 21667840 heap: 127967232 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 106299392 unmapped: 21667840 heap: 127967232 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 106299392 unmapped: 21667840 heap: 127967232 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1312780 data_alloc: 218103808 data_used: 6541312
Oct  9 10:05:06 compute-1 ceph-mon[9795]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr dump"} v 0)
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4f8cfc000/0x0/0x4ffc00000, data 0x24a5279/0x2560000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x499f9c1), peers [1,2] op hist [])
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 106299392 unmapped: 21667840 heap: 127967232 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-mon[9795]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3781638129' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 106299392 unmapped: 21667840 heap: 127967232 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 106299392 unmapped: 21667840 heap: 127967232 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 106299392 unmapped: 21667840 heap: 127967232 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4f8cfc000/0x0/0x4ffc00000, data 0x24a5279/0x2560000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x499f9c1), peers [1,2] op hist [])
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 106299392 unmapped: 21667840 heap: 127967232 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1312932 data_alloc: 218103808 data_used: 6545408
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 106299392 unmapped: 21667840 heap: 127967232 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 106299392 unmapped: 21667840 heap: 127967232 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4f8cfc000/0x0/0x4ffc00000, data 0x24a5279/0x2560000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x499f9c1), peers [1,2] op hist [])
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 106299392 unmapped: 21667840 heap: 127967232 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 ms_handle_reset con 0x560c9d58c000 session 0x560c9b888d20
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 ms_handle_reset con 0x560c9dab7400 session 0x560c9da645a0
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 106299392 unmapped: 21667840 heap: 127967232 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 21.863149643s of 21.914012909s, submitted: 80
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 ms_handle_reset con 0x560c9aa9f000 session 0x560c9dae5c20
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 102621184 unmapped: 25346048 heap: 127967232 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1160649 data_alloc: 218103808 data_used: 290816
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4f9bbf000/0x0/0x4ffc00000, data 0x15e3269/0x169d000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x499f9c1), peers [1,2] op hist [])
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 102621184 unmapped: 25346048 heap: 127967232 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 102621184 unmapped: 25346048 heap: 127967232 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4f9bbf000/0x0/0x4ffc00000, data 0x15e3269/0x169d000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x499f9c1), peers [1,2] op hist [])
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 102621184 unmapped: 25346048 heap: 127967232 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 102621184 unmapped: 25346048 heap: 127967232 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4f9bbf000/0x0/0x4ffc00000, data 0x15e3269/0x169d000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x499f9c1), peers [1,2] op hist [])
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 102621184 unmapped: 25346048 heap: 127967232 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1160649 data_alloc: 218103808 data_used: 290816
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4f9bbf000/0x0/0x4ffc00000, data 0x15e3269/0x169d000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x499f9c1), peers [1,2] op hist [])
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 102621184 unmapped: 25346048 heap: 127967232 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 102621184 unmapped: 25346048 heap: 127967232 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4f9bbf000/0x0/0x4ffc00000, data 0x15e3269/0x169d000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x499f9c1), peers [1,2] op hist [])
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 102621184 unmapped: 25346048 heap: 127967232 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 102621184 unmapped: 25346048 heap: 127967232 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 102621184 unmapped: 25346048 heap: 127967232 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1160649 data_alloc: 218103808 data_used: 290816
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4f9bbf000/0x0/0x4ffc00000, data 0x15e3269/0x169d000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x499f9c1), peers [1,2] op hist [])
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 102621184 unmapped: 25346048 heap: 127967232 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 102621184 unmapped: 25346048 heap: 127967232 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 102621184 unmapped: 25346048 heap: 127967232 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 102621184 unmapped: 25346048 heap: 127967232 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4f9bbf000/0x0/0x4ffc00000, data 0x15e3269/0x169d000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x499f9c1), peers [1,2] op hist [])
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 102621184 unmapped: 25346048 heap: 127967232 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1160649 data_alloc: 218103808 data_used: 290816
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 102621184 unmapped: 25346048 heap: 127967232 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 102621184 unmapped: 25346048 heap: 127967232 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 102621184 unmapped: 25346048 heap: 127967232 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 102621184 unmapped: 25346048 heap: 127967232 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 20.242803574s of 20.254222870s, submitted: 18
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 ms_handle_reset con 0x560c9aa9f800 session 0x560c9d814960
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 ms_handle_reset con 0x560c9c8a3000 session 0x560c9b88be00
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 ms_handle_reset con 0x560c9d58c000 session 0x560c9db541e0
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 ms_handle_reset con 0x560c9d58d800 session 0x560c9d209680
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 ms_handle_reset con 0x560c9aa9f000 session 0x560c9df7c780
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4f9bbf000/0x0/0x4ffc00000, data 0x15e3269/0x169d000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x499f9c1), peers [1,2] op hist [])
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 102760448 unmapped: 28360704 heap: 131121152 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1213863 data_alloc: 218103808 data_used: 290816
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 102760448 unmapped: 28360704 heap: 131121152 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4f94a5000/0x0/0x4ffc00000, data 0x1cfd269/0x1db7000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x499f9c1), peers [1,2] op hist [])
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 102760448 unmapped: 28360704 heap: 131121152 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 102760448 unmapped: 28360704 heap: 131121152 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 102760448 unmapped: 28360704 heap: 131121152 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 102760448 unmapped: 28360704 heap: 131121152 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1213863 data_alloc: 218103808 data_used: 290816
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4f94a5000/0x0/0x4ffc00000, data 0x1cfd269/0x1db7000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x499f9c1), peers [1,2] op hist [])
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 102760448 unmapped: 28360704 heap: 131121152 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4f94a5000/0x0/0x4ffc00000, data 0x1cfd269/0x1db7000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x499f9c1), peers [1,2] op hist [])
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 102760448 unmapped: 28360704 heap: 131121152 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 102760448 unmapped: 28360704 heap: 131121152 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4f94a5000/0x0/0x4ffc00000, data 0x1cfd269/0x1db7000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x499f9c1), peers [1,2] op hist [])
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 102760448 unmapped: 28360704 heap: 131121152 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.318160057s of 10.331529617s, submitted: 11
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 ms_handle_reset con 0x560c9aa9f800 session 0x560c9cecfc20
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4f94a5000/0x0/0x4ffc00000, data 0x1cfd269/0x1db7000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x499f9c1), peers [1,2] op hist [])
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 103071744 unmapped: 28049408 heap: 131121152 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1217388 data_alloc: 218103808 data_used: 290816
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 103317504 unmapped: 27803648 heap: 131121152 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 105193472 unmapped: 25927680 heap: 131121152 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 105193472 unmapped: 25927680 heap: 131121152 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 105193472 unmapped: 25927680 heap: 131121152 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 105201664 unmapped: 25919488 heap: 131121152 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1260232 data_alloc: 218103808 data_used: 6615040
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4f9481000/0x0/0x4ffc00000, data 0x1d21269/0x1ddb000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x499f9c1), peers [1,2] op hist [])
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 105201664 unmapped: 25919488 heap: 131121152 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 105201664 unmapped: 25919488 heap: 131121152 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 105201664 unmapped: 25919488 heap: 131121152 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 105201664 unmapped: 25919488 heap: 131121152 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4f9481000/0x0/0x4ffc00000, data 0x1d21269/0x1ddb000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x499f9c1), peers [1,2] op hist [])
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 105201664 unmapped: 25919488 heap: 131121152 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1260232 data_alloc: 218103808 data_used: 6615040
Oct  9 10:05:06 compute-1 ceph-osd[7514]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.570923805s of 10.576161385s, submitted: 7
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 108593152 unmapped: 22528000 heap: 131121152 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 108470272 unmapped: 22650880 heap: 131121152 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 108470272 unmapped: 22650880 heap: 131121152 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4f8d40000/0x0/0x4ffc00000, data 0x2448269/0x2502000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x499f9c1), peers [1,2] op hist [])
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 108470272 unmapped: 22650880 heap: 131121152 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 108478464 unmapped: 22642688 heap: 131121152 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1330668 data_alloc: 218103808 data_used: 7471104
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 108478464 unmapped: 22642688 heap: 131121152 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 108478464 unmapped: 22642688 heap: 131121152 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 108478464 unmapped: 22642688 heap: 131121152 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4f8d40000/0x0/0x4ffc00000, data 0x2448269/0x2502000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x499f9c1), peers [1,2] op hist [])
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 108486656 unmapped: 22634496 heap: 131121152 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 108486656 unmapped: 22634496 heap: 131121152 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1330668 data_alloc: 218103808 data_used: 7471104
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 108486656 unmapped: 22634496 heap: 131121152 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 108494848 unmapped: 22626304 heap: 131121152 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4f8d40000/0x0/0x4ffc00000, data 0x2448269/0x2502000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x499f9c1), peers [1,2] op hist [])
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 108494848 unmapped: 22626304 heap: 131121152 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4f8d40000/0x0/0x4ffc00000, data 0x2448269/0x2502000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x499f9c1), peers [1,2] op hist [])
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 108494848 unmapped: 22626304 heap: 131121152 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 108494848 unmapped: 22626304 heap: 131121152 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1330668 data_alloc: 218103808 data_used: 7471104
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 ms_handle_reset con 0x560c9d58c800 session 0x560c9d1d05a0
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 ms_handle_reset con 0x560c9aa9e400 session 0x560c9d8503c0
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 ms_handle_reset con 0x560c9e4ee000 session 0x560c9b474000
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 108503040 unmapped: 22618112 heap: 131121152 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 ms_handle_reset con 0x560c9aa9e400 session 0x560c9d20e1e0
Oct  9 10:05:06 compute-1 ceph-osd[7514]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 15.934541702s of 15.981528282s, submitted: 74
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 ms_handle_reset con 0x560c9aa9f000 session 0x560c9d81cd20
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 ms_handle_reset con 0x560c9aa9f800 session 0x560c9dac5e00
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 ms_handle_reset con 0x560c9d58c800 session 0x560c9d2092c0
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 ms_handle_reset con 0x560c9e4ee400 session 0x560c9d5dc780
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 ms_handle_reset con 0x560c9e4ee400 session 0x560c9cd65a40
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 109633536 unmapped: 25165824 heap: 134799360 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4f85eb000/0x0/0x4ffc00000, data 0x2bb6279/0x2c71000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x499f9c1), peers [1,2] op hist [])
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 109633536 unmapped: 25165824 heap: 134799360 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 109641728 unmapped: 25157632 heap: 134799360 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 109641728 unmapped: 25157632 heap: 134799360 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1385271 data_alloc: 218103808 data_used: 7471104
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 109641728 unmapped: 25157632 heap: 134799360 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 ms_handle_reset con 0x560c9aa9e400 session 0x560c9b978780
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4f85eb000/0x0/0x4ffc00000, data 0x2bb6279/0x2c71000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x499f9c1), peers [1,2] op hist [])
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 ms_handle_reset con 0x560c9aa9f000 session 0x560c9d1d0780
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 109641728 unmapped: 25157632 heap: 134799360 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 109641728 unmapped: 25157632 heap: 134799360 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 ms_handle_reset con 0x560c9aa9f800 session 0x560c9d738f00
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 ms_handle_reset con 0x560c9d58c800 session 0x560c9d644960
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 109625344 unmapped: 25174016 heap: 134799360 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 109469696 unmapped: 25329664 heap: 134799360 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1386529 data_alloc: 218103808 data_used: 7475200
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 113672192 unmapped: 21127168 heap: 134799360 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4f85ea000/0x0/0x4ffc00000, data 0x2bb6289/0x2c72000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x499f9c1), peers [1,2] op hist [])
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 113704960 unmapped: 21094400 heap: 134799360 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 113704960 unmapped: 21094400 heap: 134799360 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 113704960 unmapped: 21094400 heap: 134799360 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 113704960 unmapped: 21094400 heap: 134799360 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1431673 data_alloc: 234881024 data_used: 14135296
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 113704960 unmapped: 21094400 heap: 134799360 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4f85ea000/0x0/0x4ffc00000, data 0x2bb6289/0x2c72000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x499f9c1), peers [1,2] op hist [])
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 113737728 unmapped: 21061632 heap: 134799360 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 113737728 unmapped: 21061632 heap: 134799360 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 113737728 unmapped: 21061632 heap: 134799360 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 18.221870422s of 18.247957230s, submitted: 23
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 115064832 unmapped: 19734528 heap: 134799360 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1462623 data_alloc: 234881024 data_used: 14249984
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 115187712 unmapped: 19611648 heap: 134799360 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4f81e5000/0x0/0x4ffc00000, data 0x2fba289/0x3076000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x499f9c1), peers [1,2] op hist [])
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 115187712 unmapped: 19611648 heap: 134799360 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 115187712 unmapped: 19611648 heap: 134799360 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4f81e5000/0x0/0x4ffc00000, data 0x2fba289/0x3076000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x499f9c1), peers [1,2] op hist [])
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4f81e5000/0x0/0x4ffc00000, data 0x2fba289/0x3076000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x499f9c1), peers [1,2] op hist [])
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 115220480 unmapped: 19578880 heap: 134799360 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4f81e5000/0x0/0x4ffc00000, data 0x2fba289/0x3076000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x499f9c1), peers [1,2] op hist [])
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 115253248 unmapped: 19546112 heap: 134799360 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1466451 data_alloc: 234881024 data_used: 14245888
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 115253248 unmapped: 19546112 heap: 134799360 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 115253248 unmapped: 19546112 heap: 134799360 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 ms_handle_reset con 0x560c9d58c800 session 0x560c9d2083c0
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 ms_handle_reset con 0x560c9aa9e400 session 0x560c9da65a40
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 ms_handle_reset con 0x560c9aa9f000 session 0x560c9d81d0e0
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 112599040 unmapped: 22200320 heap: 134799360 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 112599040 unmapped: 22200320 heap: 134799360 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 112599040 unmapped: 22200320 heap: 134799360 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1335504 data_alloc: 218103808 data_used: 7471104
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4f8d5a000/0x0/0x4ffc00000, data 0x2448269/0x2502000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x499f9c1), peers [1,2] op hist [])
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 ms_handle_reset con 0x560c9c8a3000 session 0x560c9dae50e0
Oct  9 10:05:06 compute-1 ceph-osd[7514]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.919174194s of 10.969374657s, submitted: 59
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 ms_handle_reset con 0x560c9d58c000 session 0x560c9dac54a0
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 ms_handle_reset con 0x560c9aa9e400 session 0x560c9d814000
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 107634688 unmapped: 27164672 heap: 134799360 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 107634688 unmapped: 27164672 heap: 134799360 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 107634688 unmapped: 27164672 heap: 134799360 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 107634688 unmapped: 27164672 heap: 134799360 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4f9bbf000/0x0/0x4ffc00000, data 0x15e3269/0x169d000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x499f9c1), peers [1,2] op hist [])
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 107634688 unmapped: 27164672 heap: 134799360 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1180974 data_alloc: 218103808 data_used: 290816
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 107634688 unmapped: 27164672 heap: 134799360 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 107634688 unmapped: 27164672 heap: 134799360 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 107634688 unmapped: 27164672 heap: 134799360 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4f9bbf000/0x0/0x4ffc00000, data 0x15e3269/0x169d000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x499f9c1), peers [1,2] op hist [])
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 107634688 unmapped: 27164672 heap: 134799360 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 107634688 unmapped: 27164672 heap: 134799360 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1180974 data_alloc: 218103808 data_used: 290816
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 107634688 unmapped: 27164672 heap: 134799360 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4f9bbf000/0x0/0x4ffc00000, data 0x15e3269/0x169d000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x499f9c1), peers [1,2] op hist [])
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 107634688 unmapped: 27164672 heap: 134799360 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4f9bbf000/0x0/0x4ffc00000, data 0x15e3269/0x169d000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x499f9c1), peers [1,2] op hist [])
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 107634688 unmapped: 27164672 heap: 134799360 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 107634688 unmapped: 27164672 heap: 134799360 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 107634688 unmapped: 27164672 heap: 134799360 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1180974 data_alloc: 218103808 data_used: 290816
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 107634688 unmapped: 27164672 heap: 134799360 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 107634688 unmapped: 27164672 heap: 134799360 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 107634688 unmapped: 27164672 heap: 134799360 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 18.410537720s of 18.433294296s, submitted: 33
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 ms_handle_reset con 0x560c9aa9f000 session 0x560c9da652c0
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4f9bbf000/0x0/0x4ffc00000, data 0x15e3269/0x169d000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x499f9c1), peers [1,2] op hist [1])
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 ms_handle_reset con 0x560c9c8a3000 session 0x560c9d20c3c0
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 107642880 unmapped: 27156480 heap: 134799360 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 ms_handle_reset con 0x560c9d58c800 session 0x560c9da650e0
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 ms_handle_reset con 0x560c9aa9f800 session 0x560c9db17a40
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 ms_handle_reset con 0x560c9aa9e400 session 0x560c9d81c780
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 ms_handle_reset con 0x560c9aa9f000 session 0x560c9d5dd2c0
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 ms_handle_reset con 0x560c9c8a3000 session 0x560c9db57e00
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 107274240 unmapped: 35405824 heap: 142680064 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1267502 data_alloc: 218103808 data_used: 290816
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 107274240 unmapped: 35405824 heap: 142680064 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 107274240 unmapped: 35405824 heap: 142680064 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 107274240 unmapped: 35405824 heap: 142680064 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 ms_handle_reset con 0x560c9d58c800 session 0x560c9d7383c0
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 107274240 unmapped: 35405824 heap: 142680064 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4f8fc3000/0x0/0x4ffc00000, data 0x21df269/0x2299000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x499f9c1), peers [1,2] op hist [])
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 107274240 unmapped: 35405824 heap: 142680064 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1267502 data_alloc: 218103808 data_used: 290816
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 ms_handle_reset con 0x560c9e4ee400 session 0x560c9d81c3c0
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 ms_handle_reset con 0x560c9aa9e400 session 0x560c9da65860
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 ms_handle_reset con 0x560c9aa9f000 session 0x560c9d2dc960
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 107290624 unmapped: 35389440 heap: 142680064 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 107290624 unmapped: 35389440 heap: 142680064 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4f8fc2000/0x0/0x4ffc00000, data 0x21df279/0x229a000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x499f9c1), peers [1,2] op hist [])
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 111837184 unmapped: 30842880 heap: 142680064 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4f8fc2000/0x0/0x4ffc00000, data 0x21df279/0x229a000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x499f9c1), peers [1,2] op hist [])
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 111837184 unmapped: 30842880 heap: 142680064 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 111837184 unmapped: 30842880 heap: 142680064 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1345620 data_alloc: 234881024 data_used: 11042816
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4f8fc2000/0x0/0x4ffc00000, data 0x21df279/0x229a000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x499f9c1), peers [1,2] op hist [])
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 111837184 unmapped: 30842880 heap: 142680064 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 111845376 unmapped: 30834688 heap: 142680064 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4f8fc2000/0x0/0x4ffc00000, data 0x21df279/0x229a000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x499f9c1), peers [1,2] op hist [])
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 111845376 unmapped: 30834688 heap: 142680064 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4f8fc2000/0x0/0x4ffc00000, data 0x21df279/0x229a000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x499f9c1), peers [1,2] op hist [])
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 111845376 unmapped: 30834688 heap: 142680064 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 111845376 unmapped: 30834688 heap: 142680064 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1345620 data_alloc: 234881024 data_used: 11042816
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 111853568 unmapped: 30826496 heap: 142680064 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 18.394321442s of 18.419612885s, submitted: 19
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 111419392 unmapped: 31260672 heap: 142680064 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 115654656 unmapped: 27025408 heap: 142680064 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 115654656 unmapped: 27025408 heap: 142680064 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4f878f000/0x0/0x4ffc00000, data 0x2a09279/0x2ac4000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x499f9c1), peers [1,2] op hist [])
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 115654656 unmapped: 27025408 heap: 142680064 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1416066 data_alloc: 234881024 data_used: 11665408
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 115752960 unmapped: 26927104 heap: 142680064 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 115752960 unmapped: 26927104 heap: 142680064 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 115752960 unmapped: 26927104 heap: 142680064 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 115761152 unmapped: 26918912 heap: 142680064 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4f878f000/0x0/0x4ffc00000, data 0x2a09279/0x2ac4000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x499f9c1), peers [1,2] op hist [])
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 115761152 unmapped: 26918912 heap: 142680064 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1416082 data_alloc: 234881024 data_used: 11665408
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 115761152 unmapped: 26918912 heap: 142680064 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 115761152 unmapped: 26918912 heap: 142680064 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4f878f000/0x0/0x4ffc00000, data 0x2a09279/0x2ac4000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x499f9c1), peers [1,2] op hist [])
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 115761152 unmapped: 26918912 heap: 142680064 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 115761152 unmapped: 26918912 heap: 142680064 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 115761152 unmapped: 26918912 heap: 142680064 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1416082 data_alloc: 234881024 data_used: 11665408
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4f878f000/0x0/0x4ffc00000, data 0x2a09279/0x2ac4000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x499f9c1), peers [1,2] op hist [])
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4f878f000/0x0/0x4ffc00000, data 0x2a09279/0x2ac4000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x499f9c1), peers [1,2] op hist [])
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 115761152 unmapped: 26918912 heap: 142680064 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 115769344 unmapped: 26910720 heap: 142680064 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 115769344 unmapped: 26910720 heap: 142680064 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 115769344 unmapped: 26910720 heap: 142680064 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4f878f000/0x0/0x4ffc00000, data 0x2a09279/0x2ac4000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x499f9c1), peers [1,2] op hist [])
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 115777536 unmapped: 26902528 heap: 142680064 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1416082 data_alloc: 234881024 data_used: 11665408
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 115777536 unmapped: 26902528 heap: 142680064 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 115777536 unmapped: 26902528 heap: 142680064 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 ms_handle_reset con 0x560c9e4ee800 session 0x560c9db57c20
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 ms_handle_reset con 0x560c9e4eec00 session 0x560c9cd65a40
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 ms_handle_reset con 0x560c9e4ef000 session 0x560c9d5dc780
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 ms_handle_reset con 0x560c9e4ef000 session 0x560c9d209680
Oct  9 10:05:06 compute-1 ceph-osd[7514]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 20.403728485s of 20.452980042s, submitted: 60
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 ms_handle_reset con 0x560c9aa9e400 session 0x560c9d2092c0
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 ms_handle_reset con 0x560c9e4ee800 session 0x560c9dac4d20
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 ms_handle_reset con 0x560c9e4eec00 session 0x560c9da64d20
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 ms_handle_reset con 0x560c9e4ef400 session 0x560c9a89d680
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 ms_handle_reset con 0x560c9aa9e400 session 0x560c9db563c0
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 114589696 unmapped: 28090368 heap: 142680064 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 114589696 unmapped: 28090368 heap: 142680064 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4f84d7000/0x0/0x4ffc00000, data 0x2cc9289/0x2d85000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x499f9c1), peers [1,2] op hist [])
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 114589696 unmapped: 28090368 heap: 142680064 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1448246 data_alloc: 234881024 data_used: 11665408
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 114589696 unmapped: 28090368 heap: 142680064 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 ms_handle_reset con 0x560c9e4ee800 session 0x560c9a89de00
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 ms_handle_reset con 0x560c9e4eec00 session 0x560c9d81d4a0
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 114589696 unmapped: 28090368 heap: 142680064 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 ms_handle_reset con 0x560c9e4ef000 session 0x560c9db572c0
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 ms_handle_reset con 0x560c9e4ef800 session 0x560c9dac52c0
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 114900992 unmapped: 27779072 heap: 142680064 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 116588544 unmapped: 26091520 heap: 142680064 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 116637696 unmapped: 26042368 heap: 142680064 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1471645 data_alloc: 234881024 data_used: 14168064
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4f84b1000/0x0/0x4ffc00000, data 0x2ced2bc/0x2dab000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x499f9c1), peers [1,2] op hist [])
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 116637696 unmapped: 26042368 heap: 142680064 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 116637696 unmapped: 26042368 heap: 142680064 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4f84b1000/0x0/0x4ffc00000, data 0x2ced2bc/0x2dab000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x499f9c1), peers [1,2] op hist [])
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 116637696 unmapped: 26042368 heap: 142680064 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 116637696 unmapped: 26042368 heap: 142680064 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 116645888 unmapped: 26034176 heap: 142680064 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1471645 data_alloc: 234881024 data_used: 14168064
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 116645888 unmapped: 26034176 heap: 142680064 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 116645888 unmapped: 26034176 heap: 142680064 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4f84b1000/0x0/0x4ffc00000, data 0x2ced2bc/0x2dab000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x499f9c1), peers [1,2] op hist [])
Oct  9 10:05:06 compute-1 ceph-osd[7514]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 15.588661194s of 15.604912758s, submitted: 23
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 119062528 unmapped: 23617536 heap: 142680064 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 120627200 unmapped: 22052864 heap: 142680064 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 119169024 unmapped: 23511040 heap: 142680064 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1557595 data_alloc: 234881024 data_used: 15020032
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 119169024 unmapped: 23511040 heap: 142680064 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 119169024 unmapped: 23511040 heap: 142680064 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 119169024 unmapped: 23511040 heap: 142680064 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4f79f9000/0x0/0x4ffc00000, data 0x37a52bc/0x3863000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x499f9c1), peers [1,2] op hist [])
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 119177216 unmapped: 23502848 heap: 142680064 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4f79f9000/0x0/0x4ffc00000, data 0x37a52bc/0x3863000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x499f9c1), peers [1,2] op hist [])
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 119463936 unmapped: 23216128 heap: 142680064 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1558307 data_alloc: 234881024 data_used: 15020032
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 119463936 unmapped: 23216128 heap: 142680064 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 119463936 unmapped: 23216128 heap: 142680064 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 119463936 unmapped: 23216128 heap: 142680064 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4f79d8000/0x0/0x4ffc00000, data 0x37c62bc/0x3884000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x499f9c1), peers [1,2] op hist [])
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 119472128 unmapped: 23207936 heap: 142680064 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 119472128 unmapped: 23207936 heap: 142680064 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1558307 data_alloc: 234881024 data_used: 15020032
Oct  9 10:05:06 compute-1 ceph-osd[7514]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.588848114s of 12.664453506s, submitted: 125
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 119513088 unmapped: 23166976 heap: 142680064 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 119513088 unmapped: 23166976 heap: 142680064 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 119513088 unmapped: 23166976 heap: 142680064 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4f79cd000/0x0/0x4ffc00000, data 0x37d12bc/0x388f000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x499f9c1), peers [1,2] op hist [])
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 119513088 unmapped: 23166976 heap: 142680064 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 119513088 unmapped: 23166976 heap: 142680064 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1558307 data_alloc: 234881024 data_used: 15020032
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 119521280 unmapped: 23158784 heap: 142680064 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 119521280 unmapped: 23158784 heap: 142680064 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 119521280 unmapped: 23158784 heap: 142680064 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 119521280 unmapped: 23158784 heap: 142680064 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4f79ca000/0x0/0x4ffc00000, data 0x37d42bc/0x3892000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x499f9c1), peers [1,2] op hist [])
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 119521280 unmapped: 23158784 heap: 142680064 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1558227 data_alloc: 234881024 data_used: 15020032
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 119521280 unmapped: 23158784 heap: 142680064 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4f79ca000/0x0/0x4ffc00000, data 0x37d42bc/0x3892000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x499f9c1), peers [1,2] op hist [])
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 119529472 unmapped: 23150592 heap: 142680064 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4f79ca000/0x0/0x4ffc00000, data 0x37d42bc/0x3892000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x499f9c1), peers [1,2] op hist [])
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 119529472 unmapped: 23150592 heap: 142680064 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 119529472 unmapped: 23150592 heap: 142680064 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4f79ca000/0x0/0x4ffc00000, data 0x37d42bc/0x3892000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x499f9c1), peers [1,2] op hist [])
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 119529472 unmapped: 23150592 heap: 142680064 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1558227 data_alloc: 234881024 data_used: 15020032
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 119529472 unmapped: 23150592 heap: 142680064 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 119529472 unmapped: 23150592 heap: 142680064 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4f79ca000/0x0/0x4ffc00000, data 0x37d42bc/0x3892000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x499f9c1), peers [1,2] op hist [])
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 119537664 unmapped: 23142400 heap: 142680064 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4f79ca000/0x0/0x4ffc00000, data 0x37d42bc/0x3892000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x499f9c1), peers [1,2] op hist [])
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 119537664 unmapped: 23142400 heap: 142680064 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 19.197729111s of 19.202938080s, submitted: 5
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 119554048 unmapped: 23126016 heap: 142680064 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1558731 data_alloc: 234881024 data_used: 15020032
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 119562240 unmapped: 23117824 heap: 142680064 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 119562240 unmapped: 23117824 heap: 142680064 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 119562240 unmapped: 23117824 heap: 142680064 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4f79ca000/0x0/0x4ffc00000, data 0x37d42bc/0x3892000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x499f9c1), peers [1,2] op hist [])
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 119570432 unmapped: 23109632 heap: 142680064 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 119570432 unmapped: 23109632 heap: 142680064 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1558731 data_alloc: 234881024 data_used: 15020032
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 119570432 unmapped: 23109632 heap: 142680064 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4f79ca000/0x0/0x4ffc00000, data 0x37d42bc/0x3892000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x499f9c1), peers [1,2] op hist [])
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 119570432 unmapped: 23109632 heap: 142680064 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 119570432 unmapped: 23109632 heap: 142680064 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 119570432 unmapped: 23109632 heap: 142680064 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4f79ca000/0x0/0x4ffc00000, data 0x37d42bc/0x3892000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x499f9c1), peers [1,2] op hist [])
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 119578624 unmapped: 23101440 heap: 142680064 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1558731 data_alloc: 234881024 data_used: 15020032
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4f79ca000/0x0/0x4ffc00000, data 0x37d42bc/0x3892000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x499f9c1), peers [1,2] op hist [])
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 119578624 unmapped: 23101440 heap: 142680064 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 119578624 unmapped: 23101440 heap: 142680064 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 119578624 unmapped: 23101440 heap: 142680064 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4f79ca000/0x0/0x4ffc00000, data 0x37d42bc/0x3892000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x499f9c1), peers [1,2] op hist [])
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 119578624 unmapped: 23101440 heap: 142680064 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 119578624 unmapped: 23101440 heap: 142680064 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1558731 data_alloc: 234881024 data_used: 15020032
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 119578624 unmapped: 23101440 heap: 142680064 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 ms_handle_reset con 0x560c9e4ef800 session 0x560c9db554a0
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 ms_handle_reset con 0x560c9aa9e400 session 0x560c9cf7b4a0
Oct  9 10:05:06 compute-1 ceph-osd[7514]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 16.945161819s of 16.949874878s, submitted: 5
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 ms_handle_reset con 0x560c9e4ee800 session 0x560c9d5dc1e0
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 116686848 unmapped: 25993216 heap: 142680064 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4f8797000/0x0/0x4ffc00000, data 0x2a09279/0x2ac4000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x499f9c1), peers [1,2] op hist [])
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 116686848 unmapped: 25993216 heap: 142680064 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 116686848 unmapped: 25993216 heap: 142680064 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 116686848 unmapped: 25993216 heap: 142680064 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1422534 data_alloc: 234881024 data_used: 11665408
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 116686848 unmapped: 25993216 heap: 142680064 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4f8797000/0x0/0x4ffc00000, data 0x2a09279/0x2ac4000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x499f9c1), peers [1,2] op hist [])
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 ms_handle_reset con 0x560c9c8a3000 session 0x560c9aaf41e0
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 ms_handle_reset con 0x560c9d58c800 session 0x560c9d208b40
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 116686848 unmapped: 25993216 heap: 142680064 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 ms_handle_reset con 0x560c9aa9e400 session 0x560c9d644960
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 109436928 unmapped: 33243136 heap: 142680064 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 109436928 unmapped: 33243136 heap: 142680064 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 109436928 unmapped: 33243136 heap: 142680064 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1202591 data_alloc: 218103808 data_used: 290816
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 109436928 unmapped: 33243136 heap: 142680064 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 109436928 unmapped: 33243136 heap: 142680064 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4f9bbf000/0x0/0x4ffc00000, data 0x15e3269/0x169d000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x499f9c1), peers [1,2] op hist [])
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4f9bbf000/0x0/0x4ffc00000, data 0x15e3269/0x169d000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x499f9c1), peers [1,2] op hist [])
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 109436928 unmapped: 33243136 heap: 142680064 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 109436928 unmapped: 33243136 heap: 142680064 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 109436928 unmapped: 33243136 heap: 142680064 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1202591 data_alloc: 218103808 data_used: 290816
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 109436928 unmapped: 33243136 heap: 142680064 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 109436928 unmapped: 33243136 heap: 142680064 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 109436928 unmapped: 33243136 heap: 142680064 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4f9bbf000/0x0/0x4ffc00000, data 0x15e3269/0x169d000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x499f9c1), peers [1,2] op hist [])
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 109436928 unmapped: 33243136 heap: 142680064 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 109436928 unmapped: 33243136 heap: 142680064 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1202591 data_alloc: 218103808 data_used: 290816
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 109436928 unmapped: 33243136 heap: 142680064 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 109436928 unmapped: 33243136 heap: 142680064 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 109436928 unmapped: 33243136 heap: 142680064 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 22.212566376s of 22.240785599s, submitted: 46
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 ms_handle_reset con 0x560c9c8a3000 session 0x560c9db56960
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 ms_handle_reset con 0x560c9d58c800 session 0x560c9d645e00
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 ms_handle_reset con 0x560c9e4ee800 session 0x560c9d20c3c0
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 ms_handle_reset con 0x560c9e4ef800 session 0x560c9d644d20
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 111345664 unmapped: 31334400 heap: 142680064 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 ms_handle_reset con 0x560c9aa9e400 session 0x560c9db17c20
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4f9239000/0x0/0x4ffc00000, data 0x1f69269/0x2023000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x499f9c1), peers [1,2] op hist [])
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 111345664 unmapped: 31334400 heap: 142680064 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1279011 data_alloc: 218103808 data_used: 290816
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 ms_handle_reset con 0x560c9c8a3000 session 0x560c9cf7a3c0
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 111345664 unmapped: 31334400 heap: 142680064 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 ms_handle_reset con 0x560c9d58c800 session 0x560c9d5dc960
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 ms_handle_reset con 0x560c9e4ee800 session 0x560c9dac4000
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 111345664 unmapped: 31334400 heap: 142680064 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 ms_handle_reset con 0x560c9e4ef800 session 0x560c9dac5a40
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 111345664 unmapped: 31334400 heap: 142680064 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 111329280 unmapped: 31350784 heap: 142680064 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 113451008 unmapped: 29229056 heap: 142680064 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1337825 data_alloc: 218103808 data_used: 8757248
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4f9238000/0x0/0x4ffc00000, data 0x1f69279/0x2024000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x499f9c1), peers [1,2] op hist [])
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 113541120 unmapped: 29138944 heap: 142680064 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 113541120 unmapped: 29138944 heap: 142680064 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 ms_handle_reset con 0x560c9aa9e400 session 0x560c9cf7af00
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 ms_handle_reset con 0x560c9c8a3000 session 0x560c9b8892c0
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 ms_handle_reset con 0x560c9d58c800 session 0x560c9d20d860
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 108642304 unmapped: 34037760 heap: 142680064 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 108642304 unmapped: 34037760 heap: 142680064 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 108642304 unmapped: 34037760 heap: 142680064 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1210448 data_alloc: 218103808 data_used: 290816
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4f9bbf000/0x0/0x4ffc00000, data 0x15e3269/0x169d000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x499f9c1), peers [1,2] op hist [])
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 108642304 unmapped: 34037760 heap: 142680064 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 108642304 unmapped: 34037760 heap: 142680064 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 108642304 unmapped: 34037760 heap: 142680064 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 108642304 unmapped: 34037760 heap: 142680064 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4f9bbf000/0x0/0x4ffc00000, data 0x15e3269/0x169d000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x499f9c1), peers [1,2] op hist [])
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 108642304 unmapped: 34037760 heap: 142680064 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1210448 data_alloc: 218103808 data_used: 290816
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 108642304 unmapped: 34037760 heap: 142680064 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4f9bbf000/0x0/0x4ffc00000, data 0x15e3269/0x169d000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x499f9c1), peers [1,2] op hist [])
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 108642304 unmapped: 34037760 heap: 142680064 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 108642304 unmapped: 34037760 heap: 142680064 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 19.394531250s of 19.422958374s, submitted: 34
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 ms_handle_reset con 0x560c9e4ee800 session 0x560c9d630d20
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 ms_handle_reset con 0x560c9e4eec00 session 0x560c9dac5e00
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 ms_handle_reset con 0x560c9aa9e400 session 0x560c9db55a40
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 ms_handle_reset con 0x560c9c8a3000 session 0x560c9d1d0d20
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 ms_handle_reset con 0x560c9d58c800 session 0x560c9d645860
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 109330432 unmapped: 33349632 heap: 142680064 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 109330432 unmapped: 33349632 heap: 142680064 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1288735 data_alloc: 218103808 data_used: 290816
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 ms_handle_reset con 0x560c9e4ee800 session 0x560c9d5dda40
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 109330432 unmapped: 33349632 heap: 142680064 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 ms_handle_reset con 0x560c9e4ef000 session 0x560c9d5ddc20
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4f9247000/0x0/0x4ffc00000, data 0x1f5b269/0x2015000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x499f9c1), peers [1,2] op hist [])
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 ms_handle_reset con 0x560c9aa9e400 session 0x560c9db16f00
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 109330432 unmapped: 33349632 heap: 142680064 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 ms_handle_reset con 0x560c9c8a3000 session 0x560c9cf7a000
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4f9247000/0x0/0x4ffc00000, data 0x1f5b269/0x2015000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x499f9c1), peers [1,2] op hist [])
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 109330432 unmapped: 33349632 heap: 142680064 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 114008064 unmapped: 28672000 heap: 142680064 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 ms_handle_reset con 0x560c9d58c800 session 0x560c9a89d4a0
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 ms_handle_reset con 0x560c9e4ee800 session 0x560c9b906b40
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4f9246000/0x0/0x4ffc00000, data 0x1f5b279/0x2016000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x499f9c1), peers [1,2] op hist [])
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 114008064 unmapped: 28672000 heap: 142680064 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1356453 data_alloc: 234881024 data_used: 10121216
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 ms_handle_reset con 0x560c9e4efc00 session 0x560c9d20f860
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 110460928 unmapped: 32219136 heap: 142680064 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 110460928 unmapped: 32219136 heap: 142680064 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 110460928 unmapped: 32219136 heap: 142680064 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 110460928 unmapped: 32219136 heap: 142680064 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4f9bbf000/0x0/0x4ffc00000, data 0x15e3269/0x169d000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x499f9c1), peers [1,2] op hist [])
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 110460928 unmapped: 32219136 heap: 142680064 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1219082 data_alloc: 218103808 data_used: 290816
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 110460928 unmapped: 32219136 heap: 142680064 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 110460928 unmapped: 32219136 heap: 142680064 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 110460928 unmapped: 32219136 heap: 142680064 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4f9bbf000/0x0/0x4ffc00000, data 0x15e3269/0x169d000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x499f9c1), peers [1,2] op hist [])
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 110460928 unmapped: 32219136 heap: 142680064 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4f9bbf000/0x0/0x4ffc00000, data 0x15e3269/0x169d000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x499f9c1), peers [1,2] op hist [])
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 110460928 unmapped: 32219136 heap: 142680064 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1219082 data_alloc: 218103808 data_used: 290816
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 110460928 unmapped: 32219136 heap: 142680064 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 110460928 unmapped: 32219136 heap: 142680064 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 110460928 unmapped: 32219136 heap: 142680064 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 110460928 unmapped: 32219136 heap: 142680064 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 110460928 unmapped: 32219136 heap: 142680064 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1219082 data_alloc: 218103808 data_used: 290816
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4f9bbf000/0x0/0x4ffc00000, data 0x15e3269/0x169d000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x499f9c1), peers [1,2] op hist [])
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 110460928 unmapped: 32219136 heap: 142680064 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 110460928 unmapped: 32219136 heap: 142680064 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 ms_handle_reset con 0x560c9aa9e400 session 0x560c9cece000
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 ms_handle_reset con 0x560c9c8a3000 session 0x560c9d20fe00
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 ms_handle_reset con 0x560c9d58c800 session 0x560c9d20e1e0
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 ms_handle_reset con 0x560c9e4ee800 session 0x560c9b88a1e0
Oct  9 10:05:06 compute-1 ceph-osd[7514]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 24.325824738s of 24.373125076s, submitted: 57
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 ms_handle_reset con 0x560c9dab3000 session 0x560c9b88b0e0
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 ms_handle_reset con 0x560c9aa9e400 session 0x560c9b88ad20
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 ms_handle_reset con 0x560c9c8a3000 session 0x560c9d814000
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 ms_handle_reset con 0x560c9d58c800 session 0x560c9d815e00
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 ms_handle_reset con 0x560c9e4ee800 session 0x560c9aaf41e0
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 109953024 unmapped: 36405248 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 109953024 unmapped: 36405248 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 ms_handle_reset con 0x560c9c8a3c00 session 0x560c9d645e00
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 109953024 unmapped: 36405248 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1318042 data_alloc: 218103808 data_used: 290816
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 ms_handle_reset con 0x560c9aa9e400 session 0x560c9dac4000
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4f8ec0000/0x0/0x4ffc00000, data 0x22e2269/0x239c000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x499f9c1), peers [1,2] op hist [])
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 ms_handle_reset con 0x560c9c8a3000 session 0x560c9d5dc960
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 ms_handle_reset con 0x560c9d58c800 session 0x560c9b8892c0
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 109961216 unmapped: 36397056 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 109961216 unmapped: 36397056 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 110182400 unmapped: 36175872 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 113221632 unmapped: 33136640 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 113221632 unmapped: 33136640 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1401186 data_alloc: 234881024 data_used: 12648448
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4f8ec0000/0x0/0x4ffc00000, data 0x22e2269/0x239c000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x499f9c1), peers [1,2] op hist [])
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 113221632 unmapped: 33136640 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 113221632 unmapped: 33136640 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 113221632 unmapped: 33136640 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 113221632 unmapped: 33136640 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4f8ec0000/0x0/0x4ffc00000, data 0x22e2269/0x239c000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x499f9c1), peers [1,2] op hist [])
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 113221632 unmapped: 33136640 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1401186 data_alloc: 234881024 data_used: 12648448
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 113221632 unmapped: 33136640 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 113303552 unmapped: 33054720 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 14.557071686s of 14.579683304s, submitted: 17
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 119914496 unmapped: 26443776 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 120365056 unmapped: 25993216 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4f8595000/0x0/0x4ffc00000, data 0x2c04269/0x2cbe000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x499f9c1), peers [1,2] op hist [])
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 120365056 unmapped: 25993216 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1488802 data_alloc: 234881024 data_used: 13692928
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 120455168 unmapped: 25903104 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 120455168 unmapped: 25903104 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4f8595000/0x0/0x4ffc00000, data 0x2c04269/0x2cbe000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x499f9c1), peers [1,2] op hist [])
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 120487936 unmapped: 25870336 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 119087104 unmapped: 27271168 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 119087104 unmapped: 27271168 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1483322 data_alloc: 234881024 data_used: 13692928
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 119087104 unmapped: 27271168 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 119087104 unmapped: 27271168 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 119087104 unmapped: 27271168 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.212394714s of 11.282471657s, submitted: 85
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4f859b000/0x0/0x4ffc00000, data 0x2c07269/0x2cc1000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x499f9c1), peers [1,2] op hist [])
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 ms_handle_reset con 0x560c9e4ee800 session 0x560c9dac5e00
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 ms_handle_reset con 0x560c9c8a2800 session 0x560c9b906960
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 119087104 unmapped: 27271168 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 ms_handle_reset con 0x560c9aa9e400 session 0x560c9d2090e0
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 109707264 unmapped: 36651008 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1232474 data_alloc: 218103808 data_used: 290816
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 109707264 unmapped: 36651008 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 109707264 unmapped: 36651008 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 109707264 unmapped: 36651008 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 109707264 unmapped: 36651008 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4f998e000/0x0/0x4ffc00000, data 0x15e3269/0x169d000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x499f9c1), peers [1,2] op hist [])
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4f998e000/0x0/0x4ffc00000, data 0x15e3269/0x169d000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x499f9c1), peers [1,2] op hist [])
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 109707264 unmapped: 36651008 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1232474 data_alloc: 218103808 data_used: 290816
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4f998e000/0x0/0x4ffc00000, data 0x15e3269/0x169d000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x499f9c1), peers [1,2] op hist [])
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4f998e000/0x0/0x4ffc00000, data 0x15e3269/0x169d000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x499f9c1), peers [1,2] op hist [])
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 109707264 unmapped: 36651008 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 109707264 unmapped: 36651008 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 109707264 unmapped: 36651008 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4f998e000/0x0/0x4ffc00000, data 0x15e3269/0x169d000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x499f9c1), peers [1,2] op hist [])
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 109707264 unmapped: 36651008 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 109707264 unmapped: 36651008 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1232474 data_alloc: 218103808 data_used: 290816
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 109707264 unmapped: 36651008 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 109707264 unmapped: 36651008 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 109707264 unmapped: 36651008 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4f998e000/0x0/0x4ffc00000, data 0x15e3269/0x169d000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x499f9c1), peers [1,2] op hist [])
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 109707264 unmapped: 36651008 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4f998e000/0x0/0x4ffc00000, data 0x15e3269/0x169d000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x499f9c1), peers [1,2] op hist [])
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4f998e000/0x0/0x4ffc00000, data 0x15e3269/0x169d000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x499f9c1), peers [1,2] op hist [])
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 109707264 unmapped: 36651008 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1232474 data_alloc: 218103808 data_used: 290816
Oct  9 10:05:06 compute-1 ceph-osd[7514]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 17.065891266s of 17.081371307s, submitted: 23
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 ms_handle_reset con 0x560c9c8a2800 session 0x560c9d5b7680
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 ms_handle_reset con 0x560c9c8a3000 session 0x560c9d208000
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 ms_handle_reset con 0x560c9d58c800 session 0x560c9b889a40
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 ms_handle_reset con 0x560c9e4ee800 session 0x560c9d738f00
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 ms_handle_reset con 0x560c9aa9e400 session 0x560c9aaf41e0
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 110182400 unmapped: 36175872 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 110182400 unmapped: 36175872 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4f917b000/0x0/0x4ffc00000, data 0x20262cb/0x20e1000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x499f9c1), peers [1,2] op hist [])
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 110182400 unmapped: 36175872 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 110182400 unmapped: 36175872 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 ms_handle_reset con 0x560c9c8a2800 session 0x560c9dac52c0
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 110182400 unmapped: 36175872 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1315745 data_alloc: 218103808 data_used: 290816
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 110182400 unmapped: 36175872 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 114966528 unmapped: 31391744 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4f917b000/0x0/0x4ffc00000, data 0x20262cb/0x20e1000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x499f9c1), peers [1,2] op hist [])
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 114999296 unmapped: 31358976 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 114999296 unmapped: 31358976 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 114999296 unmapped: 31358976 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1385209 data_alloc: 234881024 data_used: 10616832
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 114999296 unmapped: 31358976 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 114999296 unmapped: 31358976 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4f917b000/0x0/0x4ffc00000, data 0x20262cb/0x20e1000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x499f9c1), peers [1,2] op hist [])
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 114999296 unmapped: 31358976 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.787096977s of 12.822224617s, submitted: 36
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 ms_handle_reset con 0x560c9e4ee800 session 0x560c9dae7860
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 ms_handle_reset con 0x560c9a807c00 session 0x560c9dac43c0
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 ms_handle_reset con 0x560c9a807800 session 0x560c9d2ddc20
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 ms_handle_reset con 0x560c9a807800 session 0x560c9d20d4a0
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 ms_handle_reset con 0x560c9a807c00 session 0x560c9db545a0
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 115589120 unmapped: 30769152 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 115589120 unmapped: 30769152 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1429577 data_alloc: 234881024 data_used: 10629120
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 ms_handle_reset con 0x560c9aa9e400 session 0x560c9cece000
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 121470976 unmapped: 24887296 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 ms_handle_reset con 0x560c9c8a2800 session 0x560c9cecfe00
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4f7fb7000/0x0/0x4ffc00000, data 0x31ea2cb/0x32a5000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x499f9c1), peers [1,2] op hist [])
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 ms_handle_reset con 0x560c9e4ee800 session 0x560c9d5dd0e0
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 ms_handle_reset con 0x560c9e4ee800 session 0x560c9d5dd4a0
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 122920960 unmapped: 23437312 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 122929152 unmapped: 23429120 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 126525440 unmapped: 19832832 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 126631936 unmapped: 19726336 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1565980 data_alloc: 234881024 data_used: 16048128
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 126640128 unmapped: 19718144 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4f7fb6000/0x0/0x4ffc00000, data 0x31ea2db/0x32a6000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x499f9c1), peers [1,2] op hist [])
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 126640128 unmapped: 19718144 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 126640128 unmapped: 19718144 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 126640128 unmapped: 19718144 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 126640128 unmapped: 19718144 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1565204 data_alloc: 234881024 data_used: 16052224
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 126640128 unmapped: 19718144 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 126640128 unmapped: 19718144 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4f7f95000/0x0/0x4ffc00000, data 0x320b2db/0x32c7000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x499f9c1), peers [1,2] op hist [])
Oct  9 10:05:06 compute-1 ceph-osd[7514]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 14.480135918s of 14.569817543s, submitted: 128
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 127311872 unmapped: 19046400 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 127795200 unmapped: 18563072 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 127795200 unmapped: 18563072 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1616698 data_alloc: 234881024 data_used: 16429056
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 127795200 unmapped: 18563072 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 127795200 unmapped: 18563072 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4f79f7000/0x0/0x4ffc00000, data 0x37a12db/0x385d000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x499f9c1), peers [1,2] op hist [])
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 127811584 unmapped: 18546688 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 127811584 unmapped: 18546688 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 127959040 unmapped: 18399232 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1612226 data_alloc: 234881024 data_used: 16429056
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4f79e0000/0x0/0x4ffc00000, data 0x37c02db/0x387c000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x499f9c1), peers [1,2] op hist [])
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 127959040 unmapped: 18399232 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 127967232 unmapped: 18391040 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 127967232 unmapped: 18391040 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 127967232 unmapped: 18391040 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4f79dd000/0x0/0x4ffc00000, data 0x37c32db/0x387f000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x499f9c1), peers [1,2] op hist [])
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 127967232 unmapped: 18391040 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1612618 data_alloc: 234881024 data_used: 16429056
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4f79dd000/0x0/0x4ffc00000, data 0x37c32db/0x387f000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x499f9c1), peers [1,2] op hist [])
Oct  9 10:05:06 compute-1 ceph-osd[7514]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.750670433s of 12.807613373s, submitted: 72
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 128081920 unmapped: 18276352 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 128081920 unmapped: 18276352 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 128081920 unmapped: 18276352 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 ms_handle_reset con 0x560c9a807800 session 0x560c9d5dd680
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 ms_handle_reset con 0x560c9a807c00 session 0x560c9d6441e0
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4f79cf000/0x0/0x4ffc00000, data 0x37d12db/0x388d000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x499f9c1), peers [1,2] op hist [])
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 ms_handle_reset con 0x560c9ab01000 session 0x560c9b88b860
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 124411904 unmapped: 21946368 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 124411904 unmapped: 21946368 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1486899 data_alloc: 234881024 data_used: 10125312
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 124411904 unmapped: 21946368 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 124411904 unmapped: 21946368 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 ms_handle_reset con 0x560c9c8a3000 session 0x560c9b474960
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 ms_handle_reset con 0x560c9d58c800 session 0x560c9a89cd20
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 ms_handle_reset con 0x560c9a807800 session 0x560c9d20c000
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 115761152 unmapped: 30597120 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4f993a000/0x0/0x4ffc00000, data 0x15e3269/0x169d000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x499f9c1), peers [1,2] op hist [])
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 115761152 unmapped: 30597120 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 115761152 unmapped: 30597120 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1253053 data_alloc: 218103808 data_used: 8192
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 115761152 unmapped: 30597120 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 115761152 unmapped: 30597120 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4f993a000/0x0/0x4ffc00000, data 0x15e3269/0x169d000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x499f9c1), peers [1,2] op hist [])
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 115761152 unmapped: 30597120 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 115761152 unmapped: 30597120 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 115761152 unmapped: 30597120 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1253053 data_alloc: 218103808 data_used: 8192
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4f993a000/0x0/0x4ffc00000, data 0x15e3269/0x169d000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x499f9c1), peers [1,2] op hist [])
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 115761152 unmapped: 30597120 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 115761152 unmapped: 30597120 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 115761152 unmapped: 30597120 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4f993a000/0x0/0x4ffc00000, data 0x15e3269/0x169d000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x499f9c1), peers [1,2] op hist [])
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 115761152 unmapped: 30597120 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 115761152 unmapped: 30597120 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1253053 data_alloc: 218103808 data_used: 8192
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 115761152 unmapped: 30597120 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 115761152 unmapped: 30597120 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 115761152 unmapped: 30597120 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 115761152 unmapped: 30597120 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4f993a000/0x0/0x4ffc00000, data 0x15e3269/0x169d000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x499f9c1), peers [1,2] op hist [])
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 115761152 unmapped: 30597120 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1253053 data_alloc: 218103808 data_used: 8192
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4f993a000/0x0/0x4ffc00000, data 0x15e3269/0x169d000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x499f9c1), peers [1,2] op hist [])
Oct  9 10:05:06 compute-1 ceph-osd[7514]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 24.960874557s of 25.003011703s, submitted: 65
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 ms_handle_reset con 0x560c9a807c00 session 0x560c9db57c20
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 ms_handle_reset con 0x560c9ab01000 session 0x560c9b888f00
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 ms_handle_reset con 0x560c9c8a3000 session 0x560c9aaf45a0
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 ms_handle_reset con 0x560c9a807800 session 0x560c9cecfa40
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 ms_handle_reset con 0x560c9a807c00 session 0x560c9dac45a0
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 116416512 unmapped: 29941760 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 116416512 unmapped: 29941760 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 ms_handle_reset con 0x560c9ab01000 session 0x560c9b474d20
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4f98b6000/0x0/0x4ffc00000, data 0x18ec269/0x19a6000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x499f9c1), peers [1,2] op hist [])
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 ms_handle_reset con 0x560c9d58c800 session 0x560c9cd65a40
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 116416512 unmapped: 29941760 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 ms_handle_reset con 0x560c9e4ee800 session 0x560c9dac5c20
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 ms_handle_reset con 0x560c9a807800 session 0x560c9d1d1a40
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 116727808 unmapped: 29630464 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 116785152 unmapped: 29573120 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1306517 data_alloc: 218103808 data_used: 3018752
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 116785152 unmapped: 29573120 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 116785152 unmapped: 29573120 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 116785152 unmapped: 29573120 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4f9891000/0x0/0x4ffc00000, data 0x1910279/0x19cb000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x499f9c1), peers [1,2] op hist [])
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4f9891000/0x0/0x4ffc00000, data 0x1910279/0x19cb000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x499f9c1), peers [1,2] op hist [])
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 116785152 unmapped: 29573120 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 116785152 unmapped: 29573120 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1306517 data_alloc: 218103808 data_used: 3018752
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4f9891000/0x0/0x4ffc00000, data 0x1910279/0x19cb000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x499f9c1), peers [1,2] op hist [])
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 116785152 unmapped: 29573120 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 116785152 unmapped: 29573120 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 116785152 unmapped: 29573120 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.842787743s of 12.852742195s, submitted: 11
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4f8e2a000/0x0/0x4ffc00000, data 0x236b279/0x2426000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x499f9c1), peers [1,2] op hist [])
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 123723776 unmapped: 22634496 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 121266176 unmapped: 25092096 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1390535 data_alloc: 218103808 data_used: 3342336
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 121266176 unmapped: 25092096 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 121266176 unmapped: 25092096 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 121266176 unmapped: 25092096 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 121266176 unmapped: 25092096 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4f8e07000/0x0/0x4ffc00000, data 0x2381279/0x243c000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x499f9c1), peers [1,2] op hist [])
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 121266176 unmapped: 25092096 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1390551 data_alloc: 218103808 data_used: 3342336
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 121266176 unmapped: 25092096 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 121266176 unmapped: 25092096 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4f8e07000/0x0/0x4ffc00000, data 0x2381279/0x243c000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x499f9c1), peers [1,2] op hist [])
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 121266176 unmapped: 25092096 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 121266176 unmapped: 25092096 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 ms_handle_reset con 0x560c9a807c00 session 0x560c9d5dcd20
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 ms_handle_reset con 0x560c9ab01000 session 0x560c9ab54780
Oct  9 10:05:06 compute-1 ceph-osd[7514]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.491474152s of 11.555577278s, submitted: 110
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 ms_handle_reset con 0x560c9d58c800 session 0x560c9b888960
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 116932608 unmapped: 29425664 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1261042 data_alloc: 218103808 data_used: 8192
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 116932608 unmapped: 29425664 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4f9bbf000/0x0/0x4ffc00000, data 0x15e3269/0x169d000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x499f9c1), peers [1,2] op hist [])
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 116932608 unmapped: 29425664 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4f9bbf000/0x0/0x4ffc00000, data 0x15e3269/0x169d000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x499f9c1), peers [1,2] op hist [])
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 116932608 unmapped: 29425664 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 116932608 unmapped: 29425664 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 116932608 unmapped: 29425664 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1261042 data_alloc: 218103808 data_used: 8192
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4f9bbf000/0x0/0x4ffc00000, data 0x15e3269/0x169d000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x499f9c1), peers [1,2] op hist [])
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 116932608 unmapped: 29425664 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4f9bbf000/0x0/0x4ffc00000, data 0x15e3269/0x169d000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x499f9c1), peers [1,2] op hist [])
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 116932608 unmapped: 29425664 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 116932608 unmapped: 29425664 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 116932608 unmapped: 29425664 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 116932608 unmapped: 29425664 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1261042 data_alloc: 218103808 data_used: 8192
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 116932608 unmapped: 29425664 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 116932608 unmapped: 29425664 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4f9bbf000/0x0/0x4ffc00000, data 0x15e3269/0x169d000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x499f9c1), peers [1,2] op hist [])
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 116932608 unmapped: 29425664 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 116932608 unmapped: 29425664 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 116932608 unmapped: 29425664 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1261042 data_alloc: 218103808 data_used: 8192
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 116932608 unmapped: 29425664 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4f9bbf000/0x0/0x4ffc00000, data 0x15e3269/0x169d000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x499f9c1), peers [1,2] op hist [])
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 116932608 unmapped: 29425664 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 116932608 unmapped: 29425664 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4f9bbf000/0x0/0x4ffc00000, data 0x15e3269/0x169d000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x499f9c1), peers [1,2] op hist [])
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 116932608 unmapped: 29425664 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 116932608 unmapped: 29425664 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1261042 data_alloc: 218103808 data_used: 8192
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 116932608 unmapped: 29425664 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4f9bbf000/0x0/0x4ffc00000, data 0x15e3269/0x169d000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x499f9c1), peers [1,2] op hist [])
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 116932608 unmapped: 29425664 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 116932608 unmapped: 29425664 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 116932608 unmapped: 29425664 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 116932608 unmapped: 29425664 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1261042 data_alloc: 218103808 data_used: 8192
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4f9bbf000/0x0/0x4ffc00000, data 0x15e3269/0x169d000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x499f9c1), peers [1,2] op hist [])
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4f9bbf000/0x0/0x4ffc00000, data 0x15e3269/0x169d000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x499f9c1), peers [1,2] op hist [])
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 116932608 unmapped: 29425664 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4f9bbf000/0x0/0x4ffc00000, data 0x15e3269/0x169d000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x499f9c1), peers [1,2] op hist [])
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 116932608 unmapped: 29425664 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 116932608 unmapped: 29425664 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 116932608 unmapped: 29425664 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4f9bbf000/0x0/0x4ffc00000, data 0x15e3269/0x169d000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x499f9c1), peers [1,2] op hist [])
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 116940800 unmapped: 29417472 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1261042 data_alloc: 218103808 data_used: 8192
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 116940800 unmapped: 29417472 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4f9bbf000/0x0/0x4ffc00000, data 0x15e3269/0x169d000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x499f9c1), peers [1,2] op hist [])
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 116940800 unmapped: 29417472 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 116940800 unmapped: 29417472 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 116940800 unmapped: 29417472 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 116940800 unmapped: 29417472 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1261042 data_alloc: 218103808 data_used: 8192
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 116940800 unmapped: 29417472 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4f9bbf000/0x0/0x4ffc00000, data 0x15e3269/0x169d000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x499f9c1), peers [1,2] op hist [])
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 116940800 unmapped: 29417472 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4f9bbf000/0x0/0x4ffc00000, data 0x15e3269/0x169d000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x499f9c1), peers [1,2] op hist [])
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 116940800 unmapped: 29417472 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 116940800 unmapped: 29417472 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 116940800 unmapped: 29417472 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1261042 data_alloc: 218103808 data_used: 8192
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 116948992 unmapped: 29409280 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 116948992 unmapped: 29409280 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4f9bbf000/0x0/0x4ffc00000, data 0x15e3269/0x169d000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x499f9c1), peers [1,2] op hist [])
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 116948992 unmapped: 29409280 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 116948992 unmapped: 29409280 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 116948992 unmapped: 29409280 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1261042 data_alloc: 218103808 data_used: 8192
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 1800.0 total, 600.0 interval#012Cumulative writes: 12K writes, 47K keys, 12K commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.02 MB/s#012Cumulative WAL: 12K writes, 3773 syncs, 3.36 writes per sync, written: 0.03 GB, 0.02 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 3402 writes, 12K keys, 3402 commit groups, 1.0 writes per commit group, ingest: 13.97 MB, 0.02 MB/s#012Interval WAL: 3402 writes, 1492 syncs, 2.28 writes per sync, written: 0.01 GB, 0.02 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 116948992 unmapped: 29409280 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 116948992 unmapped: 29409280 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 116948992 unmapped: 29409280 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4f9bbf000/0x0/0x4ffc00000, data 0x15e3269/0x169d000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x499f9c1), peers [1,2] op hist [])
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 116948992 unmapped: 29409280 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 116948992 unmapped: 29409280 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1261042 data_alloc: 218103808 data_used: 8192
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 116965376 unmapped: 29392896 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 116965376 unmapped: 29392896 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4f9bbf000/0x0/0x4ffc00000, data 0x15e3269/0x169d000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x499f9c1), peers [1,2] op hist [])
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 116965376 unmapped: 29392896 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 116965376 unmapped: 29392896 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 116965376 unmapped: 29392896 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1261042 data_alloc: 218103808 data_used: 8192
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 116965376 unmapped: 29392896 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4f9bbf000/0x0/0x4ffc00000, data 0x15e3269/0x169d000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x499f9c1), peers [1,2] op hist [])
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 116965376 unmapped: 29392896 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: do_command 'config diff' '{prefix=config diff}'
Oct  9 10:05:06 compute-1 ceph-osd[7514]: do_command 'config diff' '{prefix=config diff}' result is 0 bytes
Oct  9 10:05:06 compute-1 ceph-osd[7514]: do_command 'config show' '{prefix=config show}'
Oct  9 10:05:06 compute-1 ceph-osd[7514]: do_command 'config show' '{prefix=config show}' result is 0 bytes
Oct  9 10:05:06 compute-1 ceph-osd[7514]: do_command 'counter dump' '{prefix=counter dump}'
Oct  9 10:05:06 compute-1 ceph-osd[7514]: do_command 'counter dump' '{prefix=counter dump}' result is 0 bytes
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 116547584 unmapped: 29810688 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: do_command 'counter schema' '{prefix=counter schema}'
Oct  9 10:05:06 compute-1 ceph-osd[7514]: do_command 'counter schema' '{prefix=counter schema}' result is 0 bytes
Oct  9 10:05:06 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 116514816 unmapped: 29843456 heap: 146358272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:05:06 compute-1 ceph-osd[7514]: osd.0 141 heartbeat osd_stat(store_statfs(0x4f9bbf000/0x0/0x4ffc00000, data 0x15e3269/0x169d000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x499f9c1), peers [1,2] op hist [])
Oct  9 10:05:06 compute-1 ceph-osd[7514]: do_command 'log dump' '{prefix=log dump}'
Oct  9 10:05:06 compute-1 ceph-mon[9795]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr services"} v 0)
Oct  9 10:05:06 compute-1 ceph-mon[9795]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3763114863' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Oct  9 10:05:06 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:05:06 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:05:06 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:05:06.516 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:05:06 compute-1 ceph-mon[9795]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr versions"} v 0)
Oct  9 10:05:06 compute-1 ceph-mon[9795]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3798516040' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Oct  9 10:05:07 compute-1 ceph-mon[9795]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon stat"} v 0)
Oct  9 10:05:07 compute-1 ceph-mon[9795]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1096470766' entity='client.admin' cmd=[{"prefix": "mon stat"}]: dispatch
Oct  9 10:05:07 compute-1 ceph-mon[9795]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon stat"} v 0)
Oct  9 10:05:07 compute-1 ceph-mon[9795]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/754514562' entity='client.admin' cmd=[{"prefix": "mon stat"}]: dispatch
Oct  9 10:05:07 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:05:07 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:05:07 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:05:07.316 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:05:07 compute-1 nova_compute[162974]: 2025-10-09 10:05:07.851 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:05:07 compute-1 ceph-mon[9795]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "node ls"} v 0)
Oct  9 10:05:07 compute-1 ceph-mon[9795]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/4220218664' entity='client.admin' cmd=[{"prefix": "node ls"}]: dispatch
Oct  9 10:05:08 compute-1 ceph-mon[9795]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd crush class ls"} v 0)
Oct  9 10:05:08 compute-1 ceph-mon[9795]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3105571077' entity='client.admin' cmd=[{"prefix": "osd crush class ls"}]: dispatch
Oct  9 10:05:08 compute-1 ceph-mon[9795]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd crush dump"} v 0)
Oct  9 10:05:08 compute-1 ceph-mon[9795]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/864826343' entity='client.admin' cmd=[{"prefix": "osd crush dump"}]: dispatch
Oct  9 10:05:08 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:05:08 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 10:05:08 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:05:08.518 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 10:05:08 compute-1 ceph-mon[9795]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "log last", "channel": "cephadm", "format": "json-pretty"} v 0)
Oct  9 10:05:08 compute-1 ceph-mon[9795]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3954604759' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm", "format": "json-pretty"}]: dispatch
Oct  9 10:05:09 compute-1 ceph-mon[9795]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "node ls"} v 0)
Oct  9 10:05:09 compute-1 ceph-mon[9795]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3023508330' entity='client.admin' cmd=[{"prefix": "node ls"}]: dispatch
Oct  9 10:05:09 compute-1 ceph-mon[9795]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr dump", "format": "json-pretty"} v 0)
Oct  9 10:05:09 compute-1 ceph-mon[9795]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/4228889836' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json-pretty"}]: dispatch
Oct  9 10:05:09 compute-1 ceph-mon[9795]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd crush show-tunables"} v 0)
Oct  9 10:05:09 compute-1 ceph-mon[9795]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1943906294' entity='client.admin' cmd=[{"prefix": "osd crush show-tunables"}]: dispatch
Oct  9 10:05:09 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:05:09 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 10:05:09 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:05:09.318 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 10:05:09 compute-1 ceph-mon[9795]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd crush class ls"} v 0)
Oct  9 10:05:09 compute-1 ceph-mon[9795]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/470775445' entity='client.admin' cmd=[{"prefix": "osd crush class ls"}]: dispatch
Oct  9 10:05:09 compute-1 ceph-mon[9795]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd crush tree", "show_shadow": true} v 0)
Oct  9 10:05:09 compute-1 ceph-mon[9795]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2923247886' entity='client.admin' cmd=[{"prefix": "osd crush tree", "show_shadow": true}]: dispatch
Oct  9 10:05:09 compute-1 ceph-mon[9795]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd crush tree", "show_shadow": true} v 0)
Oct  9 10:05:09 compute-1 ceph-mon[9795]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3035430115' entity='client.admin' cmd=[{"prefix": "osd crush tree", "show_shadow": true}]: dispatch
Oct  9 10:05:09 compute-1 podman[175674]: 2025-10-09 10:05:09.593357576 +0000 UTC m=+0.098302551 container health_status 5e1150c93314d5b38815fc8130e03b03cc91771cd442ce724d63cd33a7373f75 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, org.label-schema.build-date=20251001, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Oct  9 10:05:09 compute-1 podman[175676]: 2025-10-09 10:05:09.627257455 +0000 UTC m=+0.126215264 container health_status a9976f6a2312783f72012e1ec9b07f14297aa62297711f47c0d257996be3427a (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=multipathd, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct  9 10:05:09 compute-1 ceph-mon[9795]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd erasure-code-profile ls"} v 0)
Oct  9 10:05:09 compute-1 ceph-mon[9795]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3165839660' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile ls"}]: dispatch
Oct  9 10:05:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 10:05:10.043 71059 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  9 10:05:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 10:05:10.044 71059 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  9 10:05:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 10:05:10.045 71059 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  9 10:05:10 compute-1 ceph-mon[9795]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd metadata"} v 0)
Oct  9 10:05:10 compute-1 ceph-mon[9795]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3566093463' entity='client.admin' cmd=[{"prefix": "osd metadata"}]: dispatch
Oct  9 10:05:10 compute-1 ceph-mon[9795]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr services", "format": "json-pretty"} v 0)
Oct  9 10:05:10 compute-1 ceph-mon[9795]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3564578496' entity='client.admin' cmd=[{"prefix": "mgr services", "format": "json-pretty"}]: dispatch
Oct  9 10:05:10 compute-1 ceph-mon[9795]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd utilization"} v 0)
Oct  9 10:05:10 compute-1 ceph-mon[9795]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1794939133' entity='client.admin' cmd=[{"prefix": "osd utilization"}]: dispatch
Oct  9 10:05:10 compute-1 nova_compute[162974]: 2025-10-09 10:05:10.505 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:05:10 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:05:10 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000011s ======
Oct  9 10:05:10 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:05:10.519 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct  9 10:05:10 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 10:05:10 compute-1 ceph-mon[9795]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr versions", "format": "json-pretty"} v 0)
Oct  9 10:05:10 compute-1 ceph-mon[9795]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/4152597779' entity='client.admin' cmd=[{"prefix": "mgr versions", "format": "json-pretty"}]: dispatch
Oct  9 10:05:11 compute-1 systemd[1]: Starting Hostname Service...
Oct  9 10:05:11 compute-1 systemd[1]: Started Hostname Service.
Oct  9 10:05:11 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:05:11 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:05:11 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:05:11.321 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:05:11 compute-1 ceph-mon[9795]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "quorum_status"} v 0)
Oct  9 10:05:11 compute-1 ceph-mon[9795]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2255613207' entity='client.admin' cmd=[{"prefix": "quorum_status"}]: dispatch
Oct  9 10:05:11 compute-1 ceph-mon[9795]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Oct  9 10:05:11 compute-1 ceph-mon[9795]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1103120249' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  9 10:05:11 compute-1 ceph-mon[9795]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Oct  9 10:05:11 compute-1 ceph-mon[9795]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1103120249' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  9 10:05:11 compute-1 ceph-mon[9795]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "quorum_status"} v 0)
Oct  9 10:05:11 compute-1 ceph-mon[9795]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/4038948101' entity='client.admin' cmd=[{"prefix": "quorum_status"}]: dispatch
Oct  9 10:05:12 compute-1 ceph-mon[9795]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "versions"} v 0)
Oct  9 10:05:12 compute-1 ceph-mon[9795]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/4186151168' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch
Oct  9 10:05:12 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:05:12 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:05:12 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:05:12.521 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:05:12 compute-1 nova_compute[162974]: 2025-10-09 10:05:12.853 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:05:12 compute-1 ceph-mon[9795]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd tree", "format": "json-pretty"} v 0)
Oct  9 10:05:12 compute-1 ceph-mon[9795]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2339899425' entity='client.admin' cmd=[{"prefix": "osd tree", "format": "json-pretty"}]: dispatch
Oct  9 10:05:13 compute-1 ceph-mon[9795]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Oct  9 10:05:13 compute-1 ceph-mon[9795]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Oct  9 10:05:13 compute-1 ceph-mon[9795]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Oct  9 10:05:13 compute-1 ceph-mon[9795]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Oct  9 10:05:13 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:05:13 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:05:13 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:05:13.323 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:05:13 compute-1 ceph-mon[9795]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "config dump"} v 0)
Oct  9 10:05:13 compute-1 ceph-mon[9795]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/590546173' entity='client.admin' cmd=[{"prefix": "config dump"}]: dispatch
Oct  9 10:05:14 compute-1 ceph-mon[9795]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "detail": "detail"} v 0)
Oct  9 10:05:14 compute-1 ceph-mon[9795]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/128357852' entity='client.admin' cmd=[{"prefix": "df", "detail": "detail"}]: dispatch
Oct  9 10:05:14 compute-1 ceph-mon[9795]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df"} v 0)
Oct  9 10:05:14 compute-1 ceph-mon[9795]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/521439099' entity='client.admin' cmd=[{"prefix": "df"}]: dispatch
Oct  9 10:05:14 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:05:14 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:05:14 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:05:14.523 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:05:14 compute-1 ceph-mon[9795]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "fs dump"} v 0)
Oct  9 10:05:14 compute-1 ceph-mon[9795]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1194589566' entity='client.admin' cmd=[{"prefix": "fs dump"}]: dispatch
Oct  9 10:05:14 compute-1 ceph-mon[9795]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Oct  9 10:05:14 compute-1 ceph-mon[9795]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Oct  9 10:05:15 compute-1 ceph-mon[9795]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "fs ls"} v 0)
Oct  9 10:05:15 compute-1 ceph-mon[9795]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/704435951' entity='client.admin' cmd=[{"prefix": "fs ls"}]: dispatch
Oct  9 10:05:15 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:05:15 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:05:15 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:05:15.325 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:05:15 compute-1 nova_compute[162974]: 2025-10-09 10:05:15.507 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:05:15 compute-1 ceph-mon[9795]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct  9 10:05:15 compute-1 ceph-mon[9795]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 1800.0 total, 600.0 interval#012Cumulative writes: 5405 writes, 28K keys, 5405 commit groups, 1.0 writes per commit group, ingest: 0.07 GB, 0.04 MB/s#012Cumulative WAL: 5405 writes, 5405 syncs, 1.00 writes per sync, written: 0.07 GB, 0.04 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 1510 writes, 7582 keys, 1510 commit groups, 1.0 writes per commit group, ingest: 17.37 MB, 0.03 MB/s#012Interval WAL: 1510 writes, 1510 syncs, 1.00 writes per sync, written: 0.02 GB, 0.03 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0    409.5      0.11              0.07        15    0.007       0      0       0.0       0.0#012  L6      1/0   13.47 MB   0.0      0.2     0.0      0.2       0.2      0.0       0.0   4.1    423.3    363.0      0.49              0.27        14    0.035     72K   7333       0.0       0.0#012 Sum      1/0   13.47 MB   0.0      0.2     0.0      0.2       0.2      0.1       0.0   5.1    348.7    371.2      0.60              0.35        29    0.021     72K   7333       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   7.0    337.4    344.0      0.22              0.13        10    0.022     30K   2536       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Low      0/0    0.00 KB   0.0      0.2     0.0      0.2       0.2      0.0       0.0   0.0    423.3    363.0      0.49              0.27        14    0.035     72K   7333       0.0       0.0#012High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0    412.5      0.10              0.07        14    0.007       0      0       0.0       0.0#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      2.0      0.00              0.00         1    0.001       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1800.0 total, 600.0 interval#012Flush(GB): cumulative 0.042, interval 0.010#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.22 GB write, 0.12 MB/s write, 0.20 GB read, 0.12 MB/s read, 0.6 seconds#012Interval compaction: 0.07 GB write, 0.13 MB/s write, 0.07 GB read, 0.12 MB/s read, 0.2 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55e4b55c29b0#2 capacity: 304.00 MB usage: 17.44 MB table_size: 0 occupancy: 18446744073709551615 collections: 4 last_copies: 0 last_secs: 9.5e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(1144,16.86 MB,5.54594%) FilterBlock(29,216.98 KB,0.0697036%) IndexBlock(29,378.44 KB,0.121568%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **
Oct  9 10:05:15 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 10:05:15 compute-1 ceph-mon[9795]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mds stat"} v 0)
Oct  9 10:05:15 compute-1 ceph-mon[9795]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2862235529' entity='client.admin' cmd=[{"prefix": "mds stat"}]: dispatch
Oct  9 10:05:16 compute-1 kernel: cfg80211: Loading compiled-in X.509 certificates for regulatory database
Oct  9 10:05:16 compute-1 kernel: Loaded X.509 cert 'sforshee: 00b28ddf47aef9cea7'
Oct  9 10:05:16 compute-1 kernel: Loaded X.509 cert 'wens: 61c038651aabdcf94bd0ac7ff06c7248db18c600'
Oct  9 10:05:16 compute-1 kernel: platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
Oct  9 10:05:16 compute-1 kernel: cfg80211: failed to load regulatory.db
Oct  9 10:05:16 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:05:16 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:05:16 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:05:16.526 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:05:16 compute-1 ceph-mon[9795]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd blocklist ls"} v 0)
Oct  9 10:05:16 compute-1 ceph-mon[9795]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3952041900' entity='client.admin' cmd=[{"prefix": "osd blocklist ls"}]: dispatch
Oct  9 10:05:17 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:05:17 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:05:17 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:05:17.328 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:05:17 compute-1 ceph-mon[9795]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd dump"} v 0)
Oct  9 10:05:17 compute-1 ceph-mon[9795]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2510247055' entity='client.admin' cmd=[{"prefix": "osd dump"}]: dispatch
Oct  9 10:05:17 compute-1 nova_compute[162974]: 2025-10-09 10:05:17.855 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:05:18 compute-1 ceph-mon[9795]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd numa-status"} v 0)
Oct  9 10:05:18 compute-1 ceph-mon[9795]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1206458610' entity='client.admin' cmd=[{"prefix": "osd numa-status"}]: dispatch
Oct  9 10:05:18 compute-1 ovs-appctl[178009]: ovs|00001|daemon_unix|WARN|/var/run/openvswitch/ovs-monitor-ipsec.pid: open: No such file or directory
Oct  9 10:05:18 compute-1 ovs-appctl[178023]: ovs|00001|daemon_unix|WARN|/var/run/openvswitch/ovs-monitor-ipsec.pid: open: No such file or directory
Oct  9 10:05:18 compute-1 ovs-appctl[178031]: ovs|00001|daemon_unix|WARN|/var/run/openvswitch/ovs-monitor-ipsec.pid: open: No such file or directory
Oct  9 10:05:18 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:05:18 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:05:18 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:05:18.528 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:05:19 compute-1 ceph-mon[9795]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd pool ls", "detail": "detail"} v 0)
Oct  9 10:05:19 compute-1 ceph-mon[9795]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2325890707' entity='client.admin' cmd=[{"prefix": "osd pool ls", "detail": "detail"}]: dispatch
Oct  9 10:05:19 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:05:19 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:05:19 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:05:19.330 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:05:19 compute-1 ceph-mon[9795]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd stat"} v 0)
Oct  9 10:05:19 compute-1 ceph-mon[9795]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1843500510' entity='client.admin' cmd=[{"prefix": "osd stat"}]: dispatch
Oct  9 10:05:20 compute-1 ceph-mon[9795]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "status"} v 0)
Oct  9 10:05:20 compute-1 ceph-mon[9795]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2853944145' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Oct  9 10:05:20 compute-1 nova_compute[162974]: 2025-10-09 10:05:20.508 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:05:20 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:05:20 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:05:20 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:05:20.530 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:05:20 compute-1 ceph-mon[9795]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "time-sync-status"} v 0)
Oct  9 10:05:20 compute-1 ceph-mon[9795]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1420165825' entity='client.admin' cmd=[{"prefix": "time-sync-status"}]: dispatch
Oct  9 10:05:20 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 10:05:20 compute-1 ceph-mon[9795]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "config dump", "format": "json-pretty"} v 0)
Oct  9 10:05:20 compute-1 ceph-mon[9795]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3769982294' entity='client.admin' cmd=[{"prefix": "config dump", "format": "json-pretty"}]: dispatch
Oct  9 10:05:21 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:05:21 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:05:21 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:05:21.332 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:05:21 compute-1 podman[179214]: 2025-10-09 10:05:21.504790792 +0000 UTC m=+0.131492051 container health_status 36bf385830b85e085dc3ff9729118ef141f0f1a0fdc2513f7947ab6ab795421e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3)
Oct  9 10:05:21 compute-1 ceph-mon[9795]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "detail": "detail", "format": "json-pretty"} v 0)
Oct  9 10:05:21 compute-1 ceph-mon[9795]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2700875887' entity='client.admin' cmd=[{"prefix": "df", "detail": "detail", "format": "json-pretty"}]: dispatch
Oct  9 10:05:21 compute-1 ceph-mon[9795]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "detail": "detail", "format": "json-pretty"} v 0)
Oct  9 10:05:21 compute-1 ceph-mon[9795]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1640101779' entity='client.admin' cmd=[{"prefix": "df", "detail": "detail", "format": "json-pretty"}]: dispatch
Oct  9 10:05:21 compute-1 ceph-mon[9795]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json-pretty"} v 0)
Oct  9 10:05:21 compute-1 ceph-mon[9795]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2969326191' entity='client.admin' cmd=[{"prefix": "df", "format": "json-pretty"}]: dispatch
Oct  9 10:05:22 compute-1 ceph-mon[9795]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "fs dump", "format": "json-pretty"} v 0)
Oct  9 10:05:22 compute-1 ceph-mon[9795]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/757521440' entity='client.admin' cmd=[{"prefix": "fs dump", "format": "json-pretty"}]: dispatch
Oct  9 10:05:22 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:05:22 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 10:05:22 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:05:22.532 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 10:05:22 compute-1 nova_compute[162974]: 2025-10-09 10:05:22.857 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:05:23 compute-1 ceph-mon[9795]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mds stat", "format": "json-pretty"} v 0)
Oct  9 10:05:23 compute-1 ceph-mon[9795]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3047035409' entity='client.admin' cmd=[{"prefix": "mds stat", "format": "json-pretty"}]: dispatch
Oct  9 10:05:23 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:05:23 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:05:23 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:05:23.335 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:05:23 compute-1 ceph-mon[9795]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json-pretty"} v 0)
Oct  9 10:05:23 compute-1 ceph-mon[9795]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2416067552' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json-pretty"}]: dispatch
Oct  9 10:05:24 compute-1 ceph-mon[9795]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json-pretty"} v 0)
Oct  9 10:05:24 compute-1 ceph-mon[9795]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1831731513' entity='client.admin' cmd=[{"prefix": "osd blocklist ls", "format": "json-pretty"}]: dispatch
Oct  9 10:05:24 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:05:24 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:05:24 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:05:24.534 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:05:25 compute-1 ceph-mon[9795]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd dump", "format": "json-pretty"} v 0)
Oct  9 10:05:25 compute-1 ceph-mon[9795]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3732170856' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json-pretty"}]: dispatch
Oct  9 10:05:25 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:05:25 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:05:25 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:05:25.337 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:05:25 compute-1 ceph-mon[9795]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd numa-status", "format": "json-pretty"} v 0)
Oct  9 10:05:25 compute-1 ceph-mon[9795]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3379022171' entity='client.admin' cmd=[{"prefix": "osd numa-status", "format": "json-pretty"}]: dispatch
Oct  9 10:05:25 compute-1 nova_compute[162974]: 2025-10-09 10:05:25.510 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:05:25 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 10:05:26 compute-1 ceph-mon[9795]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd pool ls", "detail": "detail", "format": "json-pretty"} v 0)
Oct  9 10:05:26 compute-1 ceph-mon[9795]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3687400171' entity='client.admin' cmd=[{"prefix": "osd pool ls", "detail": "detail", "format": "json-pretty"}]: dispatch
Oct  9 10:05:26 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:05:26 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:05:26 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:05:26.536 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:05:26 compute-1 ceph-mon[9795]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd stat", "format": "json-pretty"} v 0)
Oct  9 10:05:26 compute-1 ceph-mon[9795]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1096908375' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json-pretty"}]: dispatch
Oct  9 10:05:27 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:05:27 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:05:27 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:05:27.339 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:05:27 compute-1 virtqemud[162526]: Failed to connect socket to '/var/run/libvirt/virtstoraged-sock-ro': No such file or directory
Oct  9 10:05:27 compute-1 systemd[1]: Starting Time & Date Service...
Oct  9 10:05:27 compute-1 ceph-mon[9795]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0)
Oct  9 10:05:27 compute-1 ceph-mon[9795]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/248433905' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Oct  9 10:05:27 compute-1 systemd[1]: Started Time & Date Service.
Oct  9 10:05:27 compute-1 nova_compute[162974]: 2025-10-09 10:05:27.858 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:05:27 compute-1 ceph-mon[9795]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "time-sync-status", "format": "json-pretty"} v 0)
Oct  9 10:05:27 compute-1 ceph-mon[9795]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2772078955' entity='client.admin' cmd=[{"prefix": "time-sync-status", "format": "json-pretty"}]: dispatch
Oct  9 10:05:28 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:05:28 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:05:28 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:05:28.538 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:05:29 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:05:29 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:05:29 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:05:29.340 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:05:30 compute-1 ceph-mon[9795]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0)
Oct  9 10:05:30 compute-1 ceph-mon[9795]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2865180231' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Oct  9 10:05:30 compute-1 nova_compute[162974]: 2025-10-09 10:05:30.512 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:05:30 compute-1 podman[180388]: 2025-10-09 10:05:30.52822809 +0000 UTC m=+0.037643077 container health_status dd7a5da700b8ba72ceba21d69aecf00384a339e317089fce3f8212a1183599aa (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid)
Oct  9 10:05:30 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:05:30 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 10:05:30 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:05:30.540 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 10:05:30 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 10:05:31 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:05:31 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:05:31 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:05:31.344 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:05:32 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:05:32 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:05:32 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:05:32.542 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:05:32 compute-1 nova_compute[162974]: 2025-10-09 10:05:32.860 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:05:33 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:05:33 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:05:33 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:05:33.346 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:05:34 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:05:34 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 10:05:34 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:05:34.544 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 10:05:35 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:05:35 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:05:35 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:05:35.348 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:05:35 compute-1 nova_compute[162974]: 2025-10-09 10:05:35.515 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:05:35 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 10:05:36 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:05:36 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:05:36 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:05:36.546 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:05:37 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:05:37 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:05:37 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:05:37.350 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:05:37 compute-1 nova_compute[162974]: 2025-10-09 10:05:37.862 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:05:38 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:05:38 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:05:38 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:05:38.548 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:05:39 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:05:39 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:05:39 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:05:39.352 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:05:40 compute-1 nova_compute[162974]: 2025-10-09 10:05:40.517 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:05:40 compute-1 podman[180442]: 2025-10-09 10:05:40.540233377 +0000 UTC m=+0.047820943 container health_status a9976f6a2312783f72012e1ec9b07f14297aa62297711f47c0d257996be3427a (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001)
Oct  9 10:05:40 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:05:40 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:05:40 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:05:40.550 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:05:40 compute-1 podman[180441]: 2025-10-09 10:05:40.56226963 +0000 UTC m=+0.071342987 container health_status 5e1150c93314d5b38815fc8130e03b03cc91771cd442ce724d63cd33a7373f75 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_metadata_agent)
Oct  9 10:05:40 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 10:05:41 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:05:41 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:05:41 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:05:41.354 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:05:42 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:05:42 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:05:42 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:05:42.553 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:05:42 compute-1 nova_compute[162974]: 2025-10-09 10:05:42.864 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:05:43 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:05:43 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:05:43 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:05:43.355 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:05:43 compute-1 systemd[1]: virtsecretd.service: Deactivated successfully.
Oct  9 10:05:44 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:05:44 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:05:44 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:05:44.555 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:05:45 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:05:45 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:05:45 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:05:45.356 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:05:45 compute-1 nova_compute[162974]: 2025-10-09 10:05:45.519 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:05:45 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 10:05:46 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:05:46 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:05:46 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:05:46.557 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:05:47 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:05:47 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:05:47 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:05:47.359 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:05:47 compute-1 nova_compute[162974]: 2025-10-09 10:05:47.865 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:05:47 compute-1 podman[180641]: 2025-10-09 10:05:47.92501189 +0000 UTC m=+0.025903314 container create be6546a86219fd9d50677c4dfe9f171eb1dba8ee2da7326c6dc6d143a62ffce6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=clever_mccarthy, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_REF=squid, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct  9 10:05:47 compute-1 systemd[1]: Started libpod-conmon-be6546a86219fd9d50677c4dfe9f171eb1dba8ee2da7326c6dc6d143a62ffce6.scope.
Oct  9 10:05:47 compute-1 systemd[1]: Started libcrun container.
Oct  9 10:05:47 compute-1 podman[180641]: 2025-10-09 10:05:47.979532112 +0000 UTC m=+0.080423526 container init be6546a86219fd9d50677c4dfe9f171eb1dba8ee2da7326c6dc6d143a62ffce6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=clever_mccarthy, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_REF=squid, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Oct  9 10:05:47 compute-1 podman[180641]: 2025-10-09 10:05:47.984671501 +0000 UTC m=+0.085562915 container start be6546a86219fd9d50677c4dfe9f171eb1dba8ee2da7326c6dc6d143a62ffce6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=clever_mccarthy, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, OSD_FLAVOR=default)
Oct  9 10:05:47 compute-1 podman[180641]: 2025-10-09 10:05:47.987368105 +0000 UTC m=+0.088259539 container attach be6546a86219fd9d50677c4dfe9f171eb1dba8ee2da7326c6dc6d143a62ffce6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=clever_mccarthy, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_REF=squid, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  9 10:05:47 compute-1 clever_mccarthy[180654]: 167 167
Oct  9 10:05:47 compute-1 systemd[1]: libpod-be6546a86219fd9d50677c4dfe9f171eb1dba8ee2da7326c6dc6d143a62ffce6.scope: Deactivated successfully.
Oct  9 10:05:47 compute-1 podman[180641]: 2025-10-09 10:05:47.99009752 +0000 UTC m=+0.090988934 container died be6546a86219fd9d50677c4dfe9f171eb1dba8ee2da7326c6dc6d143a62ffce6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=clever_mccarthy, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=squid, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct  9 10:05:48 compute-1 systemd[1]: var-lib-containers-storage-overlay-6d256ef6798a82363a63098da41d9e6ae889f9a3c6e2f13c3f6c106729ec0ffc-merged.mount: Deactivated successfully.
Oct  9 10:05:48 compute-1 podman[180641]: 2025-10-09 10:05:47.914479705 +0000 UTC m=+0.015371139 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  9 10:05:48 compute-1 podman[180641]: 2025-10-09 10:05:48.014218454 +0000 UTC m=+0.115109868 container remove be6546a86219fd9d50677c4dfe9f171eb1dba8ee2da7326c6dc6d143a62ffce6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=clever_mccarthy, CEPH_REF=squid, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  9 10:05:48 compute-1 systemd[1]: libpod-conmon-be6546a86219fd9d50677c4dfe9f171eb1dba8ee2da7326c6dc6d143a62ffce6.scope: Deactivated successfully.
Oct  9 10:05:48 compute-1 podman[180676]: 2025-10-09 10:05:48.139312431 +0000 UTC m=+0.028153186 container create dd710ecd726b5f0b076e3c9cbf5cff026a381b84019bbadcfbaee0af9b25d1e0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=naughty_lehmann, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Oct  9 10:05:48 compute-1 systemd[1]: Started libpod-conmon-dd710ecd726b5f0b076e3c9cbf5cff026a381b84019bbadcfbaee0af9b25d1e0.scope.
Oct  9 10:05:48 compute-1 systemd[1]: Started libcrun container.
Oct  9 10:05:48 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/64d9434ee2b923c49d8e33bdbe068f53ca52b2d6ba6f4a4866ee42307babef6c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  9 10:05:48 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/64d9434ee2b923c49d8e33bdbe068f53ca52b2d6ba6f4a4866ee42307babef6c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  9 10:05:48 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/64d9434ee2b923c49d8e33bdbe068f53ca52b2d6ba6f4a4866ee42307babef6c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  9 10:05:48 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/64d9434ee2b923c49d8e33bdbe068f53ca52b2d6ba6f4a4866ee42307babef6c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  9 10:05:48 compute-1 podman[180676]: 2025-10-09 10:05:48.208910679 +0000 UTC m=+0.097751453 container init dd710ecd726b5f0b076e3c9cbf5cff026a381b84019bbadcfbaee0af9b25d1e0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=naughty_lehmann, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  9 10:05:48 compute-1 podman[180676]: 2025-10-09 10:05:48.214768903 +0000 UTC m=+0.103609657 container start dd710ecd726b5f0b076e3c9cbf5cff026a381b84019bbadcfbaee0af9b25d1e0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=naughty_lehmann, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Oct  9 10:05:48 compute-1 podman[180676]: 2025-10-09 10:05:48.216126953 +0000 UTC m=+0.104967707 container attach dd710ecd726b5f0b076e3c9cbf5cff026a381b84019bbadcfbaee0af9b25d1e0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=naughty_lehmann, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid)
Oct  9 10:05:48 compute-1 podman[180676]: 2025-10-09 10:05:48.127995568 +0000 UTC m=+0.016836341 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  9 10:05:48 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 10:05:48 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 10:05:48 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:05:48 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:05:48 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:05:48.559 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:05:48 compute-1 naughty_lehmann[180690]: [
Oct  9 10:05:48 compute-1 naughty_lehmann[180690]:    {
Oct  9 10:05:48 compute-1 naughty_lehmann[180690]:        "available": false,
Oct  9 10:05:48 compute-1 naughty_lehmann[180690]:        "being_replaced": false,
Oct  9 10:05:48 compute-1 naughty_lehmann[180690]:        "ceph_device_lvm": false,
Oct  9 10:05:48 compute-1 naughty_lehmann[180690]:        "device_id": "QEMU_DVD-ROM_QM00001",
Oct  9 10:05:48 compute-1 naughty_lehmann[180690]:        "lsm_data": {},
Oct  9 10:05:48 compute-1 naughty_lehmann[180690]:        "lvs": [],
Oct  9 10:05:48 compute-1 naughty_lehmann[180690]:        "path": "/dev/sr0",
Oct  9 10:05:48 compute-1 naughty_lehmann[180690]:        "rejected_reasons": [
Oct  9 10:05:48 compute-1 naughty_lehmann[180690]:            "Insufficient space (<5GB)",
Oct  9 10:05:48 compute-1 naughty_lehmann[180690]:            "Has a FileSystem"
Oct  9 10:05:48 compute-1 naughty_lehmann[180690]:        ],
Oct  9 10:05:48 compute-1 naughty_lehmann[180690]:        "sys_api": {
Oct  9 10:05:48 compute-1 naughty_lehmann[180690]:            "actuators": null,
Oct  9 10:05:48 compute-1 naughty_lehmann[180690]:            "device_nodes": [
Oct  9 10:05:48 compute-1 naughty_lehmann[180690]:                "sr0"
Oct  9 10:05:48 compute-1 naughty_lehmann[180690]:            ],
Oct  9 10:05:48 compute-1 naughty_lehmann[180690]:            "devname": "sr0",
Oct  9 10:05:48 compute-1 naughty_lehmann[180690]:            "human_readable_size": "474.00 KB",
Oct  9 10:05:48 compute-1 naughty_lehmann[180690]:            "id_bus": "ata",
Oct  9 10:05:48 compute-1 naughty_lehmann[180690]:            "model": "QEMU DVD-ROM",
Oct  9 10:05:48 compute-1 naughty_lehmann[180690]:            "nr_requests": "64",
Oct  9 10:05:48 compute-1 naughty_lehmann[180690]:            "parent": "/dev/sr0",
Oct  9 10:05:48 compute-1 naughty_lehmann[180690]:            "partitions": {},
Oct  9 10:05:48 compute-1 naughty_lehmann[180690]:            "path": "/dev/sr0",
Oct  9 10:05:48 compute-1 naughty_lehmann[180690]:            "removable": "1",
Oct  9 10:05:48 compute-1 naughty_lehmann[180690]:            "rev": "2.5+",
Oct  9 10:05:48 compute-1 naughty_lehmann[180690]:            "ro": "0",
Oct  9 10:05:48 compute-1 naughty_lehmann[180690]:            "rotational": "0",
Oct  9 10:05:48 compute-1 naughty_lehmann[180690]:            "sas_address": "",
Oct  9 10:05:48 compute-1 naughty_lehmann[180690]:            "sas_device_handle": "",
Oct  9 10:05:48 compute-1 naughty_lehmann[180690]:            "scheduler_mode": "mq-deadline",
Oct  9 10:05:48 compute-1 naughty_lehmann[180690]:            "sectors": 0,
Oct  9 10:05:48 compute-1 naughty_lehmann[180690]:            "sectorsize": "2048",
Oct  9 10:05:48 compute-1 naughty_lehmann[180690]:            "size": 485376.0,
Oct  9 10:05:48 compute-1 naughty_lehmann[180690]:            "support_discard": "2048",
Oct  9 10:05:48 compute-1 naughty_lehmann[180690]:            "type": "disk",
Oct  9 10:05:48 compute-1 naughty_lehmann[180690]:            "vendor": "QEMU"
Oct  9 10:05:48 compute-1 naughty_lehmann[180690]:        }
Oct  9 10:05:48 compute-1 naughty_lehmann[180690]:    }
Oct  9 10:05:48 compute-1 naughty_lehmann[180690]: ]
Oct  9 10:05:48 compute-1 systemd[1]: libpod-dd710ecd726b5f0b076e3c9cbf5cff026a381b84019bbadcfbaee0af9b25d1e0.scope: Deactivated successfully.
Oct  9 10:05:48 compute-1 conmon[180690]: conmon dd710ecd726b5f0b076e <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-dd710ecd726b5f0b076e3c9cbf5cff026a381b84019bbadcfbaee0af9b25d1e0.scope/container/memory.events
Oct  9 10:05:48 compute-1 podman[180676]: 2025-10-09 10:05:48.770425402 +0000 UTC m=+0.659266155 container died dd710ecd726b5f0b076e3c9cbf5cff026a381b84019bbadcfbaee0af9b25d1e0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=naughty_lehmann, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=squid, io.buildah.version=1.40.1)
Oct  9 10:05:48 compute-1 systemd[1]: var-lib-containers-storage-overlay-64d9434ee2b923c49d8e33bdbe068f53ca52b2d6ba6f4a4866ee42307babef6c-merged.mount: Deactivated successfully.
Oct  9 10:05:48 compute-1 podman[180676]: 2025-10-09 10:05:48.794918186 +0000 UTC m=+0.683758940 container remove dd710ecd726b5f0b076e3c9cbf5cff026a381b84019bbadcfbaee0af9b25d1e0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=naughty_lehmann, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  9 10:05:48 compute-1 systemd[1]: libpod-conmon-dd710ecd726b5f0b076e3c9cbf5cff026a381b84019bbadcfbaee0af9b25d1e0.scope: Deactivated successfully.
Oct  9 10:05:49 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:05:49 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:05:49 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:05:49.361 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:05:49 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 10:05:49 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 10:05:49 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 10:05:49 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 10:05:49 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  9 10:05:49 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 10:05:49 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 10:05:49 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  9 10:05:50 compute-1 nova_compute[162974]: 2025-10-09 10:05:50.521 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:05:50 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:05:50 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:05:50 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:05:50.561 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:05:50 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 10:05:51 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:05:51 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:05:51 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:05:51.363 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:05:51 compute-1 podman[182067]: 2025-10-09 10:05:51.846298587 +0000 UTC m=+0.061780300 container health_status 36bf385830b85e085dc3ff9729118ef141f0f1a0fdc2513f7947ab6ab795421e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Oct  9 10:05:52 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:05:52 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 10:05:52 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:05:52.563 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 10:05:52 compute-1 nova_compute[162974]: 2025-10-09 10:05:52.867 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:05:53 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:05:53 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:05:53 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:05:53.365 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:05:53 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 10:05:53 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 10:05:54 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:05:54 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:05:54 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:05:54.565 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:05:55 compute-1 nova_compute[162974]: 2025-10-09 10:05:55.114 2 DEBUG oslo_service.periodic_task [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  9 10:05:55 compute-1 nova_compute[162974]: 2025-10-09 10:05:55.114 2 DEBUG nova.compute.manager [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Oct  9 10:05:55 compute-1 nova_compute[162974]: 2025-10-09 10:05:55.136 2 DEBUG nova.compute.manager [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Oct  9 10:05:55 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:05:55 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:05:55 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:05:55.368 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:05:55 compute-1 nova_compute[162974]: 2025-10-09 10:05:55.524 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:05:55 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 10:05:56 compute-1 nova_compute[162974]: 2025-10-09 10:05:56.113 2 DEBUG oslo_service.periodic_task [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  9 10:05:56 compute-1 nova_compute[162974]: 2025-10-09 10:05:56.114 2 DEBUG nova.compute.manager [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Oct  9 10:05:56 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:05:56 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 10:05:56 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:05:56.567 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 10:05:57 compute-1 nova_compute[162974]: 2025-10-09 10:05:57.124 2 DEBUG oslo_service.periodic_task [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  9 10:05:57 compute-1 nova_compute[162974]: 2025-10-09 10:05:57.124 2 DEBUG oslo_service.periodic_task [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  9 10:05:57 compute-1 nova_compute[162974]: 2025-10-09 10:05:57.124 2 DEBUG oslo_service.periodic_task [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  9 10:05:57 compute-1 nova_compute[162974]: 2025-10-09 10:05:57.146 2 DEBUG oslo_concurrency.lockutils [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  9 10:05:57 compute-1 nova_compute[162974]: 2025-10-09 10:05:57.147 2 DEBUG oslo_concurrency.lockutils [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  9 10:05:57 compute-1 nova_compute[162974]: 2025-10-09 10:05:57.147 2 DEBUG oslo_concurrency.lockutils [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  9 10:05:57 compute-1 nova_compute[162974]: 2025-10-09 10:05:57.147 2 DEBUG nova.compute.resource_tracker [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  9 10:05:57 compute-1 nova_compute[162974]: 2025-10-09 10:05:57.147 2 DEBUG oslo_concurrency.processutils [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  9 10:05:57 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:05:57 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:05:57 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:05:57.370 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:05:57 compute-1 ceph-mon[9795]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct  9 10:05:57 compute-1 ceph-mon[9795]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1189174758' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  9 10:05:57 compute-1 nova_compute[162974]: 2025-10-09 10:05:57.521 2 DEBUG oslo_concurrency.processutils [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.373s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  9 10:05:57 compute-1 systemd[1]: systemd-timedated.service: Deactivated successfully.
Oct  9 10:05:57 compute-1 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Oct  9 10:05:57 compute-1 nova_compute[162974]: 2025-10-09 10:05:57.721 2 WARNING nova.virt.libvirt.driver [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  9 10:05:57 compute-1 nova_compute[162974]: 2025-10-09 10:05:57.722 2 DEBUG nova.compute.resource_tracker [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4796MB free_disk=59.988277435302734GB free_vcpus=4 pci_devices=[{"dev_id": "pci_0000_00_03_3", "address": "0000:00:03.3", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_0", "address": "0000:00:1f.0", "product_id": "2918", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2918", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_4", "address": "0000:00:03.4", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_7", "address": "0000:00:03.7", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_3", "address": "0000:00:02.3", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_7", "address": "0000:00:02.7", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_05_00_0", "address": "0000:05:00.0", "product_id": "1045", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1045", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_2", "address": "0000:00:03.2", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_6", "address": "0000:00:02.6", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": 
"label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_3", "address": "0000:00:1f.3", "product_id": "2930", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2930", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_2", "address": "0000:00:1f.2", "product_id": "2922", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2922", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_03_00_0", "address": "0000:03:00.0", "product_id": "1041", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1041", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_2", "address": "0000:00:02.2", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_5", "address": "0000:00:02.5", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_04_00_0", "address": "0000:04:00.0", "product_id": "1042", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1042", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_5", "address": "0000:00:03.5", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_07_00_0", "address": "0000:07:00.0", "product_id": "1041", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1041", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_1", "address": "0000:00:03.1", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": 
"0000:00:00.0", "product_id": "29c0", "vendor_id": "8086", "numa_node": null, "label": "label_8086_29c0", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_06_00_0", "address": "0000:06:00.0", "product_id": "1044", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1044", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_02_01_0", "address": "0000:02:01.0", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_6", "address": "0000:00:03.6", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_1", "address": "0000:00:02.1", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_4", "address": "0000:00:02.4", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_01_00_0", "address": "0000:01:00.0", "product_id": "000e", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000e", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  9 10:05:57 compute-1 nova_compute[162974]: 2025-10-09 10:05:57.722 2 DEBUG oslo_concurrency.lockutils [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  9 10:05:57 compute-1 nova_compute[162974]: 2025-10-09 10:05:57.723 2 DEBUG oslo_concurrency.lockutils [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  9 10:05:57 compute-1 nova_compute[162974]: 2025-10-09 10:05:57.864 2 DEBUG nova.compute.resource_tracker [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Total usable vcpus: 4, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  9 10:05:57 compute-1 nova_compute[162974]: 2025-10-09 10:05:57.865 2 DEBUG nova.compute.resource_tracker [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=4 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  9 10:05:57 compute-1 nova_compute[162974]: 2025-10-09 10:05:57.868 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:05:57 compute-1 nova_compute[162974]: 2025-10-09 10:05:57.953 2 DEBUG nova.scheduler.client.report [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Refreshing inventories for resource provider 79aa81b0-5a5d-4643-a355-ec5461cb321a _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Oct  9 10:05:57 compute-1 nova_compute[162974]: 2025-10-09 10:05:57.968 2 DEBUG nova.scheduler.client.report [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Updating ProviderTree inventory for provider 79aa81b0-5a5d-4643-a355-ec5461cb321a from _refresh_and_get_inventory using data: {'VCPU': {'total': 4, 'reserved': 0, 'min_unit': 1, 'max_unit': 4, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Oct  9 10:05:57 compute-1 nova_compute[162974]: 2025-10-09 10:05:57.968 2 DEBUG nova.compute.provider_tree [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Updating inventory in ProviderTree for provider 79aa81b0-5a5d-4643-a355-ec5461cb321a with inventory: {'VCPU': {'total': 4, 'reserved': 0, 'min_unit': 1, 'max_unit': 4, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Oct  9 10:05:57 compute-1 nova_compute[162974]: 2025-10-09 10:05:57.980 2 DEBUG nova.scheduler.client.report [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Refreshing aggregate associations for resource provider 79aa81b0-5a5d-4643-a355-ec5461cb321a, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Oct  9 10:05:57 compute-1 nova_compute[162974]: 2025-10-09 10:05:57.996 2 DEBUG nova.scheduler.client.report [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Refreshing trait associations for resource provider 79aa81b0-5a5d-4643-a355-ec5461cb321a, traits: HW_CPU_X86_AESNI,HW_CPU_X86_MMX,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_GRAPHICS_MODEL_NONE,HW_CPU_X86_BMI,HW_CPU_X86_AMD_SVM,COMPUTE_GRAPHICS_MODEL_CIRRUS,HW_CPU_X86_SSE4A,HW_CPU_X86_ABM,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_STORAGE_BUS_SATA,COMPUTE_STORAGE_BUS_FDC,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_DEVICE_TAGGING,HW_CPU_X86_SHA,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_IDE,COMPUTE_VOLUME_ATTACH_WITH_TAG,HW_CPU_X86_SSE,COMPUTE_NET_VIF_MODEL_RTL8139,HW_CPU_X86_BMI2,COMPUTE_VIOMMU_MODEL_INTEL,HW_CPU_X86_SSE2,COMPUTE_STORAGE_BUS_USB,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_F16C,COMPUTE_SECURITY_TPM_1_2,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_TRUSTED_CERTS,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_PCNET,HW_CPU_X86_SSSE3,COMPUTE_NODE,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_GRAPHICS_MODEL_VGA,HW_CPU_X86_SVM,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_SECURITY_TPM_2_0,HW_CPU_X86_AVX,COMPUTE_VOLUME_EXTEND,HW_CPU_X86_SSE42,HW_CPU_X86_AVX2,HW_CPU_X86_FMA3,COMPUTE_RESCUE_BFV,HW_CPU_X86_SSE41,COMPUTE_NET_ATTACH_INTERFACE,HW_CPU_X86_AVX512VPCLMULQDQ,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,HW_CPU_X86_CLMUL,HW_CPU_X86_AVX512VAES,COMPUTE_ACCELERATORS,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_NET_VIF_MODEL_E1000E _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Oct  9 10:05:58 compute-1 nova_compute[162974]: 2025-10-09 10:05:58.007 2 DEBUG oslo_concurrency.processutils [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  9 10:05:58 compute-1 ceph-mon[9795]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct  9 10:05:58 compute-1 ceph-mon[9795]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1898786243' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  9 10:05:58 compute-1 nova_compute[162974]: 2025-10-09 10:05:58.343 2 DEBUG oslo_concurrency.processutils [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.336s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  9 10:05:58 compute-1 nova_compute[162974]: 2025-10-09 10:05:58.348 2 DEBUG nova.compute.provider_tree [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Inventory has not changed in ProviderTree for provider: 79aa81b0-5a5d-4643-a355-ec5461cb321a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  9 10:05:58 compute-1 nova_compute[162974]: 2025-10-09 10:05:58.359 2 DEBUG nova.scheduler.client.report [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Inventory has not changed for provider 79aa81b0-5a5d-4643-a355-ec5461cb321a based on inventory data: {'VCPU': {'total': 4, 'reserved': 0, 'min_unit': 1, 'max_unit': 4, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  9 10:05:58 compute-1 nova_compute[162974]: 2025-10-09 10:05:58.364 2 DEBUG nova.compute.resource_tracker [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  9 10:05:58 compute-1 nova_compute[162974]: 2025-10-09 10:05:58.365 2 DEBUG oslo_concurrency.lockutils [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.642s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  9 10:05:58 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:05:58 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 10:05:58 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:05:58.570 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 10:05:59 compute-1 nova_compute[162974]: 2025-10-09 10:05:59.355 2 DEBUG oslo_service.periodic_task [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  9 10:05:59 compute-1 nova_compute[162974]: 2025-10-09 10:05:59.356 2 DEBUG oslo_service.periodic_task [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  9 10:05:59 compute-1 nova_compute[162974]: 2025-10-09 10:05:59.356 2 DEBUG oslo_service.periodic_task [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  9 10:05:59 compute-1 nova_compute[162974]: 2025-10-09 10:05:59.357 2 DEBUG oslo_service.periodic_task [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  9 10:05:59 compute-1 nova_compute[162974]: 2025-10-09 10:05:59.357 2 DEBUG nova.compute.manager [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  9 10:05:59 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:05:59 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:05:59 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:05:59.371 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:06:00 compute-1 nova_compute[162974]: 2025-10-09 10:06:00.115 2 DEBUG oslo_service.periodic_task [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  9 10:06:00 compute-1 nova_compute[162974]: 2025-10-09 10:06:00.115 2 DEBUG nova.compute.manager [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  9 10:06:00 compute-1 nova_compute[162974]: 2025-10-09 10:06:00.115 2 DEBUG nova.compute.manager [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct  9 10:06:00 compute-1 nova_compute[162974]: 2025-10-09 10:06:00.127 2 DEBUG nova.compute.manager [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Oct  9 10:06:00 compute-1 nova_compute[162974]: 2025-10-09 10:06:00.528 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:06:00 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:06:00 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:06:00 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:06:00.573 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:06:00 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 10:06:01 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:06:01 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:06:01 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:06:01.373 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:06:01 compute-1 podman[182193]: 2025-10-09 10:06:01.509963328 +0000 UTC m=+0.038193624 container health_status dd7a5da700b8ba72ceba21d69aecf00384a339e317089fce3f8212a1183599aa (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_id=iscsid, container_name=iscsid, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct  9 10:06:02 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:06:02 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:06:02 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:06:02.575 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:06:02 compute-1 nova_compute[162974]: 2025-10-09 10:06:02.870 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:06:03 compute-1 nova_compute[162974]: 2025-10-09 10:06:03.114 2 DEBUG oslo_service.periodic_task [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  9 10:06:03 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:06:03 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:06:03 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:06:03.375 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:06:04 compute-1 systemd[1]: session-40.scope: Deactivated successfully.
Oct  9 10:06:04 compute-1 systemd[1]: session-40.scope: Consumed 2min 932ms CPU time, 728.4M memory peak, read 272.2M from disk, written 208.9M to disk.
Oct  9 10:06:04 compute-1 systemd-logind[798]: Session 40 logged out. Waiting for processes to exit.
Oct  9 10:06:04 compute-1 systemd-logind[798]: Removed session 40.
Oct  9 10:06:04 compute-1 nova_compute[162974]: 2025-10-09 10:06:04.113 2 DEBUG oslo_service.periodic_task [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  9 10:06:04 compute-1 systemd-logind[798]: New session 42 of user zuul.
Oct  9 10:06:04 compute-1 systemd[1]: Started Session 42 of User zuul.
Oct  9 10:06:04 compute-1 systemd[1]: session-42.scope: Deactivated successfully.
Oct  9 10:06:04 compute-1 systemd-logind[798]: Session 42 logged out. Waiting for processes to exit.
Oct  9 10:06:04 compute-1 systemd-logind[798]: Removed session 42.
Oct  9 10:06:04 compute-1 systemd-logind[798]: New session 43 of user zuul.
Oct  9 10:06:04 compute-1 systemd[1]: Started Session 43 of User zuul.
Oct  9 10:06:04 compute-1 systemd[1]: session-43.scope: Deactivated successfully.
Oct  9 10:06:04 compute-1 systemd-logind[798]: Session 43 logged out. Waiting for processes to exit.
Oct  9 10:06:04 compute-1 systemd-logind[798]: Removed session 43.
Oct  9 10:06:04 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:06:04 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:06:04 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:06:04.577 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:06:05 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:06:05 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:06:05 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:06:05.377 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:06:05 compute-1 nova_compute[162974]: 2025-10-09 10:06:05.531 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:06:05 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 10:06:06 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:06:06 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:06:06 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:06:06.579 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:06:07 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:06:07 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:06:07 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:06:07.379 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:06:07 compute-1 nova_compute[162974]: 2025-10-09 10:06:07.874 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:06:08 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:06:08 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:06:08 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:06:08.582 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:06:09 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:06:09 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:06:09 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:06:09.381 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:06:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 10:06:10.044 71059 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  9 10:06:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 10:06:10.045 71059 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  9 10:06:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 10:06:10.045 71059 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  9 10:06:10 compute-1 nova_compute[162974]: 2025-10-09 10:06:10.534 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:06:10 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:06:10 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:06:10 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:06:10.584 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:06:10 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 10:06:11 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:06:11 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:06:11 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:06:11.383 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:06:11 compute-1 podman[182273]: 2025-10-09 10:06:11.535196863 +0000 UTC m=+0.041850529 container health_status 5e1150c93314d5b38815fc8130e03b03cc91771cd442ce724d63cd33a7373f75 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Oct  9 10:06:11 compute-1 podman[182274]: 2025-10-09 10:06:11.538422573 +0000 UTC m=+0.044890659 container health_status a9976f6a2312783f72012e1ec9b07f14297aa62297711f47c0d257996be3427a (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, container_name=multipathd)
Oct  9 10:06:12 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:06:12 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:06:12 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:06:12.586 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:06:12 compute-1 nova_compute[162974]: 2025-10-09 10:06:12.874 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:06:13 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:06:13 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:06:13 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:06:13.385 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:06:14 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:06:14 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 10:06:14 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:06:14.588 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 10:06:14 compute-1 systemd[1]: Stopping User Manager for UID 1000...
Oct  9 10:06:14 compute-1 systemd[173355]: Activating special unit Exit the Session...
Oct  9 10:06:14 compute-1 systemd[173355]: Stopped target Main User Target.
Oct  9 10:06:14 compute-1 systemd[173355]: Stopped target Basic System.
Oct  9 10:06:14 compute-1 systemd[173355]: Stopped target Paths.
Oct  9 10:06:14 compute-1 systemd[173355]: Stopped target Sockets.
Oct  9 10:06:14 compute-1 systemd[173355]: Stopped target Timers.
Oct  9 10:06:14 compute-1 systemd[173355]: Stopped Mark boot as successful after the user session has run 2 minutes.
Oct  9 10:06:14 compute-1 systemd[173355]: Stopped Daily Cleanup of User's Temporary Directories.
Oct  9 10:06:14 compute-1 systemd[173355]: Closed D-Bus User Message Bus Socket.
Oct  9 10:06:14 compute-1 systemd[173355]: Stopped Create User's Volatile Files and Directories.
Oct  9 10:06:14 compute-1 systemd[173355]: Removed slice User Application Slice.
Oct  9 10:06:14 compute-1 systemd[173355]: Reached target Shutdown.
Oct  9 10:06:14 compute-1 systemd[173355]: Finished Exit the Session.
Oct  9 10:06:14 compute-1 systemd[173355]: Reached target Exit the Session.
Oct  9 10:06:14 compute-1 systemd[1]: user@1000.service: Deactivated successfully.
Oct  9 10:06:14 compute-1 systemd[1]: Stopped User Manager for UID 1000.
Oct  9 10:06:14 compute-1 systemd[1]: Stopping User Runtime Directory /run/user/1000...
Oct  9 10:06:14 compute-1 systemd[1]: run-user-1000.mount: Deactivated successfully.
Oct  9 10:06:14 compute-1 systemd[1]: user-runtime-dir@1000.service: Deactivated successfully.
Oct  9 10:06:14 compute-1 systemd[1]: Stopped User Runtime Directory /run/user/1000.
Oct  9 10:06:14 compute-1 systemd[1]: Removed slice User Slice of UID 1000.
Oct  9 10:06:14 compute-1 systemd[1]: user-1000.slice: Consumed 2min 1.291s CPU time, 734.1M memory peak, read 272.2M from disk, written 208.9M to disk.
Oct  9 10:06:15 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:06:15 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:06:15 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:06:15.388 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:06:15 compute-1 nova_compute[162974]: 2025-10-09 10:06:15.538 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:06:15 compute-1 radosgw[13231]: INFO: RGWReshardLock::lock found lock on reshard.0000000001 to be held by another RGW process; skipping for now
Oct  9 10:06:15 compute-1 radosgw[13231]: INFO: RGWReshardLock::lock found lock on reshard.0000000003 to be held by another RGW process; skipping for now
Oct  9 10:06:15 compute-1 radosgw[13231]: INFO: RGWReshardLock::lock found lock on reshard.0000000005 to be held by another RGW process; skipping for now
Oct  9 10:06:15 compute-1 radosgw[13231]: INFO: RGWReshardLock::lock found lock on reshard.0000000007 to be held by another RGW process; skipping for now
Oct  9 10:06:15 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 10:06:16 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:06:16 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:06:16 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:06:16.590 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:06:17 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:06:17 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 10:06:17 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:06:17.389 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 10:06:17 compute-1 nova_compute[162974]: 2025-10-09 10:06:17.876 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:06:18 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:06:18 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:06:18 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:06:18.592 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:06:19 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:06:19 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:06:19 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:06:19.392 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:06:20 compute-1 nova_compute[162974]: 2025-10-09 10:06:20.541 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:06:20 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:06:20 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:06:20 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:06:20.594 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:06:20 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 10:06:21 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:06:21 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:06:21 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:06:21.394 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:06:22 compute-1 podman[182336]: 2025-10-09 10:06:22.546248561 +0000 UTC m=+0.057499560 container health_status 36bf385830b85e085dc3ff9729118ef141f0f1a0fdc2513f7947ab6ab795421e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Oct  9 10:06:22 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:06:22 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:06:22 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:06:22.596 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:06:22 compute-1 nova_compute[162974]: 2025-10-09 10:06:22.878 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:06:23 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:06:23 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:06:23 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:06:23.396 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:06:24 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:06:24 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:06:24 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:06:24.598 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:06:25 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:06:25 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:06:25 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:06:25.396 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:06:25 compute-1 nova_compute[162974]: 2025-10-09 10:06:25.544 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:06:25 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 10:06:26 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:06:26 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:06:26 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:06:26.601 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:06:27 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:06:27 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:06:27 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:06:27.398 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:06:27 compute-1 nova_compute[162974]: 2025-10-09 10:06:27.882 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:06:28 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:06:28 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 10:06:28 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:06:28.604 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 10:06:29 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:06:29 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 10:06:29 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:06:29.400 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 10:06:30 compute-1 nova_compute[162974]: 2025-10-09 10:06:30.548 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:06:30 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:06:30 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:06:30 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:06:30.608 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:06:30 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 10:06:31 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:06:31 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:06:31 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:06:31.403 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:06:32 compute-1 podman[182364]: 2025-10-09 10:06:32.554673917 +0000 UTC m=+0.067274196 container health_status dd7a5da700b8ba72ceba21d69aecf00384a339e317089fce3f8212a1183599aa (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, container_name=iscsid, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']})
Oct  9 10:06:32 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:06:32 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:06:32 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:06:32.610 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:06:32 compute-1 nova_compute[162974]: 2025-10-09 10:06:32.883 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:06:33 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:06:33 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 10:06:33 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:06:33.404 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 10:06:34 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:06:34 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 10:06:34 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:06:34.612 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 10:06:35 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:06:35 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 10:06:35 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:06:35.406 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 10:06:35 compute-1 nova_compute[162974]: 2025-10-09 10:06:35.550 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:06:35 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 10:06:36 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:06:36 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:06:36 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:06:36.616 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:06:37 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:06:37 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000011s ======
Oct  9 10:06:37 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:06:37.408 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct  9 10:06:37 compute-1 nova_compute[162974]: 2025-10-09 10:06:37.885 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:06:38 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:06:38 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:06:38 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:06:38.619 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:06:39 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:06:39 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:06:39 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:06:39.411 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:06:40 compute-1 nova_compute[162974]: 2025-10-09 10:06:40.553 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:06:40 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:06:40 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:06:40 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:06:40.622 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:06:40 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 10:06:41 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:06:41 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 10:06:41 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:06:41.412 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 10:06:42 compute-1 podman[182411]: 2025-10-09 10:06:42.535186446 +0000 UTC m=+0.042874185 container health_status 5e1150c93314d5b38815fc8130e03b03cc91771cd442ce724d63cd33a7373f75 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Oct  9 10:06:42 compute-1 podman[182412]: 2025-10-09 10:06:42.53555009 +0000 UTC m=+0.042071371 container health_status a9976f6a2312783f72012e1ec9b07f14297aa62297711f47c0d257996be3427a (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct  9 10:06:42 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:06:42 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:06:42 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:06:42.625 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:06:42 compute-1 nova_compute[162974]: 2025-10-09 10:06:42.887 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:06:43 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:06:43 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:06:43 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:06:43.414 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:06:44 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:06:44 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:06:44 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:06:44.627 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:06:45 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:06:45 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 10:06:45 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:06:45.415 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 10:06:45 compute-1 nova_compute[162974]: 2025-10-09 10:06:45.557 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:06:45 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 10:06:46 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:06:46 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000011s ======
Oct  9 10:06:46 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:06:46.629 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct  9 10:06:47 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:06:47 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:06:47 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:06:47.417 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:06:47 compute-1 nova_compute[162974]: 2025-10-09 10:06:47.890 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:06:48 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:06:48 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:06:48 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:06:48.633 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:06:49 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:06:49 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:06:49 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:06:49.420 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:06:50 compute-1 nova_compute[162974]: 2025-10-09 10:06:50.561 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:06:50 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:06:50 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:06:50 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:06:50.635 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:06:50 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 10:06:51 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:06:51 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:06:51 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:06:51.422 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:06:52 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:06:52 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:06:52 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:06:52.637 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:06:52 compute-1 nova_compute[162974]: 2025-10-09 10:06:52.891 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:06:53 compute-1 podman[182473]: 2025-10-09 10:06:53.050225299 +0000 UTC m=+0.060120028 container health_status 36bf385830b85e085dc3ff9729118ef141f0f1a0fdc2513f7947ab6ab795421e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.vendor=CentOS, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
Oct  9 10:06:53 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:06:53 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:06:53 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:06:53.424 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:06:54 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 10:06:54 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 10:06:54 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 10:06:54 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 10:06:54 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 10:06:54 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 10:06:54 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 10:06:54 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 10:06:54 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 10:06:54 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 10:06:54 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:06:54 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000011s ======
Oct  9 10:06:54 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:06:54.641 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct  9 10:06:55 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:06:55 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:06:55 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:06:55.428 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:06:55 compute-1 nova_compute[162974]: 2025-10-09 10:06:55.565 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:06:55 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 10:06:56 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 10:06:56 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 10:06:56 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  9 10:06:56 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 10:06:56 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 10:06:56 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  9 10:06:56 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:06:56 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000011s ======
Oct  9 10:06:56 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:06:56.644 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct  9 10:06:57 compute-1 nova_compute[162974]: 2025-10-09 10:06:57.123 2 DEBUG oslo_service.periodic_task [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  9 10:06:57 compute-1 nova_compute[162974]: 2025-10-09 10:06:57.139 2 DEBUG oslo_concurrency.lockutils [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  9 10:06:57 compute-1 nova_compute[162974]: 2025-10-09 10:06:57.139 2 DEBUG oslo_concurrency.lockutils [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  9 10:06:57 compute-1 nova_compute[162974]: 2025-10-09 10:06:57.139 2 DEBUG oslo_concurrency.lockutils [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  9 10:06:57 compute-1 nova_compute[162974]: 2025-10-09 10:06:57.139 2 DEBUG nova.compute.resource_tracker [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  9 10:06:57 compute-1 nova_compute[162974]: 2025-10-09 10:06:57.140 2 DEBUG oslo_concurrency.processutils [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  9 10:06:57 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:06:57 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:06:57 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:06:57.429 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:06:57 compute-1 ceph-mon[9795]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct  9 10:06:57 compute-1 ceph-mon[9795]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/703560270' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  9 10:06:57 compute-1 nova_compute[162974]: 2025-10-09 10:06:57.485 2 DEBUG oslo_concurrency.processutils [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.345s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  9 10:06:57 compute-1 nova_compute[162974]: 2025-10-09 10:06:57.679 2 WARNING nova.virt.libvirt.driver [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  9 10:06:57 compute-1 nova_compute[162974]: 2025-10-09 10:06:57.680 2 DEBUG nova.compute.resource_tracker [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4960MB free_disk=59.988277435302734GB free_vcpus=4 pci_devices=[{"dev_id": "pci_0000_00_03_3", "address": "0000:00:03.3", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_0", "address": "0000:00:1f.0", "product_id": "2918", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2918", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_4", "address": "0000:00:03.4", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_7", "address": "0000:00:03.7", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_3", "address": "0000:00:02.3", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_7", "address": "0000:00:02.7", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_05_00_0", "address": "0000:05:00.0", "product_id": "1045", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1045", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_2", "address": "0000:00:03.2", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_6", "address": "0000:00:02.6", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": 
"label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_3", "address": "0000:00:1f.3", "product_id": "2930", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2930", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_2", "address": "0000:00:1f.2", "product_id": "2922", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2922", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_03_00_0", "address": "0000:03:00.0", "product_id": "1041", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1041", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_2", "address": "0000:00:02.2", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_5", "address": "0000:00:02.5", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_04_00_0", "address": "0000:04:00.0", "product_id": "1042", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1042", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_5", "address": "0000:00:03.5", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_07_00_0", "address": "0000:07:00.0", "product_id": "1041", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1041", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_1", "address": "0000:00:03.1", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": 
"0000:00:00.0", "product_id": "29c0", "vendor_id": "8086", "numa_node": null, "label": "label_8086_29c0", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_06_00_0", "address": "0000:06:00.0", "product_id": "1044", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1044", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_02_01_0", "address": "0000:02:01.0", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_6", "address": "0000:00:03.6", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_1", "address": "0000:00:02.1", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_4", "address": "0000:00:02.4", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_01_00_0", "address": "0000:01:00.0", "product_id": "000e", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000e", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  9 10:06:57 compute-1 nova_compute[162974]: 2025-10-09 10:06:57.680 2 DEBUG oslo_concurrency.lockutils [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  9 10:06:57 compute-1 nova_compute[162974]: 2025-10-09 10:06:57.681 2 DEBUG oslo_concurrency.lockutils [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  9 10:06:57 compute-1 nova_compute[162974]: 2025-10-09 10:06:57.780 2 DEBUG nova.compute.resource_tracker [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Total usable vcpus: 4, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  9 10:06:57 compute-1 nova_compute[162974]: 2025-10-09 10:06:57.780 2 DEBUG nova.compute.resource_tracker [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=4 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  9 10:06:57 compute-1 nova_compute[162974]: 2025-10-09 10:06:57.792 2 DEBUG oslo_concurrency.processutils [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  9 10:06:57 compute-1 nova_compute[162974]: 2025-10-09 10:06:57.893 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:06:58 compute-1 ceph-mon[9795]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct  9 10:06:58 compute-1 ceph-mon[9795]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1093705681' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  9 10:06:58 compute-1 nova_compute[162974]: 2025-10-09 10:06:58.126 2 DEBUG oslo_concurrency.processutils [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.334s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  9 10:06:58 compute-1 nova_compute[162974]: 2025-10-09 10:06:58.129 2 DEBUG nova.compute.provider_tree [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Inventory has not changed in ProviderTree for provider: 79aa81b0-5a5d-4643-a355-ec5461cb321a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  9 10:06:58 compute-1 nova_compute[162974]: 2025-10-09 10:06:58.144 2 DEBUG nova.scheduler.client.report [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Inventory has not changed for provider 79aa81b0-5a5d-4643-a355-ec5461cb321a based on inventory data: {'VCPU': {'total': 4, 'reserved': 0, 'min_unit': 1, 'max_unit': 4, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  9 10:06:58 compute-1 nova_compute[162974]: 2025-10-09 10:06:58.146 2 DEBUG nova.compute.resource_tracker [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  9 10:06:58 compute-1 nova_compute[162974]: 2025-10-09 10:06:58.146 2 DEBUG oslo_concurrency.lockutils [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.465s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  9 10:06:58 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:06:58 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:06:58 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:06:58.647 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:06:59 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 10:06:59 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 10:06:59 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:06:59 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:06:59 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:06:59.432 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:07:00 compute-1 nova_compute[162974]: 2025-10-09 10:07:00.138 2 DEBUG oslo_service.periodic_task [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  9 10:07:00 compute-1 nova_compute[162974]: 2025-10-09 10:07:00.151 2 DEBUG oslo_service.periodic_task [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  9 10:07:00 compute-1 nova_compute[162974]: 2025-10-09 10:07:00.151 2 DEBUG oslo_service.periodic_task [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  9 10:07:00 compute-1 nova_compute[162974]: 2025-10-09 10:07:00.151 2 DEBUG oslo_service.periodic_task [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  9 10:07:00 compute-1 nova_compute[162974]: 2025-10-09 10:07:00.151 2 DEBUG oslo_service.periodic_task [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  9 10:07:00 compute-1 nova_compute[162974]: 2025-10-09 10:07:00.151 2 DEBUG oslo_service.periodic_task [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  9 10:07:00 compute-1 nova_compute[162974]: 2025-10-09 10:07:00.151 2 DEBUG nova.compute.manager [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  9 10:07:00 compute-1 nova_compute[162974]: 2025-10-09 10:07:00.568 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:07:00 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:07:00 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:07:00 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:07:00.650 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:07:00 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 10:07:01 compute-1 nova_compute[162974]: 2025-10-09 10:07:01.123 2 DEBUG oslo_service.periodic_task [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  9 10:07:01 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:07:01 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:07:01 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:07:01.433 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:07:01 compute-1 ceph-mon[9795]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #55. Immutable memtables: 0.
Oct  9 10:07:01 compute-1 ceph-mon[9795]: rocksdb: (Original Log Time 2025/10/09-10:07:01.507222) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct  9 10:07:01 compute-1 ceph-mon[9795]: rocksdb: [db/flush_job.cc:856] [default] [JOB 31] Flushing memtable with next log file: 55
Oct  9 10:07:01 compute-1 ceph-mon[9795]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760004421507253, "job": 31, "event": "flush_started", "num_memtables": 1, "num_entries": 2311, "num_deletes": 259, "total_data_size": 5769178, "memory_usage": 5868800, "flush_reason": "Manual Compaction"}
Oct  9 10:07:01 compute-1 ceph-mon[9795]: rocksdb: [db/flush_job.cc:885] [default] [JOB 31] Level-0 flush table #56: started
Oct  9 10:07:01 compute-1 ceph-mon[9795]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760004421515649, "cf_name": "default", "job": 31, "event": "table_file_creation", "file_number": 56, "file_size": 3653044, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 28293, "largest_seqno": 30599, "table_properties": {"data_size": 3642593, "index_size": 6305, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 3077, "raw_key_size": 26278, "raw_average_key_size": 21, "raw_value_size": 3620093, "raw_average_value_size": 3006, "num_data_blocks": 273, "num_entries": 1204, "num_filter_entries": 1204, "num_deletions": 259, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760004276, "oldest_key_time": 1760004276, "file_creation_time": 1760004421, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "94a5d839-0858-4e7b-94a4-0a54b15338db", "db_session_id": "M9CZJU0HKVV71NP1SGV8", "orig_file_number": 56, "seqno_to_time_mapping": "N/A"}}
Oct  9 10:07:01 compute-1 ceph-mon[9795]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 31] Flush lasted 8452 microseconds, and 6895 cpu microseconds.
Oct  9 10:07:01 compute-1 ceph-mon[9795]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  9 10:07:01 compute-1 ceph-mon[9795]: rocksdb: (Original Log Time 2025/10/09-10:07:01.515673) [db/flush_job.cc:967] [default] [JOB 31] Level-0 flush table #56: 3653044 bytes OK
Oct  9 10:07:01 compute-1 ceph-mon[9795]: rocksdb: (Original Log Time 2025/10/09-10:07:01.515710) [db/memtable_list.cc:519] [default] Level-0 commit table #56 started
Oct  9 10:07:01 compute-1 ceph-mon[9795]: rocksdb: (Original Log Time 2025/10/09-10:07:01.516041) [db/memtable_list.cc:722] [default] Level-0 commit table #56: memtable #1 done
Oct  9 10:07:01 compute-1 ceph-mon[9795]: rocksdb: (Original Log Time 2025/10/09-10:07:01.516051) EVENT_LOG_v1 {"time_micros": 1760004421516048, "job": 31, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct  9 10:07:01 compute-1 ceph-mon[9795]: rocksdb: (Original Log Time 2025/10/09-10:07:01.516062) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct  9 10:07:01 compute-1 ceph-mon[9795]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 31] Try to delete WAL files size 5757734, prev total WAL file size 5757734, number of live WAL files 2.
Oct  9 10:07:01 compute-1 ceph-mon[9795]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000052.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  9 10:07:01 compute-1 ceph-mon[9795]: rocksdb: (Original Log Time 2025/10/09-10:07:01.517142) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D00353034' seq:72057594037927935, type:22 .. '6C6F676D00373539' seq:0, type:0; will stop at (end)
Oct  9 10:07:01 compute-1 ceph-mon[9795]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 32] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct  9 10:07:01 compute-1 ceph-mon[9795]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 31 Base level 0, inputs: [56(3567KB)], [54(13MB)]
Oct  9 10:07:01 compute-1 ceph-mon[9795]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760004421517181, "job": 32, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [56], "files_L6": [54], "score": -1, "input_data_size": 17780134, "oldest_snapshot_seqno": -1}
Oct  9 10:07:01 compute-1 ceph-mon[9795]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 32] Generated table #57: 6469 keys, 17623957 bytes, temperature: kUnknown
Oct  9 10:07:01 compute-1 ceph-mon[9795]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760004421555199, "cf_name": "default", "job": 32, "event": "table_file_creation", "file_number": 57, "file_size": 17623957, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 17577246, "index_size": 29449, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 16197, "raw_key_size": 164696, "raw_average_key_size": 25, "raw_value_size": 17457123, "raw_average_value_size": 2698, "num_data_blocks": 1206, "num_entries": 6469, "num_filter_entries": 6469, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760002515, "oldest_key_time": 0, "file_creation_time": 1760004421, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "94a5d839-0858-4e7b-94a4-0a54b15338db", "db_session_id": "M9CZJU0HKVV71NP1SGV8", "orig_file_number": 57, "seqno_to_time_mapping": "N/A"}}
Oct  9 10:07:01 compute-1 ceph-mon[9795]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  9 10:07:01 compute-1 ceph-mon[9795]: rocksdb: (Original Log Time 2025/10/09-10:07:01.555414) [db/compaction/compaction_job.cc:1663] [default] [JOB 32] Compacted 1@0 + 1@6 files to L6 => 17623957 bytes
Oct  9 10:07:01 compute-1 ceph-mon[9795]: rocksdb: (Original Log Time 2025/10/09-10:07:01.563275) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 468.4 rd, 464.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.5, 13.5 +0.0 blob) out(16.8 +0.0 blob), read-write-amplify(9.7) write-amplify(4.8) OK, records in: 7005, records dropped: 536 output_compression: NoCompression
Oct  9 10:07:01 compute-1 ceph-mon[9795]: rocksdb: (Original Log Time 2025/10/09-10:07:01.563291) EVENT_LOG_v1 {"time_micros": 1760004421563283, "job": 32, "event": "compaction_finished", "compaction_time_micros": 37958, "compaction_time_cpu_micros": 24532, "output_level": 6, "num_output_files": 1, "total_output_size": 17623957, "num_input_records": 7005, "num_output_records": 6469, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct  9 10:07:01 compute-1 ceph-mon[9795]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000056.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  9 10:07:01 compute-1 ceph-mon[9795]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760004421564190, "job": 32, "event": "table_file_deletion", "file_number": 56}
Oct  9 10:07:01 compute-1 ceph-mon[9795]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000054.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  9 10:07:01 compute-1 ceph-mon[9795]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760004421566164, "job": 32, "event": "table_file_deletion", "file_number": 54}
Oct  9 10:07:01 compute-1 ceph-mon[9795]: rocksdb: (Original Log Time 2025/10/09-10:07:01.517106) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  9 10:07:01 compute-1 ceph-mon[9795]: rocksdb: (Original Log Time 2025/10/09-10:07:01.566239) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  9 10:07:01 compute-1 ceph-mon[9795]: rocksdb: (Original Log Time 2025/10/09-10:07:01.566242) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  9 10:07:01 compute-1 ceph-mon[9795]: rocksdb: (Original Log Time 2025/10/09-10:07:01.566243) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  9 10:07:01 compute-1 ceph-mon[9795]: rocksdb: (Original Log Time 2025/10/09-10:07:01.566244) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  9 10:07:01 compute-1 ceph-mon[9795]: rocksdb: (Original Log Time 2025/10/09-10:07:01.566245) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  9 10:07:02 compute-1 nova_compute[162974]: 2025-10-09 10:07:02.114 2 DEBUG oslo_service.periodic_task [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  9 10:07:02 compute-1 nova_compute[162974]: 2025-10-09 10:07:02.114 2 DEBUG nova.compute.manager [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  9 10:07:02 compute-1 nova_compute[162974]: 2025-10-09 10:07:02.115 2 DEBUG nova.compute.manager [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct  9 10:07:02 compute-1 nova_compute[162974]: 2025-10-09 10:07:02.126 2 DEBUG nova.compute.manager [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Oct  9 10:07:02 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:07:02 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:07:02 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:07:02.652 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:07:02 compute-1 nova_compute[162974]: 2025-10-09 10:07:02.893 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:07:03 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:07:03 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 10:07:03 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:07:03.435 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 10:07:03 compute-1 podman[182786]: 2025-10-09 10:07:03.535156632 +0000 UTC m=+0.041016042 container health_status dd7a5da700b8ba72ceba21d69aecf00384a339e317089fce3f8212a1183599aa (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=iscsid, container_name=iscsid)
Oct  9 10:07:04 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:07:04 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:07:04 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:07:04.654 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:07:05 compute-1 nova_compute[162974]: 2025-10-09 10:07:05.113 2 DEBUG oslo_service.periodic_task [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  9 10:07:05 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:07:05 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:07:05 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:07:05.438 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:07:05 compute-1 nova_compute[162974]: 2025-10-09 10:07:05.572 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:07:05 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 10:07:06 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:07:06 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 10:07:06 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:07:06.657 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 10:07:07 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:07:07 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:07:07 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:07:07.439 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:07:07 compute-1 nova_compute[162974]: 2025-10-09 10:07:07.896 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:07:08 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:07:08 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 10:07:08 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:07:08.659 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 10:07:09 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:07:09 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 10:07:09 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:07:09.441 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 10:07:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 10:07:10.045 71059 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  9 10:07:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 10:07:10.045 71059 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  9 10:07:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 10:07:10.045 71059 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  9 10:07:10 compute-1 nova_compute[162974]: 2025-10-09 10:07:10.576 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:07:10 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:07:10 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:07:10 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:07:10.662 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:07:10 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 10:07:11 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:07:11 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000011s ======
Oct  9 10:07:11 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:07:11.443 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct  9 10:07:11 compute-1 ceph-mon[9795]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Oct  9 10:07:11 compute-1 ceph-mon[9795]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3653895063' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  9 10:07:11 compute-1 ceph-mon[9795]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Oct  9 10:07:11 compute-1 ceph-mon[9795]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3653895063' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  9 10:07:12 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:07:12 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:07:12 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:07:12.665 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:07:12 compute-1 nova_compute[162974]: 2025-10-09 10:07:12.899 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:07:13 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:07:13 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000011s ======
Oct  9 10:07:13 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:07:13.445 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct  9 10:07:13 compute-1 podman[182809]: 2025-10-09 10:07:13.527739021 +0000 UTC m=+0.037793405 container health_status a9976f6a2312783f72012e1ec9b07f14297aa62297711f47c0d257996be3427a (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, container_name=multipathd, org.label-schema.build-date=20251001, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct  9 10:07:13 compute-1 podman[182808]: 2025-10-09 10:07:13.554298535 +0000 UTC m=+0.065501635 container health_status 5e1150c93314d5b38815fc8130e03b03cc91771cd442ce724d63cd33a7373f75 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=ovn_metadata_agent, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  9 10:07:14 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:07:14 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:07:14 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:07:14.667 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:07:15 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:07:15 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:07:15 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:07:15.447 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:07:15 compute-1 nova_compute[162974]: 2025-10-09 10:07:15.579 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:07:15 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 10:07:16 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:07:16 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 10:07:16 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:07:16.669 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 10:07:17 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:07:17 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:07:17 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:07:17.450 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:07:17 compute-1 nova_compute[162974]: 2025-10-09 10:07:17.899 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:07:18 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:07:18 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:07:18 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:07:18.673 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:07:19 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:07:19 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:07:19 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:07:19.451 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:07:20 compute-1 nova_compute[162974]: 2025-10-09 10:07:20.580 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:07:20 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:07:20 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:07:20 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:07:20.676 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:07:20 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 10:07:21 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:07:21 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:07:21 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:07:21.453 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:07:22 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:07:22 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000011s ======
Oct  9 10:07:22 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:07:22.678 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct  9 10:07:22 compute-1 nova_compute[162974]: 2025-10-09 10:07:22.900 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:07:23 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:07:23 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000011s ======
Oct  9 10:07:23 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:07:23.455 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct  9 10:07:23 compute-1 podman[182872]: 2025-10-09 10:07:23.537438451 +0000 UTC m=+0.050503218 container health_status 36bf385830b85e085dc3ff9729118ef141f0f1a0fdc2513f7947ab6ab795421e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3)
Oct  9 10:07:24 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:07:24 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:07:24 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:07:24.682 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:07:25 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:07:25 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:07:25 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:07:25.457 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:07:25 compute-1 nova_compute[162974]: 2025-10-09 10:07:25.583 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:07:25 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 10:07:26 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:07:26 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:07:26 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:07:26.685 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:07:27 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:07:27 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:07:27 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:07:27.460 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:07:27 compute-1 nova_compute[162974]: 2025-10-09 10:07:27.901 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:07:28 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:07:28 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000011s ======
Oct  9 10:07:28 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:07:28.688 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct  9 10:07:29 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:07:29 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000011s ======
Oct  9 10:07:29 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:07:29.461 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct  9 10:07:30 compute-1 nova_compute[162974]: 2025-10-09 10:07:30.586 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:07:30 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:07:30 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:07:30 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:07:30.692 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:07:30 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 10:07:31 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:07:31 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:07:31 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:07:31.464 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:07:32 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:07:32 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 10:07:32 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:07:32.694 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 10:07:32 compute-1 nova_compute[162974]: 2025-10-09 10:07:32.902 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:07:33 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:07:33 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:07:33 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:07:33.466 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:07:33 compute-1 podman[182924]: 2025-10-09 10:07:33.937339538 +0000 UTC m=+0.041592718 container health_status dd7a5da700b8ba72ceba21d69aecf00384a339e317089fce3f8212a1183599aa (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=iscsid, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Oct  9 10:07:34 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:07:34 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 10:07:34 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:07:34.697 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 10:07:35 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:07:35 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 10:07:35 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:07:35.468 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 10:07:35 compute-1 nova_compute[162974]: 2025-10-09 10:07:35.589 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:07:35 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 10:07:36 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:07:36 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:07:36 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:07:36.701 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:07:37 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:07:37 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:07:37 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:07:37.471 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:07:37 compute-1 nova_compute[162974]: 2025-10-09 10:07:37.904 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:07:38 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:07:38 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 10:07:38 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:07:38.703 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 10:07:39 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:07:39 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:07:39 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:07:39.473 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:07:40 compute-1 nova_compute[162974]: 2025-10-09 10:07:40.592 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:07:40 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:07:40 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 10:07:40 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:07:40.706 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 10:07:40 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 10:07:41 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:07:41 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 10:07:41 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:07:41.474 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 10:07:42 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:07:42 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000011s ======
Oct  9 10:07:42 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:07:42.710 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct  9 10:07:42 compute-1 nova_compute[162974]: 2025-10-09 10:07:42.905 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:07:43 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:07:43 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000011s ======
Oct  9 10:07:43 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:07:43.476 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct  9 10:07:44 compute-1 podman[182947]: 2025-10-09 10:07:44.536163107 +0000 UTC m=+0.039279658 container health_status a9976f6a2312783f72012e1ec9b07f14297aa62297711f47c0d257996be3427a (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=multipathd, org.label-schema.vendor=CentOS)
Oct  9 10:07:44 compute-1 podman[182946]: 2025-10-09 10:07:44.536225113 +0000 UTC m=+0.038577514 container health_status 5e1150c93314d5b38815fc8130e03b03cc91771cd442ce724d63cd33a7373f75 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, tcib_managed=true, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Oct  9 10:07:44 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:07:44 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 10:07:44 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:07:44.713 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 10:07:45 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:07:45 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:07:45 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:07:45.479 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:07:45 compute-1 nova_compute[162974]: 2025-10-09 10:07:45.594 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:07:45 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 10:07:46 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:07:46 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 10:07:46 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:07:46.716 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 10:07:47 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:07:47 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 10:07:47 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:07:47.480 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 10:07:47 compute-1 nova_compute[162974]: 2025-10-09 10:07:47.907 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:07:48 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:07:48 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:07:48 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:07:48.720 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:07:48 compute-1 ceph-mon[9795]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #58. Immutable memtables: 0.
Oct  9 10:07:48 compute-1 ceph-mon[9795]: rocksdb: (Original Log Time 2025/10/09-10:07:48.952925) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct  9 10:07:48 compute-1 ceph-mon[9795]: rocksdb: [db/flush_job.cc:856] [default] [JOB 33] Flushing memtable with next log file: 58
Oct  9 10:07:48 compute-1 ceph-mon[9795]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760004468952989, "job": 33, "event": "flush_started", "num_memtables": 1, "num_entries": 660, "num_deletes": 251, "total_data_size": 1270763, "memory_usage": 1283312, "flush_reason": "Manual Compaction"}
Oct  9 10:07:48 compute-1 ceph-mon[9795]: rocksdb: [db/flush_job.cc:885] [default] [JOB 33] Level-0 flush table #59: started
Oct  9 10:07:48 compute-1 ceph-mon[9795]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760004468956139, "cf_name": "default", "job": 33, "event": "table_file_creation", "file_number": 59, "file_size": 836219, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 30604, "largest_seqno": 31259, "table_properties": {"data_size": 832917, "index_size": 1210, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1029, "raw_key_size": 7395, "raw_average_key_size": 19, "raw_value_size": 826465, "raw_average_value_size": 2124, "num_data_blocks": 55, "num_entries": 389, "num_filter_entries": 389, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760004422, "oldest_key_time": 1760004422, "file_creation_time": 1760004468, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "94a5d839-0858-4e7b-94a4-0a54b15338db", "db_session_id": "M9CZJU0HKVV71NP1SGV8", "orig_file_number": 59, "seqno_to_time_mapping": "N/A"}}
Oct  9 10:07:48 compute-1 ceph-mon[9795]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 33] Flush lasted 3223 microseconds, and 2278 cpu microseconds.
Oct  9 10:07:48 compute-1 ceph-mon[9795]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  9 10:07:48 compute-1 ceph-mon[9795]: rocksdb: (Original Log Time 2025/10/09-10:07:48.956159) [db/flush_job.cc:967] [default] [JOB 33] Level-0 flush table #59: 836219 bytes OK
Oct  9 10:07:48 compute-1 ceph-mon[9795]: rocksdb: (Original Log Time 2025/10/09-10:07:48.956171) [db/memtable_list.cc:519] [default] Level-0 commit table #59 started
Oct  9 10:07:48 compute-1 ceph-mon[9795]: rocksdb: (Original Log Time 2025/10/09-10:07:48.956864) [db/memtable_list.cc:722] [default] Level-0 commit table #59: memtable #1 done
Oct  9 10:07:48 compute-1 ceph-mon[9795]: rocksdb: (Original Log Time 2025/10/09-10:07:48.956874) EVENT_LOG_v1 {"time_micros": 1760004468956871, "job": 33, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct  9 10:07:48 compute-1 ceph-mon[9795]: rocksdb: (Original Log Time 2025/10/09-10:07:48.956886) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct  9 10:07:48 compute-1 ceph-mon[9795]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 33] Try to delete WAL files size 1267152, prev total WAL file size 1267152, number of live WAL files 2.
Oct  9 10:07:48 compute-1 ceph-mon[9795]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000055.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  9 10:07:48 compute-1 ceph-mon[9795]: rocksdb: (Original Log Time 2025/10/09-10:07:48.957193) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730032323539' seq:72057594037927935, type:22 .. '7061786F730032353131' seq:0, type:0; will stop at (end)
Oct  9 10:07:48 compute-1 ceph-mon[9795]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 34] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct  9 10:07:48 compute-1 ceph-mon[9795]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 33 Base level 0, inputs: [59(816KB)], [57(16MB)]
Oct  9 10:07:48 compute-1 ceph-mon[9795]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760004468957218, "job": 34, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [59], "files_L6": [57], "score": -1, "input_data_size": 18460176, "oldest_snapshot_seqno": -1}
Oct  9 10:07:48 compute-1 ceph-mon[9795]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 34] Generated table #60: 6347 keys, 16355035 bytes, temperature: kUnknown
Oct  9 10:07:48 compute-1 ceph-mon[9795]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760004468995380, "cf_name": "default", "job": 34, "event": "table_file_creation", "file_number": 60, "file_size": 16355035, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 16310123, "index_size": 27970, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 15877, "raw_key_size": 162813, "raw_average_key_size": 25, "raw_value_size": 16193121, "raw_average_value_size": 2551, "num_data_blocks": 1141, "num_entries": 6347, "num_filter_entries": 6347, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760002515, "oldest_key_time": 0, "file_creation_time": 1760004468, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "94a5d839-0858-4e7b-94a4-0a54b15338db", "db_session_id": "M9CZJU0HKVV71NP1SGV8", "orig_file_number": 60, "seqno_to_time_mapping": "N/A"}}
Oct  9 10:07:48 compute-1 ceph-mon[9795]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  9 10:07:49 compute-1 ceph-mon[9795]: rocksdb: (Original Log Time 2025/10/09-10:07:48.995519) [db/compaction/compaction_job.cc:1663] [default] [JOB 34] Compacted 1@0 + 1@6 files to L6 => 16355035 bytes
Oct  9 10:07:49 compute-1 ceph-mon[9795]: rocksdb: (Original Log Time 2025/10/09-10:07:49.001625) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 483.2 rd, 428.1 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.8, 16.8 +0.0 blob) out(15.6 +0.0 blob), read-write-amplify(41.6) write-amplify(19.6) OK, records in: 6858, records dropped: 511 output_compression: NoCompression
Oct  9 10:07:49 compute-1 ceph-mon[9795]: rocksdb: (Original Log Time 2025/10/09-10:07:49.001638) EVENT_LOG_v1 {"time_micros": 1760004469001633, "job": 34, "event": "compaction_finished", "compaction_time_micros": 38204, "compaction_time_cpu_micros": 23187, "output_level": 6, "num_output_files": 1, "total_output_size": 16355035, "num_input_records": 6858, "num_output_records": 6347, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct  9 10:07:49 compute-1 ceph-mon[9795]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000059.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  9 10:07:49 compute-1 ceph-mon[9795]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760004469001809, "job": 34, "event": "table_file_deletion", "file_number": 59}
Oct  9 10:07:49 compute-1 ceph-mon[9795]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000057.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  9 10:07:49 compute-1 ceph-mon[9795]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760004469003561, "job": 34, "event": "table_file_deletion", "file_number": 57}
Oct  9 10:07:49 compute-1 ceph-mon[9795]: rocksdb: (Original Log Time 2025/10/09-10:07:48.957158) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  9 10:07:49 compute-1 ceph-mon[9795]: rocksdb: (Original Log Time 2025/10/09-10:07:49.003584) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  9 10:07:49 compute-1 ceph-mon[9795]: rocksdb: (Original Log Time 2025/10/09-10:07:49.003587) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  9 10:07:49 compute-1 ceph-mon[9795]: rocksdb: (Original Log Time 2025/10/09-10:07:49.003588) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  9 10:07:49 compute-1 ceph-mon[9795]: rocksdb: (Original Log Time 2025/10/09-10:07:49.003589) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  9 10:07:49 compute-1 ceph-mon[9795]: rocksdb: (Original Log Time 2025/10/09-10:07:49.003590) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  9 10:07:49 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:07:49 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:07:49 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:07:49.483 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:07:50 compute-1 nova_compute[162974]: 2025-10-09 10:07:50.597 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:07:50 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:07:50 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:07:50 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:07:50.723 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:07:50 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 10:07:51 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:07:51 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:07:51 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:07:51.486 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:07:52 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:07:52 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:07:52 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:07:52.725 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:07:52 compute-1 nova_compute[162974]: 2025-10-09 10:07:52.907 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:07:53 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:07:53 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:07:53 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:07:53.488 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:07:54 compute-1 podman[183009]: 2025-10-09 10:07:54.02951601 +0000 UTC m=+0.061431852 container health_status 36bf385830b85e085dc3ff9729118ef141f0f1a0fdc2513f7947ab6ab795421e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true)
Oct  9 10:07:54 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:07:54 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000011s ======
Oct  9 10:07:54 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:07:54.727 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct  9 10:07:55 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:07:55 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:07:55 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:07:55.490 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:07:55 compute-1 nova_compute[162974]: 2025-10-09 10:07:55.601 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:07:55 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 10:07:56 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:07:56 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:07:56 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:07:56.731 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:07:57 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:07:57 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:07:57 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:07:57.493 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:07:57 compute-1 nova_compute[162974]: 2025-10-09 10:07:57.910 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:07:58 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:07:58 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:07:58 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:07:58.733 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:07:59 compute-1 nova_compute[162974]: 2025-10-09 10:07:59.114 2 DEBUG oslo_service.periodic_task [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  9 10:07:59 compute-1 nova_compute[162974]: 2025-10-09 10:07:59.115 2 DEBUG oslo_service.periodic_task [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  9 10:07:59 compute-1 nova_compute[162974]: 2025-10-09 10:07:59.138 2 DEBUG oslo_concurrency.lockutils [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  9 10:07:59 compute-1 nova_compute[162974]: 2025-10-09 10:07:59.139 2 DEBUG oslo_concurrency.lockutils [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  9 10:07:59 compute-1 nova_compute[162974]: 2025-10-09 10:07:59.139 2 DEBUG oslo_concurrency.lockutils [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  9 10:07:59 compute-1 nova_compute[162974]: 2025-10-09 10:07:59.139 2 DEBUG nova.compute.resource_tracker [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  9 10:07:59 compute-1 nova_compute[162974]: 2025-10-09 10:07:59.139 2 DEBUG oslo_concurrency.processutils [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  9 10:07:59 compute-1 ceph-mon[9795]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct  9 10:07:59 compute-1 ceph-mon[9795]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/273661975' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  9 10:07:59 compute-1 nova_compute[162974]: 2025-10-09 10:07:59.480 2 DEBUG oslo_concurrency.processutils [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.341s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  9 10:07:59 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:07:59 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 10:07:59 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:07:59.494 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 10:07:59 compute-1 nova_compute[162974]: 2025-10-09 10:07:59.680 2 WARNING nova.virt.libvirt.driver [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  9 10:07:59 compute-1 nova_compute[162974]: 2025-10-09 10:07:59.681 2 DEBUG nova.compute.resource_tracker [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4983MB free_disk=59.988277435302734GB free_vcpus=4 pci_devices=[{"dev_id": "pci_0000_00_03_3", "address": "0000:00:03.3", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_0", "address": "0000:00:1f.0", "product_id": "2918", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2918", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_4", "address": "0000:00:03.4", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_7", "address": "0000:00:03.7", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_3", "address": "0000:00:02.3", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_7", "address": "0000:00:02.7", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_05_00_0", "address": "0000:05:00.0", "product_id": "1045", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1045", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_2", "address": "0000:00:03.2", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_6", "address": "0000:00:02.6", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": 
"label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_3", "address": "0000:00:1f.3", "product_id": "2930", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2930", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_2", "address": "0000:00:1f.2", "product_id": "2922", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2922", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_03_00_0", "address": "0000:03:00.0", "product_id": "1041", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1041", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_2", "address": "0000:00:02.2", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_5", "address": "0000:00:02.5", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_04_00_0", "address": "0000:04:00.0", "product_id": "1042", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1042", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_5", "address": "0000:00:03.5", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_07_00_0", "address": "0000:07:00.0", "product_id": "1041", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1041", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_1", "address": "0000:00:03.1", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": 
"0000:00:00.0", "product_id": "29c0", "vendor_id": "8086", "numa_node": null, "label": "label_8086_29c0", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_06_00_0", "address": "0000:06:00.0", "product_id": "1044", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1044", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_02_01_0", "address": "0000:02:01.0", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_6", "address": "0000:00:03.6", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_1", "address": "0000:00:02.1", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_4", "address": "0000:00:02.4", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_01_00_0", "address": "0000:01:00.0", "product_id": "000e", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000e", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  9 10:07:59 compute-1 nova_compute[162974]: 2025-10-09 10:07:59.681 2 DEBUG oslo_concurrency.lockutils [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  9 10:07:59 compute-1 nova_compute[162974]: 2025-10-09 10:07:59.681 2 DEBUG oslo_concurrency.lockutils [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  9 10:07:59 compute-1 nova_compute[162974]: 2025-10-09 10:07:59.726 2 DEBUG nova.compute.resource_tracker [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Total usable vcpus: 4, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  9 10:07:59 compute-1 nova_compute[162974]: 2025-10-09 10:07:59.727 2 DEBUG nova.compute.resource_tracker [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=4 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  9 10:07:59 compute-1 nova_compute[162974]: 2025-10-09 10:07:59.738 2 DEBUG oslo_concurrency.processutils [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  9 10:08:00 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  9 10:08:00 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 10:08:00 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 10:08:00 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  9 10:08:00 compute-1 ceph-mon[9795]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct  9 10:08:00 compute-1 ceph-mon[9795]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1270285005' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  9 10:08:00 compute-1 nova_compute[162974]: 2025-10-09 10:08:00.072 2 DEBUG oslo_concurrency.processutils [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.334s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  9 10:08:00 compute-1 nova_compute[162974]: 2025-10-09 10:08:00.075 2 DEBUG nova.compute.provider_tree [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Inventory has not changed in ProviderTree for provider: 79aa81b0-5a5d-4643-a355-ec5461cb321a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  9 10:08:00 compute-1 nova_compute[162974]: 2025-10-09 10:08:00.088 2 DEBUG nova.scheduler.client.report [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Inventory has not changed for provider 79aa81b0-5a5d-4643-a355-ec5461cb321a based on inventory data: {'VCPU': {'total': 4, 'reserved': 0, 'min_unit': 1, 'max_unit': 4, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  9 10:08:00 compute-1 nova_compute[162974]: 2025-10-09 10:08:00.089 2 DEBUG nova.compute.resource_tracker [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  9 10:08:00 compute-1 nova_compute[162974]: 2025-10-09 10:08:00.089 2 DEBUG oslo_concurrency.lockutils [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.408s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  9 10:08:00 compute-1 nova_compute[162974]: 2025-10-09 10:08:00.604 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:08:00 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:08:00 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:08:00 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:08:00.735 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:08:00 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 10:08:01 compute-1 nova_compute[162974]: 2025-10-09 10:08:01.088 2 DEBUG oslo_service.periodic_task [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  9 10:08:01 compute-1 nova_compute[162974]: 2025-10-09 10:08:01.089 2 DEBUG oslo_service.periodic_task [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  9 10:08:01 compute-1 nova_compute[162974]: 2025-10-09 10:08:01.089 2 DEBUG oslo_service.periodic_task [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  9 10:08:01 compute-1 nova_compute[162974]: 2025-10-09 10:08:01.113 2 DEBUG oslo_service.periodic_task [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  9 10:08:01 compute-1 nova_compute[162974]: 2025-10-09 10:08:01.113 2 DEBUG nova.compute.manager [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  9 10:08:01 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:08:01 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 10:08:01 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:08:01.495 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 10:08:02 compute-1 nova_compute[162974]: 2025-10-09 10:08:02.113 2 DEBUG oslo_service.periodic_task [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  9 10:08:02 compute-1 nova_compute[162974]: 2025-10-09 10:08:02.114 2 DEBUG nova.compute.manager [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  9 10:08:02 compute-1 nova_compute[162974]: 2025-10-09 10:08:02.114 2 DEBUG nova.compute.manager [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct  9 10:08:02 compute-1 nova_compute[162974]: 2025-10-09 10:08:02.125 2 DEBUG nova.compute.manager [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Oct  9 10:08:02 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:08:02 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:08:02 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:08:02.738 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:08:02 compute-1 nova_compute[162974]: 2025-10-09 10:08:02.912 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:08:03 compute-1 nova_compute[162974]: 2025-10-09 10:08:03.122 2 DEBUG oslo_service.periodic_task [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  9 10:08:03 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:08:03 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:08:03 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:08:03.498 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:08:04 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 10:08:04 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 10:08:04 compute-1 podman[183186]: 2025-10-09 10:08:04.537202913 +0000 UTC m=+0.044867863 container health_status dd7a5da700b8ba72ceba21d69aecf00384a339e317089fce3f8212a1183599aa (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=iscsid, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']})
Oct  9 10:08:04 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:08:04 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:08:04 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:08:04.741 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:08:05 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:08:05 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:08:05 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:08:05.499 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:08:05 compute-1 nova_compute[162974]: 2025-10-09 10:08:05.607 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:08:05 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 10:08:06 compute-1 nova_compute[162974]: 2025-10-09 10:08:06.114 2 DEBUG oslo_service.periodic_task [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  9 10:08:06 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:08:06 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:08:06 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:08:06.744 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:08:07 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:08:07 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:08:07 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:08:07.501 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:08:07 compute-1 nova_compute[162974]: 2025-10-09 10:08:07.914 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:08:08 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:08:08 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 10:08:08 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:08:08.746 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 10:08:09 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:08:09 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:08:09 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:08:09.503 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:08:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 10:08:10.045 71059 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  9 10:08:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 10:08:10.046 71059 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  9 10:08:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 10:08:10.046 71059 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  9 10:08:10 compute-1 nova_compute[162974]: 2025-10-09 10:08:10.609 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:08:10 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:08:10 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 10:08:10 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:08:10.749 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 10:08:10 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 10:08:11 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:08:11 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:08:11 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:08:11.505 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:08:12 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:08:12 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:08:12 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:08:12.753 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:08:12 compute-1 nova_compute[162974]: 2025-10-09 10:08:12.915 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:08:13 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:08:13 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:08:13 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:08:13.508 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:08:14 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:08:14 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:08:14 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:08:14.755 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:08:15 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:08:15 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:08:15 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:08:15.509 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:08:15 compute-1 podman[183239]: 2025-10-09 10:08:15.527365412 +0000 UTC m=+0.037722221 container health_status a9976f6a2312783f72012e1ec9b07f14297aa62297711f47c0d257996be3427a (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd)
Oct  9 10:08:15 compute-1 podman[183238]: 2025-10-09 10:08:15.556192321 +0000 UTC m=+0.068003162 container health_status 5e1150c93314d5b38815fc8130e03b03cc91771cd442ce724d63cd33a7373f75 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS)
Oct  9 10:08:15 compute-1 nova_compute[162974]: 2025-10-09 10:08:15.613 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:08:15 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 10:08:16 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:08:16 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 10:08:16 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:08:16.758 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 10:08:17 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:08:17 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:08:17 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:08:17.511 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:08:17 compute-1 nova_compute[162974]: 2025-10-09 10:08:17.917 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:08:18 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:08:18 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:08:18 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:08:18.761 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:08:19 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:08:19 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:08:19 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:08:19.513 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:08:20 compute-1 nova_compute[162974]: 2025-10-09 10:08:20.615 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:08:20 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:08:20 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:08:20 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:08:20.764 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:08:20 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 10:08:20 compute-1 nova_compute[162974]: 2025-10-09 10:08:20.972 2 DEBUG oslo_concurrency.processutils [None req-06752881-e4c7-4336-b1c1-bcd187f39813 3a4ac457589b496085910d92d06034e7 a53d5690b6a54109990182326650a2b8 - - default default] Running cmd (subprocess): env LANG=C uptime execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  9 10:08:20 compute-1 nova_compute[162974]: 2025-10-09 10:08:20.986 2 DEBUG oslo_concurrency.processutils [None req-06752881-e4c7-4336-b1c1-bcd187f39813 3a4ac457589b496085910d92d06034e7 a53d5690b6a54109990182326650a2b8 - - default default] CMD "env LANG=C uptime" returned: 0 in 0.014s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  9 10:08:21 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:08:21 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:08:21 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:08:21.515 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:08:22 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:08:22 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:08:22 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:08:22.767 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:08:22 compute-1 nova_compute[162974]: 2025-10-09 10:08:22.918 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:08:23 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:08:23 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:08:23 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:08:23.517 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:08:24 compute-1 podman[183276]: 2025-10-09 10:08:24.546162912 +0000 UTC m=+0.058751728 container health_status 36bf385830b85e085dc3ff9729118ef141f0f1a0fdc2513f7947ab6ab795421e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_controller, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct  9 10:08:24 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:08:24 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:08:24 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:08:24.770 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:08:25 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:08:25 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:08:25 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:08:25.519 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:08:25 compute-1 nova_compute[162974]: 2025-10-09 10:08:25.617 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:08:25 compute-1 nova_compute[162974]: 2025-10-09 10:08:25.714 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:08:25 compute-1 ovn_metadata_agent[71054]: 2025-10-09 10:08:25.715 71059 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=12, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '86:53:6e', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '26:2f:47:35:f4:09'}, ipsec=False) old=SB_Global(nb_cfg=11) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  9 10:08:25 compute-1 ovn_metadata_agent[71054]: 2025-10-09 10:08:25.715 71059 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 7 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct  9 10:08:25 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 10:08:26 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:08:26 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:08:26 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:08:26.773 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:08:27 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:08:27 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:08:27 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:08:27.521 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:08:27 compute-1 nova_compute[162974]: 2025-10-09 10:08:27.918 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:08:28 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:08:28 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:08:28 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:08:28.776 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:08:29 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:08:29 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:08:29 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:08:29.523 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:08:30 compute-1 nova_compute[162974]: 2025-10-09 10:08:30.621 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:08:30 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:08:30 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:08:30 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:08:30.779 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:08:30 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 10:08:31 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:08:31 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 10:08:31 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:08:31.525 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 10:08:32 compute-1 ovn_metadata_agent[71054]: 2025-10-09 10:08:32.718 71059 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=1479fb1d-afaa-427a-bdce-40294d3573d2, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '12'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  9 10:08:32 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:08:32 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 10:08:32 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:08:32.782 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 10:08:32 compute-1 nova_compute[162974]: 2025-10-09 10:08:32.921 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:08:33 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:08:33 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:08:33 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:08:33.528 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:08:34 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:08:34 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 10:08:34 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:08:34.785 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 10:08:35 compute-1 podman[183330]: 2025-10-09 10:08:35.528155361 +0000 UTC m=+0.037995266 container health_status dd7a5da700b8ba72ceba21d69aecf00384a339e317089fce3f8212a1183599aa (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, managed_by=edpm_ansible, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=iscsid, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct  9 10:08:35 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:08:35 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:08:35 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:08:35.530 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:08:35 compute-1 nova_compute[162974]: 2025-10-09 10:08:35.624 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:08:35 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 10:08:36 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:08:36 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:08:36 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:08:36.788 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:08:37 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:08:37 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:08:37 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:08:37.532 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:08:37 compute-1 nova_compute[162974]: 2025-10-09 10:08:37.921 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:08:38 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:08:38 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 10:08:38 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:08:38.791 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 10:08:39 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:08:39 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:08:39 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:08:39.534 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:08:40 compute-1 nova_compute[162974]: 2025-10-09 10:08:40.625 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:08:40 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:08:40 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:08:40 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:08:40.795 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:08:40 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 10:08:41 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:08:41 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:08:41 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:08:41.536 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:08:42 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:08:42 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:08:42 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:08:42.797 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:08:42 compute-1 nova_compute[162974]: 2025-10-09 10:08:42.924 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:08:43 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:08:43 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:08:43 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:08:43.539 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:08:44 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:08:44 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000011s ======
Oct  9 10:08:44 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:08:44.799 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct  9 10:08:45 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:08:45 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:08:45 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:08:45.540 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:08:45 compute-1 nova_compute[162974]: 2025-10-09 10:08:45.627 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:08:45 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 10:08:46 compute-1 podman[183352]: 2025-10-09 10:08:46.526260584 +0000 UTC m=+0.038154905 container health_status 5e1150c93314d5b38815fc8130e03b03cc91771cd442ce724d63cd33a7373f75 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3)
Oct  9 10:08:46 compute-1 podman[183353]: 2025-10-09 10:08:46.531093336 +0000 UTC m=+0.040312694 container health_status a9976f6a2312783f72012e1ec9b07f14297aa62297711f47c0d257996be3427a (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd)
Oct  9 10:08:46 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:08:46 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:08:46 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:08:46.803 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:08:47 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:08:47 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 10:08:47 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:08:47.542 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 10:08:47 compute-1 nova_compute[162974]: 2025-10-09 10:08:47.925 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:08:48 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:08:48 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000011s ======
Oct  9 10:08:48 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:08:48.805 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct  9 10:08:49 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:08:49 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:08:49 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:08:49.544 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:08:50 compute-1 nova_compute[162974]: 2025-10-09 10:08:50.631 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:08:50 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 10:08:50 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:08:50 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:08:50 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:08:50.809 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:08:51 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:08:51 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:08:51 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:08:51.547 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:08:52 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:08:52 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:08:52 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:08:52.812 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:08:52 compute-1 nova_compute[162974]: 2025-10-09 10:08:52.927 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:08:53 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:08:53 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:08:53 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:08:53.549 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:08:54 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:08:54 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:08:54 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:08:54.815 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:08:55 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:08:55 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:08:55 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:08:55.550 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:08:55 compute-1 podman[183416]: 2025-10-09 10:08:55.558448209 +0000 UTC m=+0.064277878 container health_status 36bf385830b85e085dc3ff9729118ef141f0f1a0fdc2513f7947ab6ab795421e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible)
Oct  9 10:08:55 compute-1 nova_compute[162974]: 2025-10-09 10:08:55.632 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:08:55 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 10:08:56 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:08:56 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 10:08:56 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:08:56.817 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 10:08:57 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:08:57 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:08:57 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:08:57.553 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:08:57 compute-1 nova_compute[162974]: 2025-10-09 10:08:57.929 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:08:58 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:08:58 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:08:58 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:08:58.821 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:08:59 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:08:59 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:08:59 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:08:59.555 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:09:00 compute-1 nova_compute[162974]: 2025-10-09 10:09:00.110 2 DEBUG oslo_service.periodic_task [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  9 10:09:00 compute-1 nova_compute[162974]: 2025-10-09 10:09:00.124 2 DEBUG oslo_service.periodic_task [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  9 10:09:00 compute-1 nova_compute[162974]: 2025-10-09 10:09:00.124 2 DEBUG oslo_service.periodic_task [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  9 10:09:00 compute-1 nova_compute[162974]: 2025-10-09 10:09:00.124 2 DEBUG oslo_service.periodic_task [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  9 10:09:00 compute-1 nova_compute[162974]: 2025-10-09 10:09:00.124 2 DEBUG oslo_service.periodic_task [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  9 10:09:00 compute-1 nova_compute[162974]: 2025-10-09 10:09:00.137 2 DEBUG oslo_concurrency.lockutils [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  9 10:09:00 compute-1 nova_compute[162974]: 2025-10-09 10:09:00.138 2 DEBUG oslo_concurrency.lockutils [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  9 10:09:00 compute-1 nova_compute[162974]: 2025-10-09 10:09:00.138 2 DEBUG oslo_concurrency.lockutils [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  9 10:09:00 compute-1 nova_compute[162974]: 2025-10-09 10:09:00.138 2 DEBUG nova.compute.resource_tracker [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  9 10:09:00 compute-1 nova_compute[162974]: 2025-10-09 10:09:00.138 2 DEBUG oslo_concurrency.processutils [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  9 10:09:00 compute-1 ceph-mon[9795]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct  9 10:09:00 compute-1 ceph-mon[9795]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2013677285' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  9 10:09:00 compute-1 nova_compute[162974]: 2025-10-09 10:09:00.482 2 DEBUG oslo_concurrency.processutils [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.344s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  9 10:09:00 compute-1 nova_compute[162974]: 2025-10-09 10:09:00.636 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:09:00 compute-1 nova_compute[162974]: 2025-10-09 10:09:00.686 2 WARNING nova.virt.libvirt.driver [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  9 10:09:00 compute-1 nova_compute[162974]: 2025-10-09 10:09:00.687 2 DEBUG nova.compute.resource_tracker [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4985MB free_disk=59.988277435302734GB free_vcpus=4 pci_devices=[{"dev_id": "pci_0000_00_03_3", "address": "0000:00:03.3", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_0", "address": "0000:00:1f.0", "product_id": "2918", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2918", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_4", "address": "0000:00:03.4", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_7", "address": "0000:00:03.7", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_3", "address": "0000:00:02.3", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_7", "address": "0000:00:02.7", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_05_00_0", "address": "0000:05:00.0", "product_id": "1045", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1045", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_2", "address": "0000:00:03.2", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_6", "address": "0000:00:02.6", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": 
"label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_3", "address": "0000:00:1f.3", "product_id": "2930", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2930", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_2", "address": "0000:00:1f.2", "product_id": "2922", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2922", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_03_00_0", "address": "0000:03:00.0", "product_id": "1041", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1041", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_2", "address": "0000:00:02.2", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_5", "address": "0000:00:02.5", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_04_00_0", "address": "0000:04:00.0", "product_id": "1042", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1042", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_5", "address": "0000:00:03.5", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_07_00_0", "address": "0000:07:00.0", "product_id": "1041", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1041", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_1", "address": "0000:00:03.1", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": 
"0000:00:00.0", "product_id": "29c0", "vendor_id": "8086", "numa_node": null, "label": "label_8086_29c0", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_06_00_0", "address": "0000:06:00.0", "product_id": "1044", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1044", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_02_01_0", "address": "0000:02:01.0", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_6", "address": "0000:00:03.6", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_1", "address": "0000:00:02.1", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_4", "address": "0000:00:02.4", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_01_00_0", "address": "0000:01:00.0", "product_id": "000e", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000e", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  9 10:09:00 compute-1 nova_compute[162974]: 2025-10-09 10:09:00.687 2 DEBUG oslo_concurrency.lockutils [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  9 10:09:00 compute-1 nova_compute[162974]: 2025-10-09 10:09:00.688 2 DEBUG oslo_concurrency.lockutils [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  9 10:09:00 compute-1 nova_compute[162974]: 2025-10-09 10:09:00.728 2 DEBUG nova.compute.resource_tracker [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Total usable vcpus: 4, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  9 10:09:00 compute-1 nova_compute[162974]: 2025-10-09 10:09:00.728 2 DEBUG nova.compute.resource_tracker [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=4 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  9 10:09:00 compute-1 nova_compute[162974]: 2025-10-09 10:09:00.746 2 DEBUG oslo_concurrency.processutils [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  9 10:09:00 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 10:09:00 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:09:00 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:09:00 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:09:00.823 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:09:01 compute-1 ceph-mon[9795]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct  9 10:09:01 compute-1 ceph-mon[9795]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3812894481' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  9 10:09:01 compute-1 nova_compute[162974]: 2025-10-09 10:09:01.084 2 DEBUG oslo_concurrency.processutils [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.338s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  9 10:09:01 compute-1 nova_compute[162974]: 2025-10-09 10:09:01.088 2 DEBUG nova.compute.provider_tree [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Inventory has not changed in ProviderTree for provider: 79aa81b0-5a5d-4643-a355-ec5461cb321a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  9 10:09:01 compute-1 nova_compute[162974]: 2025-10-09 10:09:01.099 2 DEBUG nova.scheduler.client.report [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Inventory has not changed for provider 79aa81b0-5a5d-4643-a355-ec5461cb321a based on inventory data: {'VCPU': {'total': 4, 'reserved': 0, 'min_unit': 1, 'max_unit': 4, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  9 10:09:01 compute-1 nova_compute[162974]: 2025-10-09 10:09:01.101 2 DEBUG nova.compute.resource_tracker [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  9 10:09:01 compute-1 nova_compute[162974]: 2025-10-09 10:09:01.101 2 DEBUG oslo_concurrency.lockutils [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.413s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  9 10:09:01 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:09:01 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 10:09:01 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:09:01.556 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 10:09:02 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:09:02 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:09:02 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:09:02.826 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:09:02 compute-1 nova_compute[162974]: 2025-10-09 10:09:02.931 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:09:03 compute-1 nova_compute[162974]: 2025-10-09 10:09:03.091 2 DEBUG oslo_service.periodic_task [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  9 10:09:03 compute-1 nova_compute[162974]: 2025-10-09 10:09:03.091 2 DEBUG nova.compute.manager [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  9 10:09:03 compute-1 nova_compute[162974]: 2025-10-09 10:09:03.091 2 DEBUG nova.compute.manager [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct  9 10:09:03 compute-1 nova_compute[162974]: 2025-10-09 10:09:03.102 2 DEBUG nova.compute.manager [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Oct  9 10:09:03 compute-1 nova_compute[162974]: 2025-10-09 10:09:03.102 2 DEBUG oslo_service.periodic_task [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  9 10:09:03 compute-1 nova_compute[162974]: 2025-10-09 10:09:03.102 2 DEBUG oslo_service.periodic_task [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  9 10:09:03 compute-1 nova_compute[162974]: 2025-10-09 10:09:03.103 2 DEBUG nova.compute.manager [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  9 10:09:03 compute-1 nova_compute[162974]: 2025-10-09 10:09:03.121 2 DEBUG oslo_service.periodic_task [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  9 10:09:03 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:09:03 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:09:03 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:09:03.559 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:09:04 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:09:04 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:09:04 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:09:04.829 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:09:05 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Oct  9 10:09:05 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Oct  9 10:09:05 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Oct  9 10:09:05 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  9 10:09:05 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 10:09:05 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 10:09:05 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  9 10:09:05 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:09:05 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:09:05 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:09:05.561 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:09:05 compute-1 nova_compute[162974]: 2025-10-09 10:09:05.638 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:09:05 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 10:09:06 compute-1 nova_compute[162974]: 2025-10-09 10:09:06.114 2 DEBUG oslo_service.periodic_task [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  9 10:09:06 compute-1 podman[183570]: 2025-10-09 10:09:06.529101678 +0000 UTC m=+0.037170290 container health_status dd7a5da700b8ba72ceba21d69aecf00384a339e317089fce3f8212a1183599aa (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_id=iscsid, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid)
Oct  9 10:09:06 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:09:06 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:09:06 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:09:06.832 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:09:07 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:09:07 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:09:07 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:09:07.563 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:09:07 compute-1 nova_compute[162974]: 2025-10-09 10:09:07.933 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:09:08 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:09:08 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:09:08 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:09:08.835 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:09:09 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 10:09:09 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 10:09:09 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:09:09 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:09:09 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:09:09.565 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:09:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 10:09:10.046 71059 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  9 10:09:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 10:09:10.047 71059 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  9 10:09:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 10:09:10.047 71059 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  9 10:09:10 compute-1 nova_compute[162974]: 2025-10-09 10:09:10.640 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:09:10 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 10:09:10 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:09:10 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 10:09:10 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:09:10.837 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 10:09:11 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:09:11 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:09:11 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:09:11.566 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:09:12 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:09:12 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:09:12 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:09:12.841 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:09:12 compute-1 nova_compute[162974]: 2025-10-09 10:09:12.934 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:09:13 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:09:13 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:09:13 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:09:13.569 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:09:14 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:09:14 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:09:14 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:09:14.843 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:09:15 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:09:15 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 10:09:15 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:09:15.571 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 10:09:15 compute-1 nova_compute[162974]: 2025-10-09 10:09:15.644 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:09:15 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 10:09:16 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:09:16 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:09:16 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:09:16.846 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:09:17 compute-1 podman[183643]: 2025-10-09 10:09:17.527703528 +0000 UTC m=+0.039389855 container health_status 5e1150c93314d5b38815fc8130e03b03cc91771cd442ce724d63cd33a7373f75 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=ovn_metadata_agent, managed_by=edpm_ansible, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Oct  9 10:09:17 compute-1 podman[183644]: 2025-10-09 10:09:17.528230692 +0000 UTC m=+0.038517610 container health_status a9976f6a2312783f72012e1ec9b07f14297aa62297711f47c0d257996be3427a (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, org.label-schema.vendor=CentOS)
Oct  9 10:09:17 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:09:17 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:09:17 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:09:17.573 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:09:17 compute-1 nova_compute[162974]: 2025-10-09 10:09:17.936 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  9 10:09:18 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:09:18 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 10:09:18 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:09:18.848 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 10:09:19 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:09:19 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:09:19 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:09:19.576 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:09:20 compute-1 nova_compute[162974]: 2025-10-09 10:09:20.647 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  9 10:09:20 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 10:09:20 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:09:20 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:09:20 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:09:20.851 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:09:21 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:09:21 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:09:21 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:09:21.578 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:09:22 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:09:22 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:09:22 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:09:22.854 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:09:22 compute-1 nova_compute[162974]: 2025-10-09 10:09:22.937 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  9 10:09:23 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:09:23 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:09:23 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:09:23.581 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:09:24 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:09:24 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:09:24 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:09:24.857 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:09:25 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:09:25 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:09:25 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:09:25.584 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:09:25 compute-1 nova_compute[162974]: 2025-10-09 10:09:25.650 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  9 10:09:25 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 10:09:26 compute-1 podman[183682]: 2025-10-09 10:09:26.547710197 +0000 UTC m=+0.058386480 container health_status 36bf385830b85e085dc3ff9729118ef141f0f1a0fdc2513f7947ab6ab795421e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible)
Oct  9 10:09:26 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:09:26 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:09:26 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:09:26.859 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:09:27 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:09:27 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 10:09:27 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:09:27.585 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 10:09:27 compute-1 nova_compute[162974]: 2025-10-09 10:09:27.939 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  9 10:09:28 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:09:28 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:09:28 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:09:28.862 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:09:29 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:09:29 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:09:29 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:09:29.588 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:09:30 compute-1 nova_compute[162974]: 2025-10-09 10:09:30.652 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  9 10:09:30 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 10:09:30 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:09:30 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:09:30 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:09:30.864 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:09:31 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:09:31 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:09:31 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:09:31.589 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:09:32 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:09:32 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:09:32 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:09:32.868 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:09:32 compute-1 nova_compute[162974]: 2025-10-09 10:09:32.940 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  9 10:09:33 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:09:33 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:09:33 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:09:33.592 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:09:34 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:09:34 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:09:34 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:09:34.870 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:09:35 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:09:35 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:09:35 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:09:35.594 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:09:35 compute-1 nova_compute[162974]: 2025-10-09 10:09:35.656 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  9 10:09:35 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 10:09:36 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:09:36 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:09:36 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:09:36.874 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:09:37 compute-1 podman[183736]: 2025-10-09 10:09:37.524172797 +0000 UTC m=+0.036116882 container health_status dd7a5da700b8ba72ceba21d69aecf00384a339e317089fce3f8212a1183599aa (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, container_name=iscsid, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']})
Oct  9 10:09:37 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:09:37 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:09:37 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:09:37.596 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:09:37 compute-1 nova_compute[162974]: 2025-10-09 10:09:37.941 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  9 10:09:38 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:09:38 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 10:09:38 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:09:38.876 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 10:09:39 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:09:39 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:09:39 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:09:39.597 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:09:40 compute-1 nova_compute[162974]: 2025-10-09 10:09:40.659 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  9 10:09:40 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 10:09:40 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:09:40 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:09:40 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:09:40.879 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:09:41 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:09:41 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:09:41 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:09:41.599 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:09:42 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:09:42 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:09:42 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:09:42.882 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:09:42 compute-1 nova_compute[162974]: 2025-10-09 10:09:42.943 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  9 10:09:43 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:09:43 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:09:43 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:09:43.601 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:09:44 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:09:44 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:09:44 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:09:44.885 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:09:45 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:09:45 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:09:45 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:09:45.603 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:09:45 compute-1 nova_compute[162974]: 2025-10-09 10:09:45.664 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  9 10:09:45 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 10:09:46 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:09:46 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:09:46 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:09:46.889 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:09:47 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:09:47 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:09:47 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:09:47.606 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:09:47 compute-1 nova_compute[162974]: 2025-10-09 10:09:47.944 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  9 10:09:48 compute-1 podman[183759]: 2025-10-09 10:09:48.525371366 +0000 UTC m=+0.033113341 container health_status 5e1150c93314d5b38815fc8130e03b03cc91771cd442ce724d63cd33a7373f75 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, 
org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251001)
Oct  9 10:09:48 compute-1 podman[183760]: 2025-10-09 10:09:48.537422868 +0000 UTC m=+0.042583457 container health_status a9976f6a2312783f72012e1ec9b07f14297aa62297711f47c0d257996be3427a (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct  9 10:09:48 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:09:48 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 10:09:48 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:09:48.891 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 10:09:49 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:09:49 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:09:49 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:09:49.608 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:09:50 compute-1 nova_compute[162974]: 2025-10-09 10:09:50.668 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  9 10:09:50 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 10:09:50 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:09:50 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:09:50 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:09:50.894 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:09:51 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:09:51 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 10:09:51 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:09:51.610 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 10:09:52 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:09:52 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:09:52 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:09:52.897 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:09:52 compute-1 nova_compute[162974]: 2025-10-09 10:09:52.945 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  9 10:09:53 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:09:53 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:09:53 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:09:53.613 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:09:54 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:09:54 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:09:54 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:09:54.901 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:09:55 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:09:55 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:09:55 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:09:55.615 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:09:55 compute-1 nova_compute[162974]: 2025-10-09 10:09:55.671 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:09:55 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 10:09:56 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:09:56 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:09:56 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:09:56.904 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:09:57 compute-1 podman[183823]: 2025-10-09 10:09:57.540451708 +0000 UTC m=+0.053526277 container health_status 36bf385830b85e085dc3ff9729118ef141f0f1a0fdc2513f7947ab6ab795421e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, io.buildah.version=1.41.3, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct  9 10:09:57 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:09:57 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:09:57 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:09:57.618 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:09:57 compute-1 nova_compute[162974]: 2025-10-09 10:09:57.945 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:09:58 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:09:58 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:09:58 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:09:58.907 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:09:59 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:09:59 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000011s ======
Oct  9 10:09:59 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:09:59.619 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct  9 10:10:00 compute-1 nova_compute[162974]: 2025-10-09 10:10:00.113 2 DEBUG oslo_service.periodic_task [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  9 10:10:00 compute-1 nova_compute[162974]: 2025-10-09 10:10:00.113 2 DEBUG oslo_service.periodic_task [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  9 10:10:00 compute-1 nova_compute[162974]: 2025-10-09 10:10:00.114 2 DEBUG oslo_service.periodic_task [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  9 10:10:00 compute-1 nova_compute[162974]: 2025-10-09 10:10:00.128 2 DEBUG oslo_concurrency.lockutils [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  9 10:10:00 compute-1 nova_compute[162974]: 2025-10-09 10:10:00.128 2 DEBUG oslo_concurrency.lockutils [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  9 10:10:00 compute-1 nova_compute[162974]: 2025-10-09 10:10:00.128 2 DEBUG oslo_concurrency.lockutils [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  9 10:10:00 compute-1 nova_compute[162974]: 2025-10-09 10:10:00.128 2 DEBUG nova.compute.resource_tracker [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  9 10:10:00 compute-1 nova_compute[162974]: 2025-10-09 10:10:00.128 2 DEBUG oslo_concurrency.processutils [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  9 10:10:00 compute-1 ceph-mon[9795]: overall HEALTH_WARN 1 failed cephadm daemon(s)
Oct  9 10:10:00 compute-1 ceph-mon[9795]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct  9 10:10:00 compute-1 ceph-mon[9795]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2459378375' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  9 10:10:00 compute-1 nova_compute[162974]: 2025-10-09 10:10:00.460 2 DEBUG oslo_concurrency.processutils [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.332s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  9 10:10:00 compute-1 nova_compute[162974]: 2025-10-09 10:10:00.656 2 WARNING nova.virt.libvirt.driver [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  9 10:10:00 compute-1 nova_compute[162974]: 2025-10-09 10:10:00.657 2 DEBUG nova.compute.resource_tracker [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4993MB free_disk=59.988277435302734GB free_vcpus=4 pci_devices=[{"dev_id": "pci_0000_00_03_3", "address": "0000:00:03.3", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_0", "address": "0000:00:1f.0", "product_id": "2918", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2918", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_4", "address": "0000:00:03.4", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_7", "address": "0000:00:03.7", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_3", "address": "0000:00:02.3", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_7", "address": "0000:00:02.7", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_05_00_0", "address": "0000:05:00.0", "product_id": "1045", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1045", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_2", "address": "0000:00:03.2", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_6", "address": "0000:00:02.6", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": 
"label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_3", "address": "0000:00:1f.3", "product_id": "2930", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2930", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_2", "address": "0000:00:1f.2", "product_id": "2922", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2922", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_03_00_0", "address": "0000:03:00.0", "product_id": "1041", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1041", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_2", "address": "0000:00:02.2", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_5", "address": "0000:00:02.5", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_04_00_0", "address": "0000:04:00.0", "product_id": "1042", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1042", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_5", "address": "0000:00:03.5", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_07_00_0", "address": "0000:07:00.0", "product_id": "1041", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1041", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_1", "address": "0000:00:03.1", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": 
"0000:00:00.0", "product_id": "29c0", "vendor_id": "8086", "numa_node": null, "label": "label_8086_29c0", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_06_00_0", "address": "0000:06:00.0", "product_id": "1044", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1044", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_02_01_0", "address": "0000:02:01.0", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_6", "address": "0000:00:03.6", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_1", "address": "0000:00:02.1", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_4", "address": "0000:00:02.4", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_01_00_0", "address": "0000:01:00.0", "product_id": "000e", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000e", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  9 10:10:00 compute-1 nova_compute[162974]: 2025-10-09 10:10:00.658 2 DEBUG oslo_concurrency.lockutils [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  9 10:10:00 compute-1 nova_compute[162974]: 2025-10-09 10:10:00.658 2 DEBUG oslo_concurrency.lockutils [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  9 10:10:00 compute-1 nova_compute[162974]: 2025-10-09 10:10:00.674 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:10:00 compute-1 nova_compute[162974]: 2025-10-09 10:10:00.713 2 DEBUG nova.compute.resource_tracker [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Total usable vcpus: 4, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  9 10:10:00 compute-1 nova_compute[162974]: 2025-10-09 10:10:00.713 2 DEBUG nova.compute.resource_tracker [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=4 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  9 10:10:00 compute-1 nova_compute[162974]: 2025-10-09 10:10:00.724 2 DEBUG oslo_concurrency.processutils [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  9 10:10:00 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 10:10:00 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:10:00 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:10:00 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:10:00.910 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:10:01 compute-1 ceph-mon[9795]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct  9 10:10:01 compute-1 ceph-mon[9795]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2459720255' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  9 10:10:01 compute-1 nova_compute[162974]: 2025-10-09 10:10:01.062 2 DEBUG oslo_concurrency.processutils [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.338s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  9 10:10:01 compute-1 nova_compute[162974]: 2025-10-09 10:10:01.066 2 DEBUG nova.compute.provider_tree [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Inventory has not changed in ProviderTree for provider: 79aa81b0-5a5d-4643-a355-ec5461cb321a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  9 10:10:01 compute-1 nova_compute[162974]: 2025-10-09 10:10:01.076 2 DEBUG nova.scheduler.client.report [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Inventory has not changed for provider 79aa81b0-5a5d-4643-a355-ec5461cb321a based on inventory data: {'VCPU': {'total': 4, 'reserved': 0, 'min_unit': 1, 'max_unit': 4, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  9 10:10:01 compute-1 nova_compute[162974]: 2025-10-09 10:10:01.077 2 DEBUG nova.compute.resource_tracker [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  9 10:10:01 compute-1 nova_compute[162974]: 2025-10-09 10:10:01.078 2 DEBUG oslo_concurrency.lockutils [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.420s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  9 10:10:01 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:10:01 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:10:01 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:10:01.622 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:10:02 compute-1 nova_compute[162974]: 2025-10-09 10:10:02.079 2 DEBUG oslo_service.periodic_task [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  9 10:10:02 compute-1 nova_compute[162974]: 2025-10-09 10:10:02.114 2 DEBUG oslo_service.periodic_task [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  9 10:10:02 compute-1 nova_compute[162974]: 2025-10-09 10:10:02.114 2 DEBUG nova.compute.manager [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  9 10:10:02 compute-1 nova_compute[162974]: 2025-10-09 10:10:02.114 2 DEBUG nova.compute.manager [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct  9 10:10:02 compute-1 nova_compute[162974]: 2025-10-09 10:10:02.124 2 DEBUG nova.compute.manager [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Oct  9 10:10:02 compute-1 nova_compute[162974]: 2025-10-09 10:10:02.125 2 DEBUG oslo_service.periodic_task [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  9 10:10:02 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:10:02 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:10:02 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:10:02.913 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:10:02 compute-1 nova_compute[162974]: 2025-10-09 10:10:02.948 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:10:03 compute-1 nova_compute[162974]: 2025-10-09 10:10:03.114 2 DEBUG oslo_service.periodic_task [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  9 10:10:03 compute-1 nova_compute[162974]: 2025-10-09 10:10:03.114 2 DEBUG nova.compute.manager [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  9 10:10:03 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:10:03 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:10:03 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:10:03.624 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:10:04 compute-1 systemd[1]: Starting system activity accounting tool...
Oct  9 10:10:04 compute-1 systemd[1]: sysstat-collect.service: Deactivated successfully.
Oct  9 10:10:04 compute-1 systemd[1]: Finished system activity accounting tool.
Oct  9 10:10:04 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:10:04 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:10:04 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:10:04.915 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:10:05 compute-1 nova_compute[162974]: 2025-10-09 10:10:05.109 2 DEBUG oslo_service.periodic_task [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  9 10:10:05 compute-1 ceph-mon[9795]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct  9 10:10:05 compute-1 ceph-mon[9795]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4139326846' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  9 10:10:05 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:10:05 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000011s ======
Oct  9 10:10:05 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:10:05.625 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct  9 10:10:05 compute-1 nova_compute[162974]: 2025-10-09 10:10:05.677 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:10:05 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 10:10:06 compute-1 nova_compute[162974]: 2025-10-09 10:10:06.114 2 DEBUG oslo_service.periodic_task [None req-bf06fa2c-ed9b-4398-9b07-308496df0785 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  9 10:10:06 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:10:06 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:10:06 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:10:06.918 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:10:07 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:10:07 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:10:07 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:10:07.629 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:10:07 compute-1 nova_compute[162974]: 2025-10-09 10:10:07.951 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:10:08 compute-1 podman[183896]: 2025-10-09 10:10:08.530622736 +0000 UTC m=+0.041730468 container health_status dd7a5da700b8ba72ceba21d69aecf00384a339e317089fce3f8212a1183599aa (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_id=iscsid)
Oct  9 10:10:08 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:10:08 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:10:08 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:10:08.922 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:10:09 compute-1 podman[184019]: 2025-10-09 10:10:09.066088268 +0000 UTC m=+0.036887256 container exec cafaadfcff4fbda41fcfa94e93876b61b17048007232ab1b066948d6a6dac74a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-crash-compute-1, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct  9 10:10:09 compute-1 podman[184019]: 2025-10-09 10:10:09.150975521 +0000 UTC m=+0.121774529 container exec_died cafaadfcff4fbda41fcfa94e93876b61b17048007232ab1b066948d6a6dac74a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-crash-compute-1, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0)
Oct  9 10:10:09 compute-1 podman[184114]: 2025-10-09 10:10:09.447673613 +0000 UTC m=+0.038571533 container exec 3deba343924d32000597864cdf8400e1a37662cac2bb278add92b15d21a42d33 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-node-exporter-compute-1, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct  9 10:10:09 compute-1 podman[184114]: 2025-10-09 10:10:09.455870506 +0000 UTC m=+0.046768427 container exec_died 3deba343924d32000597864cdf8400e1a37662cac2bb278add92b15d21a42d33 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-node-exporter-compute-1, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct  9 10:10:09 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:10:09 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 10:10:09 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:10:09.632 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 10:10:09 compute-1 podman[184225]: 2025-10-09 10:10:09.779228971 +0000 UTC m=+0.031428255 container exec 67720561c1a99e6e32e330a552a62a5ac4d38a51a1e32f841b990e11458d61b3 (image=quay.io/ceph/haproxy:2.3, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-haproxy-nfs-cephfs-compute-1-oqhtjo)
Oct  9 10:10:09 compute-1 podman[184225]: 2025-10-09 10:10:09.788824352 +0000 UTC m=+0.041023636 container exec_died 67720561c1a99e6e32e330a552a62a5ac4d38a51a1e32f841b990e11458d61b3 (image=quay.io/ceph/haproxy:2.3, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-haproxy-nfs-cephfs-compute-1-oqhtjo)
Oct  9 10:10:09 compute-1 podman[184277]: 2025-10-09 10:10:09.922728959 +0000 UTC m=+0.034901943 container exec 85d12475a64c9e2afed72ac4ba3c0ca1148840282b925163459f0ea258aea5a3 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-1-zabdum, summary=Provides keepalived on RHEL 9 for Ceph., description=keepalived for Ceph, vendor=Red Hat, Inc., name=keepalived, vcs-type=git, architecture=x86_64, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, io.buildah.version=1.28.2, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.k8s.display-name=Keepalived on RHEL 9, build-date=2023-02-22T09:23:20, release=1793, version=2.2.4, com.redhat.component=keepalived-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=Ceph keepalived)
Oct  9 10:10:09 compute-1 podman[184277]: 2025-10-09 10:10:09.928040434 +0000 UTC m=+0.040213417 container exec_died 85d12475a64c9e2afed72ac4ba3c0ca1148840282b925163459f0ea258aea5a3 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-1-zabdum, com.redhat.component=keepalived-container, architecture=x86_64, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, build-date=2023-02-22T09:23:20, release=1793, summary=Provides keepalived on RHEL 9 for Ceph., description=keepalived for Ceph, distribution-scope=public, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, vcs-type=git, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., name=keepalived, io.openshift.tags=Ceph keepalived, io.buildah.version=1.28.2, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Keepalived on RHEL 9, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, version=2.2.4)
Oct  9 10:10:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 10:10:10.048 71059 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  9 10:10:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 10:10:10.049 71059 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  9 10:10:10 compute-1 ovn_metadata_agent[71054]: 2025-10-09 10:10:10.049 71059 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  9 10:10:10 compute-1 nova_compute[162974]: 2025-10-09 10:10:10.680 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:10:10 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 10:10:10 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:10:10 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:10:10 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:10:10.924 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:10:10 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 10:10:10 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 10:10:10 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 10:10:10 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 10:10:11 compute-1 ceph-mon[9795]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #61. Immutable memtables: 0.
Oct  9 10:10:11 compute-1 ceph-mon[9795]: rocksdb: (Original Log Time 2025/10/09-10:10:11.588737) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct  9 10:10:11 compute-1 ceph-mon[9795]: rocksdb: [db/flush_job.cc:856] [default] [JOB 35] Flushing memtable with next log file: 61
Oct  9 10:10:11 compute-1 ceph-mon[9795]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760004611589036, "job": 35, "event": "flush_started", "num_memtables": 1, "num_entries": 1946, "num_deletes": 504, "total_data_size": 4265619, "memory_usage": 4338224, "flush_reason": "Manual Compaction"}
Oct  9 10:10:11 compute-1 ceph-mon[9795]: rocksdb: [db/flush_job.cc:885] [default] [JOB 35] Level-0 flush table #62: started
Oct  9 10:10:11 compute-1 ceph-mon[9795]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760004611595310, "cf_name": "default", "job": 35, "event": "table_file_creation", "file_number": 62, "file_size": 2787071, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 31264, "largest_seqno": 33205, "table_properties": {"data_size": 2779283, "index_size": 4154, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2565, "raw_key_size": 18929, "raw_average_key_size": 18, "raw_value_size": 2761688, "raw_average_value_size": 2764, "num_data_blocks": 179, "num_entries": 999, "num_filter_entries": 999, "num_deletions": 504, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760004469, "oldest_key_time": 1760004469, "file_creation_time": 1760004611, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "94a5d839-0858-4e7b-94a4-0a54b15338db", "db_session_id": "M9CZJU0HKVV71NP1SGV8", "orig_file_number": 62, "seqno_to_time_mapping": "N/A"}}
Oct  9 10:10:11 compute-1 ceph-mon[9795]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 35] Flush lasted 6600 microseconds, and 4725 cpu microseconds.
Oct  9 10:10:11 compute-1 ceph-mon[9795]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  9 10:10:11 compute-1 ceph-mon[9795]: rocksdb: (Original Log Time 2025/10/09-10:10:11.595350) [db/flush_job.cc:967] [default] [JOB 35] Level-0 flush table #62: 2787071 bytes OK
Oct  9 10:10:11 compute-1 ceph-mon[9795]: rocksdb: (Original Log Time 2025/10/09-10:10:11.595366) [db/memtable_list.cc:519] [default] Level-0 commit table #62 started
Oct  9 10:10:11 compute-1 ceph-mon[9795]: rocksdb: (Original Log Time 2025/10/09-10:10:11.596337) [db/memtable_list.cc:722] [default] Level-0 commit table #62: memtable #1 done
Oct  9 10:10:11 compute-1 ceph-mon[9795]: rocksdb: (Original Log Time 2025/10/09-10:10:11.596346) EVENT_LOG_v1 {"time_micros": 1760004611596343, "job": 35, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct  9 10:10:11 compute-1 ceph-mon[9795]: rocksdb: (Original Log Time 2025/10/09-10:10:11.596358) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct  9 10:10:11 compute-1 ceph-mon[9795]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 35] Try to delete WAL files size 4255813, prev total WAL file size 4255813, number of live WAL files 2.
Oct  9 10:10:11 compute-1 ceph-mon[9795]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000058.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  9 10:10:11 compute-1 ceph-mon[9795]: rocksdb: (Original Log Time 2025/10/09-10:10:11.596977) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6B7600323533' seq:72057594037927935, type:22 .. '6B7600353038' seq:0, type:0; will stop at (end)
Oct  9 10:10:11 compute-1 ceph-mon[9795]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 36] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct  9 10:10:11 compute-1 ceph-mon[9795]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 35 Base level 0, inputs: [62(2721KB)], [60(15MB)]
Oct  9 10:10:11 compute-1 ceph-mon[9795]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760004611596999, "job": 36, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [62], "files_L6": [60], "score": -1, "input_data_size": 19142106, "oldest_snapshot_seqno": -1}
Oct  9 10:10:11 compute-1 ceph-mon[9795]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 36] Generated table #63: 6319 keys, 13636523 bytes, temperature: kUnknown
Oct  9 10:10:11 compute-1 ceph-mon[9795]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760004611625216, "cf_name": "default", "job": 36, "event": "table_file_creation", "file_number": 63, "file_size": 13636523, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 13595118, "index_size": 24527, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 15813, "raw_key_size": 164408, "raw_average_key_size": 26, "raw_value_size": 13481647, "raw_average_value_size": 2133, "num_data_blocks": 975, "num_entries": 6319, "num_filter_entries": 6319, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760002515, "oldest_key_time": 0, "file_creation_time": 1760004611, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "94a5d839-0858-4e7b-94a4-0a54b15338db", "db_session_id": "M9CZJU0HKVV71NP1SGV8", "orig_file_number": 63, "seqno_to_time_mapping": "N/A"}}
Oct  9 10:10:11 compute-1 ceph-mon[9795]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  9 10:10:11 compute-1 ceph-mon[9795]: rocksdb: (Original Log Time 2025/10/09-10:10:11.625368) [db/compaction/compaction_job.cc:1663] [default] [JOB 36] Compacted 1@0 + 1@6 files to L6 => 13636523 bytes
Oct  9 10:10:11 compute-1 ceph-mon[9795]: rocksdb: (Original Log Time 2025/10/09-10:10:11.625724) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 677.1 rd, 482.4 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.7, 15.6 +0.0 blob) out(13.0 +0.0 blob), read-write-amplify(11.8) write-amplify(4.9) OK, records in: 7346, records dropped: 1027 output_compression: NoCompression
Oct  9 10:10:11 compute-1 ceph-mon[9795]: rocksdb: (Original Log Time 2025/10/09-10:10:11.625737) EVENT_LOG_v1 {"time_micros": 1760004611625731, "job": 36, "event": "compaction_finished", "compaction_time_micros": 28271, "compaction_time_cpu_micros": 22016, "output_level": 6, "num_output_files": 1, "total_output_size": 13636523, "num_input_records": 7346, "num_output_records": 6319, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct  9 10:10:11 compute-1 ceph-mon[9795]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000062.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  9 10:10:11 compute-1 ceph-mon[9795]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760004611626248, "job": 36, "event": "table_file_deletion", "file_number": 62}
Oct  9 10:10:11 compute-1 ceph-mon[9795]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000060.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  9 10:10:11 compute-1 ceph-mon[9795]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760004611628432, "job": 36, "event": "table_file_deletion", "file_number": 60}
Oct  9 10:10:11 compute-1 ceph-mon[9795]: rocksdb: (Original Log Time 2025/10/09-10:10:11.596942) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  9 10:10:11 compute-1 ceph-mon[9795]: rocksdb: (Original Log Time 2025/10/09-10:10:11.628505) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  9 10:10:11 compute-1 ceph-mon[9795]: rocksdb: (Original Log Time 2025/10/09-10:10:11.628509) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  9 10:10:11 compute-1 ceph-mon[9795]: rocksdb: (Original Log Time 2025/10/09-10:10:11.628510) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  9 10:10:11 compute-1 ceph-mon[9795]: rocksdb: (Original Log Time 2025/10/09-10:10:11.628511) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  9 10:10:11 compute-1 ceph-mon[9795]: rocksdb: (Original Log Time 2025/10/09-10:10:11.628512) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  9 10:10:11 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:10:11 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:10:11 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:10:11.635 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:10:11 compute-1 ceph-mon[9795]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Oct  9 10:10:11 compute-1 ceph-mon[9795]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/298435994' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  9 10:10:11 compute-1 ceph-mon[9795]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Oct  9 10:10:11 compute-1 ceph-mon[9795]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/298435994' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  9 10:10:12 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  9 10:10:12 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 10:10:12 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 10:10:12 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  9 10:10:12 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:10:12 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000011s ======
Oct  9 10:10:12 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:10:12.926 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct  9 10:10:12 compute-1 nova_compute[162974]: 2025-10-09 10:10:12.952 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:10:13 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:10:13 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:10:13 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:10:13.637 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:10:14 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:10:14 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:10:14 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:10:14.930 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:10:15 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:10:15 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 10:10:15 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:10:15.638 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 10:10:15 compute-1 nova_compute[162974]: 2025-10-09 10:10:15.682 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:10:15 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 10:10:16 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 10:10:16 compute-1 ceph-mon[9795]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 10:10:16 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:10:16 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 10:10:16 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:10:16.932 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 10:10:17 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:10:17 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:10:17 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:10:17.642 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:10:17 compute-1 nova_compute[162974]: 2025-10-09 10:10:17.959 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:10:18 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:10:18 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:10:18 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:10:18.936 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:10:19 compute-1 podman[184439]: 2025-10-09 10:10:19.569268886 +0000 UTC m=+0.059144961 container health_status a9976f6a2312783f72012e1ec9b07f14297aa62297711f47c0d257996be3427a (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3)
Oct  9 10:10:19 compute-1 podman[184438]: 2025-10-09 10:10:19.579277586 +0000 UTC m=+0.075224235 container health_status 5e1150c93314d5b38815fc8130e03b03cc91771cd442ce724d63cd33a7373f75 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct  9 10:10:19 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:10:19 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:10:19 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:10:19.645 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:10:20 compute-1 nova_compute[162974]: 2025-10-09 10:10:20.683 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:10:20 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 10:10:20 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:10:20 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 10:10:20 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:10:20.939 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 10:10:21 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:10:21 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:10:21 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:10:21.648 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:10:22 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:10:22 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:10:22 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:10:22.943 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:10:22 compute-1 nova_compute[162974]: 2025-10-09 10:10:22.962 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:10:23 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:10:23 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:10:23 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:10:23.650 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:10:24 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:10:24 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:10:24 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:10:24.946 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:10:25 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:10:25 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 10:10:25 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:10:25.652 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 10:10:25 compute-1 nova_compute[162974]: 2025-10-09 10:10:25.685 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:10:25 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 10:10:26 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:10:26 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:10:26 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:10:26.949 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:10:27 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:10:27 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:10:27 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:10:27.656 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:10:27 compute-1 nova_compute[162974]: 2025-10-09 10:10:27.964 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:10:28 compute-1 podman[184478]: 2025-10-09 10:10:28.574656332 +0000 UTC m=+0.071693238 container health_status 36bf385830b85e085dc3ff9729118ef141f0f1a0fdc2513f7947ab6ab795421e (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true)
Oct  9 10:10:28 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:10:28 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 10:10:28 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:10:28.951 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 10:10:29 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:10:29 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:10:29 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:10:29.657 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:10:30 compute-1 nova_compute[162974]: 2025-10-09 10:10:30.687 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:10:30 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 10:10:30 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:10:30 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:10:30 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:10:30.955 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:10:31 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:10:31 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:10:31 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:10:31.660 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:10:32 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:10:32 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:10:32 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:10:32.958 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:10:32 compute-1 nova_compute[162974]: 2025-10-09 10:10:32.968 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:10:33 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:10:33 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 10:10:33 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:10:33.662 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 10:10:34 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:10:34 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:10:34 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:10:34.960 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:10:35 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:10:35 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:10:35 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:10:35.665 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:10:35 compute-1 nova_compute[162974]: 2025-10-09 10:10:35.690 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:10:35 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 10:10:36 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:10:36 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:10:36 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:10:36.963 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:10:37 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:10:37 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:10:37 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:10:37.667 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:10:37 compute-1 nova_compute[162974]: 2025-10-09 10:10:37.969 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:10:38 compute-1 systemd-logind[798]: New session 44 of user zuul.
Oct  9 10:10:38 compute-1 systemd[1]: Created slice User Slice of UID 1000.
Oct  9 10:10:38 compute-1 systemd[1]: Starting User Runtime Directory /run/user/1000...
Oct  9 10:10:38 compute-1 systemd[1]: Finished User Runtime Directory /run/user/1000.
Oct  9 10:10:38 compute-1 systemd[1]: Starting User Manager for UID 1000...
Oct  9 10:10:38 compute-1 systemd[184537]: Queued start job for default target Main User Target.
Oct  9 10:10:38 compute-1 systemd[184537]: Created slice User Application Slice.
Oct  9 10:10:38 compute-1 systemd[184537]: Started Mark boot as successful after the user session has run 2 minutes.
Oct  9 10:10:38 compute-1 systemd[184537]: Started Daily Cleanup of User's Temporary Directories.
Oct  9 10:10:38 compute-1 systemd[184537]: Reached target Paths.
Oct  9 10:10:38 compute-1 systemd[184537]: Reached target Timers.
Oct  9 10:10:38 compute-1 systemd[184537]: Starting D-Bus User Message Bus Socket...
Oct  9 10:10:38 compute-1 systemd[184537]: Starting Create User's Volatile Files and Directories...
Oct  9 10:10:38 compute-1 systemd[184537]: Finished Create User's Volatile Files and Directories.
Oct  9 10:10:38 compute-1 systemd[184537]: Listening on D-Bus User Message Bus Socket.
Oct  9 10:10:38 compute-1 systemd[184537]: Reached target Sockets.
Oct  9 10:10:38 compute-1 systemd[184537]: Reached target Basic System.
Oct  9 10:10:38 compute-1 systemd[184537]: Reached target Main User Target.
Oct  9 10:10:38 compute-1 systemd[184537]: Startup finished in 124ms.
Oct  9 10:10:38 compute-1 systemd[1]: Started User Manager for UID 1000.
Oct  9 10:10:38 compute-1 systemd[1]: Started Session 44 of User zuul.
Oct  9 10:10:38 compute-1 podman[184587]: 2025-10-09 10:10:38.714578758 +0000 UTC m=+0.056345472 container health_status dd7a5da700b8ba72ceba21d69aecf00384a339e317089fce3f8212a1183599aa (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, io.buildah.version=1.41.3, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_id=iscsid)
Oct  9 10:10:38 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:10:38 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 10:10:38 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:10:38.967 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 10:10:39 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:10:39 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 10:10:39 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:10:39.669 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 10:10:40 compute-1 nova_compute[162974]: 2025-10-09 10:10:40.694 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:10:40 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 10:10:40 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:10:40 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:10:40 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:10:40.971 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:10:41 compute-1 ceph-mon[9795]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "status"} v 0)
Oct  9 10:10:41 compute-1 ceph-mon[9795]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3536492109' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Oct  9 10:10:41 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:10:41 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:10:41 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:10:41.672 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:10:42 compute-1 nova_compute[162974]: 2025-10-09 10:10:42.973 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:10:42 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:10:42 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:10:42 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:10:42.976 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:10:43 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:10:43 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:10:43 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:10:43.673 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:10:43 compute-1 ovs-vsctl[184859]: ovs|00001|db_ctl_base|ERR|no key "dpdk-init" in Open_vSwitch record "." column other_config
Oct  9 10:10:44 compute-1 virtqemud[162526]: Failed to connect socket to '/var/run/libvirt/virtnetworkd-sock-ro': No such file or directory
Oct  9 10:10:44 compute-1 virtqemud[162526]: Failed to connect socket to '/var/run/libvirt/virtnwfilterd-sock-ro': No such file or directory
Oct  9 10:10:44 compute-1 virtqemud[162526]: Failed to connect socket to '/var/run/libvirt/virtstoraged-sock-ro': No such file or directory
Oct  9 10:10:44 compute-1 ceph-mds[14063]: mds.cephfs.compute-1.svghvn asok_command: cache status {prefix=cache status} (starting...)
Oct  9 10:10:44 compute-1 ceph-mds[14063]: mds.cephfs.compute-1.svghvn Can't run that command on an inactive MDS!
Oct  9 10:10:44 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:10:44 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:10:44 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:10:44.978 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:10:45 compute-1 lvm[185158]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Oct  9 10:10:45 compute-1 lvm[185158]: VG ceph_vg0 finished
Oct  9 10:10:45 compute-1 ceph-mds[14063]: mds.cephfs.compute-1.svghvn asok_command: client ls {prefix=client ls} (starting...)
Oct  9 10:10:45 compute-1 ceph-mds[14063]: mds.cephfs.compute-1.svghvn Can't run that command on an inactive MDS!
Oct  9 10:10:45 compute-1 ceph-mon[9795]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "report"} v 0)
Oct  9 10:10:45 compute-1 ceph-mon[9795]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2093065470' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Oct  9 10:10:45 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:10:45 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:10:45 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:10:45.676 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:10:45 compute-1 ceph-mds[14063]: mds.cephfs.compute-1.svghvn asok_command: damage ls {prefix=damage ls} (starting...)
Oct  9 10:10:45 compute-1 ceph-mds[14063]: mds.cephfs.compute-1.svghvn Can't run that command on an inactive MDS!
Oct  9 10:10:45 compute-1 nova_compute[162974]: 2025-10-09 10:10:45.696 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:10:45 compute-1 ceph-mon[9795]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "report"} v 0)
Oct  9 10:10:45 compute-1 ceph-mon[9795]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3532518201' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Oct  9 10:10:45 compute-1 ceph-mds[14063]: mds.cephfs.compute-1.svghvn asok_command: dump loads {prefix=dump loads} (starting...)
Oct  9 10:10:45 compute-1 ceph-mds[14063]: mds.cephfs.compute-1.svghvn Can't run that command on an inactive MDS!
Oct  9 10:10:45 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 10:10:45 compute-1 ceph-mds[14063]: mds.cephfs.compute-1.svghvn asok_command: dump tree {prefix=dump tree,root=/} (starting...)
Oct  9 10:10:45 compute-1 ceph-mds[14063]: mds.cephfs.compute-1.svghvn Can't run that command on an inactive MDS!
Oct  9 10:10:46 compute-1 ceph-mds[14063]: mds.cephfs.compute-1.svghvn asok_command: dump_blocked_ops {prefix=dump_blocked_ops} (starting...)
Oct  9 10:10:46 compute-1 ceph-mds[14063]: mds.cephfs.compute-1.svghvn Can't run that command on an inactive MDS!
Oct  9 10:10:46 compute-1 ceph-mon[9795]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct  9 10:10:46 compute-1 ceph-mon[9795]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3048919801' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  9 10:10:46 compute-1 ceph-mds[14063]: mds.cephfs.compute-1.svghvn asok_command: dump_historic_ops {prefix=dump_historic_ops} (starting...)
Oct  9 10:10:46 compute-1 ceph-mds[14063]: mds.cephfs.compute-1.svghvn Can't run that command on an inactive MDS!
Oct  9 10:10:46 compute-1 ceph-mds[14063]: mds.cephfs.compute-1.svghvn asok_command: dump_historic_ops_by_duration {prefix=dump_historic_ops_by_duration} (starting...)
Oct  9 10:10:46 compute-1 ceph-mds[14063]: mds.cephfs.compute-1.svghvn Can't run that command on an inactive MDS!
Oct  9 10:10:46 compute-1 ceph-mds[14063]: mds.cephfs.compute-1.svghvn asok_command: dump_ops_in_flight {prefix=dump_ops_in_flight} (starting...)
Oct  9 10:10:46 compute-1 ceph-mds[14063]: mds.cephfs.compute-1.svghvn Can't run that command on an inactive MDS!
Oct  9 10:10:46 compute-1 ceph-mds[14063]: mds.cephfs.compute-1.svghvn asok_command: get subtrees {prefix=get subtrees} (starting...)
Oct  9 10:10:46 compute-1 ceph-mds[14063]: mds.cephfs.compute-1.svghvn Can't run that command on an inactive MDS!
Oct  9 10:10:46 compute-1 ceph-mon[9795]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "config-key dump"} v 0)
Oct  9 10:10:46 compute-1 ceph-mon[9795]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2239107020' entity='client.admin' cmd=[{"prefix": "config-key dump"}]: dispatch
Oct  9 10:10:46 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:10:46 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:10:46 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:10:46.980 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:10:47 compute-1 ceph-mds[14063]: mds.cephfs.compute-1.svghvn asok_command: ops {prefix=ops} (starting...)
Oct  9 10:10:47 compute-1 ceph-mds[14063]: mds.cephfs.compute-1.svghvn Can't run that command on an inactive MDS!
Oct  9 10:10:47 compute-1 ceph-mon[9795]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "log last", "channel": "cephadm"} v 0)
Oct  9 10:10:47 compute-1 ceph-mon[9795]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2825927109' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm"}]: dispatch
Oct  9 10:10:47 compute-1 ceph-mon[9795]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr dump"} v 0)
Oct  9 10:10:47 compute-1 ceph-mon[9795]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3791647089' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Oct  9 10:10:47 compute-1 ceph-mds[14063]: mds.cephfs.compute-1.svghvn asok_command: session ls {prefix=session ls} (starting...)
Oct  9 10:10:47 compute-1 ceph-mds[14063]: mds.cephfs.compute-1.svghvn Can't run that command on an inactive MDS!
Oct  9 10:10:47 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:10:47 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:10:47 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:10:47.678 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:10:47 compute-1 ceph-mds[14063]: mds.cephfs.compute-1.svghvn asok_command: status {prefix=status} (starting...)
Oct  9 10:10:47 compute-1 ceph-mon[9795]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "features"} v 0)
Oct  9 10:10:47 compute-1 ceph-mon[9795]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1639166459' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Oct  9 10:10:47 compute-1 nova_compute[162974]: 2025-10-09 10:10:47.978 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:10:48 compute-1 ceph-mon[9795]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "health", "detail": "detail"} v 0)
Oct  9 10:10:48 compute-1 ceph-mon[9795]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/4283214122' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch
Oct  9 10:10:48 compute-1 ceph-mon[9795]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr module ls"} v 0)
Oct  9 10:10:48 compute-1 ceph-mon[9795]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/838622420' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Oct  9 10:10:48 compute-1 ceph-mon[9795]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "health", "detail": "detail"} v 0)
Oct  9 10:10:48 compute-1 ceph-mon[9795]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/4281246886' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch
Oct  9 10:10:48 compute-1 ceph-mon[9795]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr services"} v 0)
Oct  9 10:10:48 compute-1 ceph-mon[9795]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/4117270880' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Oct  9 10:10:48 compute-1 ceph-mon[9795]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr stat"} v 0)
Oct  9 10:10:48 compute-1 ceph-mon[9795]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/45453420' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
Oct  9 10:10:48 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:10:48 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:10:48 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.102 - anonymous [09/Oct/2025:10:10:48.982 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:10:49 compute-1 ceph-mon[9795]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"} v 0)
Oct  9 10:10:49 compute-1 ceph-mon[9795]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/717780337' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"}]: dispatch
Oct  9 10:10:49 compute-1 ceph-mon[9795]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr versions"} v 0)
Oct  9 10:10:49 compute-1 ceph-mon[9795]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/489993274' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Oct  9 10:10:49 compute-1 ceph-mon[9795]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"} v 0)
Oct  9 10:10:49 compute-1 ceph-mon[9795]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2208863461' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"}]: dispatch
Oct  9 10:10:49 compute-1 radosgw[13231]: ====== starting new request req=0x7ff2223805d0 =====
Oct  9 10:10:49 compute-1 radosgw[13231]: ====== req done req=0x7ff2223805d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:10:49 compute-1 radosgw[13231]: beast: 0x7ff2223805d0: 192.168.122.100 - anonymous [09/Oct/2025:10:10:49.679 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:10:49 compute-1 ceph-mon[9795]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr dump"} v 0)
Oct  9 10:10:49 compute-1 ceph-mon[9795]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2519403560' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Oct  9 10:10:49 compute-1 ceph-mon[9795]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr dump"} v 0)
Oct  9 10:10:49 compute-1 ceph-mon[9795]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3875743926' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Oct  9 10:10:50 compute-1 ceph-mon[9795]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr metadata"} v 0)
Oct  9 10:10:50 compute-1 ceph-mon[9795]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3632590972' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Oct  9 10:10:50 compute-1 ceph-mon[9795]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr module ls"} v 0)
Oct  9 10:10:50 compute-1 ceph-mon[9795]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/319661387' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Oct  9 10:10:50 compute-1 podman[186089]: 2025-10-09 10:10:50.564006848 +0000 UTC m=+0.076462530 container health_status 5e1150c93314d5b38815fc8130e03b03cc91771cd442ce724d63cd33a7373f75 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS)
Oct  9 10:10:50 compute-1 podman[186090]: 2025-10-09 10:10:50.584246857 +0000 UTC m=+0.089336792 container health_status a9976f6a2312783f72012e1ec9b07f14297aa62297711f47c0d257996be3427a (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team)
Oct  9 10:10:50 compute-1 nova_compute[162974]: 2025-10-09 10:10:50.699 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 22 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:10:50 compute-1 ceph-osd[7514]: osd.0 122 handle_osd_map epochs [123,123], i have 122, src has [1,123]
Oct  9 10:10:50 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 123 pg[10.1b( v 40'1059 (0'0,40'1059] local-lis/les=84/85 n=5 ec=53/34 lis/c=84/84 les/c/f=85/85/0 sis=122) [1] r=-1 lpr=122 pi=[84,122)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 0.437780 3 0.000259
Oct  9 10:10:50 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 123 pg[10.1b( v 40'1059 (0'0,40'1059] local-lis/les=84/85 n=5 ec=53/34 lis/c=84/84 les/c/f=85/85/0 sis=122) [1] r=-1 lpr=122 pi=[84,122)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 0.437976 0 0.000000
Oct  9 10:10:50 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 123 pg[10.1b( v 40'1059 (0'0,40'1059] local-lis/les=84/85 n=5 ec=53/34 lis/c=84/84 les/c/f=85/85/0 sis=122) [1] r=-1 lpr=122 pi=[84,122)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY mbc={}] enter Reset
Oct  9 10:10:50 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 123 pg[10.1b( v 40'1059 (0'0,40'1059] local-lis/les=84/85 n=5 ec=53/34 lis/c=84/84 les/c/f=85/85/0 sis=123) [1]/[0] r=0 lpr=123 pi=[84,123)/1 crt=40'1059 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 10:10:50 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 123 pg[10.1b( v 40'1059 (0'0,40'1059] local-lis/les=84/85 n=5 ec=53/34 lis/c=84/84 les/c/f=85/85/0 sis=123) [1]/[0] r=0 lpr=123 pi=[84,123)/1 crt=40'1059 mlcod 0'0 remapped mbc={}] exit Reset 0.000065 1 0.000114
Oct  9 10:10:50 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 123 pg[10.1b( v 40'1059 (0'0,40'1059] local-lis/les=84/85 n=5 ec=53/34 lis/c=84/84 les/c/f=85/85/0 sis=123) [1]/[0] r=0 lpr=123 pi=[84,123)/1 crt=40'1059 mlcod 0'0 remapped mbc={}] enter Started
Oct  9 10:10:50 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 123 pg[10.1b( v 40'1059 (0'0,40'1059] local-lis/les=84/85 n=5 ec=53/34 lis/c=84/84 les/c/f=85/85/0 sis=123) [1]/[0] r=0 lpr=123 pi=[84,123)/1 crt=40'1059 mlcod 0'0 remapped mbc={}] enter Start
Oct  9 10:10:50 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 123 pg[10.1b( v 40'1059 (0'0,40'1059] local-lis/les=84/85 n=5 ec=53/34 lis/c=84/84 les/c/f=85/85/0 sis=123) [1]/[0] r=0 lpr=123 pi=[84,123)/1 crt=40'1059 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct  9 10:10:50 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 123 pg[10.1b( v 40'1059 (0'0,40'1059] local-lis/les=84/85 n=5 ec=53/34 lis/c=84/84 les/c/f=85/85/0 sis=123) [1]/[0] r=0 lpr=123 pi=[84,123)/1 crt=40'1059 mlcod 0'0 remapped mbc={}] exit Start 0.000005 0 0.000000
Oct  9 10:10:50 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 123 pg[10.1b( v 40'1059 (0'0,40'1059] local-lis/les=84/85 n=5 ec=53/34 lis/c=84/84 les/c/f=85/85/0 sis=123) [1]/[0] r=0 lpr=123 pi=[84,123)/1 crt=40'1059 mlcod 0'0 remapped mbc={}] enter Started/Primary
Oct  9 10:10:50 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 123 pg[10.1b( v 40'1059 (0'0,40'1059] local-lis/les=84/85 n=5 ec=53/34 lis/c=84/84 les/c/f=85/85/0 sis=123) [1]/[0] r=0 lpr=123 pi=[84,123)/1 crt=40'1059 mlcod 0'0 remapped mbc={}] enter Started/Primary/Peering
Oct  9 10:10:50 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 123 pg[10.1b( v 40'1059 (0'0,40'1059] local-lis/les=84/85 n=5 ec=53/34 lis/c=84/84 les/c/f=85/85/0 sis=123) [1]/[0] r=0 lpr=123 pi=[84,123)/1 crt=40'1059 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetInfo
Oct  9 10:10:50 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 123 pg[10.1b( v 40'1059 (0'0,40'1059] local-lis/les=84/85 n=5 ec=53/34 lis/c=84/84 les/c/f=85/85/0 sis=123) [1]/[0] r=0 lpr=123 pi=[84,123)/1 crt=40'1059 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000032 1 0.000049
Oct  9 10:10:50 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 123 pg[10.1b( v 40'1059 (0'0,40'1059] local-lis/les=84/85 n=5 ec=53/34 lis/c=84/84 les/c/f=85/85/0 sis=123) [1]/[0] r=0 lpr=123 pi=[84,123)/1 crt=40'1059 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetLog
Oct  9 10:10:50 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 123 pg[10.1b( v 40'1059 (0'0,40'1059] local-lis/les=84/85 n=5 ec=53/34 lis/c=84/84 les/c/f=85/85/0 sis=123) [1]/[0] async=[1] r=0 lpr=123 pi=[84,123)/1 crt=40'1059 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetLog 0.000024 0 0.000000
Oct  9 10:10:50 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 123 pg[10.1b( v 40'1059 (0'0,40'1059] local-lis/les=84/85 n=5 ec=53/34 lis/c=84/84 les/c/f=85/85/0 sis=123) [1]/[0] async=[1] r=0 lpr=123 pi=[84,123)/1 crt=40'1059 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetMissing
Oct  9 10:10:50 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 123 pg[10.1b( v 40'1059 (0'0,40'1059] local-lis/les=84/85 n=5 ec=53/34 lis/c=84/84 les/c/f=85/85/0 sis=123) [1]/[0] async=[1] r=0 lpr=123 pi=[84,123)/1 crt=40'1059 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000004 0 0.000000
Oct  9 10:10:50 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 123 pg[10.1b( v 40'1059 (0'0,40'1059] local-lis/les=84/85 n=5 ec=53/34 lis/c=84/84 les/c/f=85/85/0 sis=123) [1]/[0] async=[1] r=0 lpr=123 pi=[84,123)/1 crt=40'1059 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Oct  9 10:10:50 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 6.5 scrub starts
Oct  9 10:10:50 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 6.5 scrub ok
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81379328 unmapped: 4562944 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: osd.0 123 handle_osd_map epochs [123,124], i have 123, src has [1,124]
Oct  9 10:10:50 compute-1 ceph-osd[7514]: osd.0 124 handle_osd_map epochs [123,124], i have 124, src has [1,124]
Oct  9 10:10:50 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 124 pg[10.1b( v 40'1059 (0'0,40'1059] local-lis/les=84/85 n=5 ec=53/34 lis/c=84/84 les/c/f=85/85/0 sis=123) [1]/[0] async=[1] r=0 lpr=123 pi=[84,123)/1 crt=40'1059 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.000506 4 0.000053
Oct  9 10:10:50 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 124 pg[10.1b( v 40'1059 (0'0,40'1059] local-lis/les=84/85 n=5 ec=53/34 lis/c=84/84 les/c/f=85/85/0 sis=123) [1]/[0] async=[1] r=0 lpr=123 pi=[84,123)/1 crt=40'1059 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering 1.000616 0 0.000000
Oct  9 10:10:50 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 124 pg[10.1b( v 40'1059 (0'0,40'1059] local-lis/les=84/85 n=5 ec=53/34 lis/c=84/84 les/c/f=85/85/0 sis=123) [1]/[0] async=[1] r=0 lpr=123 pi=[84,123)/1 crt=40'1059 mlcod 0'0 remapped mbc={}] enter Started/Primary/Active
Oct  9 10:10:50 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 124 pg[10.1b( v 40'1059 (0'0,40'1059] local-lis/les=123/124 n=5 ec=53/34 lis/c=84/84 les/c/f=85/85/0 sis=123) [1]/[0] async=[1] r=0 lpr=123 pi=[84,123)/1 crt=40'1059 mlcod 0'0 activating+remapped mbc={255={(0+1)=2}}] enter Started/Primary/Active/Activating
Oct  9 10:10:50 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 6.2 scrub starts
Oct  9 10:10:50 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 6.2 scrub ok
Oct  9 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81387520 unmapped: 4554752 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 920632 data_alloc: 218103808 data_used: 282624
Oct  9 10:10:50 compute-1 ceph-osd[7514]: osd.0 124 handle_osd_map epochs [124,124], i have 124, src has [1,124]
Oct  9 10:10:50 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 124 pg[10.1b( v 40'1059 (0'0,40'1059] local-lis/les=123/124 n=5 ec=53/34 lis/c=84/84 les/c/f=85/85/0 sis=123) [1]/[0] async=[1] r=0 lpr=123 pi=[84,123)/1 crt=40'1059 mlcod 0'0 active+remapped mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 10:10:50 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 124 pg[10.1b( v 40'1059 (0'0,40'1059] local-lis/les=123/124 n=5 ec=53/34 lis/c=123/84 les/c/f=124/85/0 sis=123) [1]/[0] async=[1] r=0 lpr=123 pi=[84,123)/1 crt=40'1059 mlcod 0'0 active+remapped mbc={255={(0+1)=2}}] exit Started/Primary/Active/Activating 0.929512 5 0.000230
Oct  9 10:10:50 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 124 pg[10.1b( v 40'1059 (0'0,40'1059] local-lis/les=123/124 n=5 ec=53/34 lis/c=123/84 les/c/f=124/85/0 sis=123) [1]/[0] async=[1] r=0 lpr=123 pi=[84,123)/1 crt=40'1059 mlcod 0'0 active+remapped mbc={255={(0+1)=2}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Oct  9 10:10:50 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 124 pg[10.1b( v 40'1059 (0'0,40'1059] local-lis/les=123/124 n=5 ec=53/34 lis/c=123/84 les/c/f=124/85/0 sis=123) [1]/[0] async=[1] r=0 lpr=123 pi=[84,123)/1 crt=40'1059 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=2}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 0.000052 1 0.000056
Oct  9 10:10:50 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 124 pg[10.1b( v 40'1059 (0'0,40'1059] local-lis/les=123/124 n=5 ec=53/34 lis/c=123/84 les/c/f=124/85/0 sis=123) [1]/[0] async=[1] r=0 lpr=123 pi=[84,123)/1 crt=40'1059 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=2}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Oct  9 10:10:50 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 124 pg[10.1b( v 40'1059 (0'0,40'1059] local-lis/les=123/124 n=5 ec=53/34 lis/c=123/84 les/c/f=124/85/0 sis=123) [1]/[0] async=[1] r=0 lpr=123 pi=[84,123)/1 crt=40'1059 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=2}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.000620 1 0.000022
Oct  9 10:10:50 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 124 pg[10.1b( v 40'1059 (0'0,40'1059] local-lis/les=123/124 n=5 ec=53/34 lis/c=123/84 les/c/f=124/85/0 sis=123) [1]/[0] async=[1] r=0 lpr=123 pi=[84,123)/1 crt=40'1059 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=2}}] enter Started/Primary/Active/Recovering
Oct  9 10:10:50 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 124 pg[10.1b( v 40'1059 (0'0,40'1059] local-lis/les=123/124 n=5 ec=53/34 lis/c=123/84 les/c/f=124/85/0 sis=123) [1]/[0] async=[1] r=0 lpr=123 pi=[84,123)/1 crt=40'1059 mlcod 40'1059 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovering 0.014229 2 0.000090
Oct  9 10:10:50 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 124 pg[10.1b( v 40'1059 (0'0,40'1059] local-lis/les=123/124 n=5 ec=53/34 lis/c=123/84 les/c/f=124/85/0 sis=123) [1]/[0] async=[1] r=0 lpr=123 pi=[84,123)/1 crt=40'1059 mlcod 40'1059 active+remapped mbc={255={}}] enter Started/Primary/Active/Recovered
Oct  9 10:10:50 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 6.a scrub starts
Oct  9 10:10:50 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 6.a scrub ok
Oct  9 10:10:50 compute-1 ceph-osd[7514]: osd.0 124 handle_osd_map epochs [125,125], i have 124, src has [1,125]
Oct  9 10:10:50 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 125 pg[10.1b( v 40'1059 (0'0,40'1059] local-lis/les=123/124 n=5 ec=53/34 lis/c=123/84 les/c/f=124/85/0 sis=123) [1]/[0] async=[1] r=0 lpr=123 pi=[84,123)/1 crt=40'1059 mlcod 40'1059 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovered 0.254469 1 0.000142
Oct  9 10:10:50 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 125 pg[10.1b( v 40'1059 (0'0,40'1059] local-lis/les=123/124 n=5 ec=53/34 lis/c=123/84 les/c/f=124/85/0 sis=123) [1]/[0] async=[1] r=0 lpr=123 pi=[84,123)/1 crt=40'1059 mlcod 40'1059 active+remapped mbc={255={}}] exit Started/Primary/Active 1.199120 0 0.000000
Oct  9 10:10:50 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 125 pg[10.1b( v 40'1059 (0'0,40'1059] local-lis/les=123/124 n=5 ec=53/34 lis/c=123/84 les/c/f=124/85/0 sis=123) [1]/[0] async=[1] r=0 lpr=123 pi=[84,123)/1 crt=40'1059 mlcod 40'1059 active+remapped mbc={255={}}] exit Started/Primary 2.199756 0 0.000000
Oct  9 10:10:50 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 125 pg[10.1b( v 40'1059 (0'0,40'1059] local-lis/les=123/124 n=5 ec=53/34 lis/c=123/84 les/c/f=124/85/0 sis=123) [1]/[0] async=[1] r=0 lpr=123 pi=[84,123)/1 crt=40'1059 mlcod 40'1059 active+remapped mbc={255={}}] exit Started 2.199779 0 0.000000
Oct  9 10:10:50 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 125 pg[10.1b( v 40'1059 (0'0,40'1059] local-lis/les=123/124 n=5 ec=53/34 lis/c=123/84 les/c/f=124/85/0 sis=123) [1]/[0] async=[1] r=0 lpr=123 pi=[84,123)/1 crt=40'1059 mlcod 40'1059 active+remapped mbc={255={}}] enter Reset
Oct  9 10:10:50 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 125 pg[10.1b( v 40'1059 (0'0,40'1059] local-lis/les=123/124 n=5 ec=53/34 lis/c=123/84 les/c/f=124/85/0 sis=125 pruub=15.730270386s) [1] async=[1] r=-1 lpr=125 pi=[84,125)/1 crt=40'1059 mlcod 40'1059 active pruub 302.409027100s@ mbc={255={}}] PeeringState::start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 10:10:50 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 125 pg[10.1b( v 40'1059 (0'0,40'1059] local-lis/les=123/124 n=5 ec=53/34 lis/c=123/84 les/c/f=124/85/0 sis=125 pruub=15.730219841s) [1] r=-1 lpr=125 pi=[84,125)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 302.409027100s@ mbc={}] exit Reset 0.000089 1 0.000138
Oct  9 10:10:50 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 125 pg[10.1b( v 40'1059 (0'0,40'1059] local-lis/les=123/124 n=5 ec=53/34 lis/c=123/84 les/c/f=124/85/0 sis=125 pruub=15.730219841s) [1] r=-1 lpr=125 pi=[84,125)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 302.409027100s@ mbc={}] enter Started
Oct  9 10:10:50 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 125 pg[10.1b( v 40'1059 (0'0,40'1059] local-lis/les=123/124 n=5 ec=53/34 lis/c=123/84 les/c/f=124/85/0 sis=125 pruub=15.730219841s) [1] r=-1 lpr=125 pi=[84,125)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 302.409027100s@ mbc={}] enter Start
Oct  9 10:10:50 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 125 pg[10.1b( v 40'1059 (0'0,40'1059] local-lis/les=123/124 n=5 ec=53/34 lis/c=123/84 les/c/f=124/85/0 sis=125 pruub=15.730219841s) [1] r=-1 lpr=125 pi=[84,125)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 302.409027100s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 10:10:50 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 125 pg[10.1b( v 40'1059 (0'0,40'1059] local-lis/les=123/124 n=5 ec=53/34 lis/c=123/84 les/c/f=124/85/0 sis=125 pruub=15.730219841s) [1] r=-1 lpr=125 pi=[84,125)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 302.409027100s@ mbc={}] exit Start 0.000006 0 0.000000
Oct  9 10:10:50 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 125 pg[10.1b( v 40'1059 (0'0,40'1059] local-lis/les=123/124 n=5 ec=53/34 lis/c=123/84 les/c/f=124/85/0 sis=125 pruub=15.730219841s) [1] r=-1 lpr=125 pi=[84,125)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 302.409027100s@ mbc={}] enter Started/Stray
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81420288 unmapped: 4521984 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: osd.0 125 handle_osd_map epochs [125,125], i have 125, src has [1,125]
Oct  9 10:10:50 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 6.3 scrub starts
Oct  9 10:10:50 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 6.3 scrub ok
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81420288 unmapped: 4521984 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 10.1a scrub starts
Oct  9 10:10:50 compute-1 ceph-osd[7514]: osd.0 125 handle_osd_map epochs [126,126], i have 125, src has [1,126]
Oct  9 10:10:50 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 126 pg[10.1b( v 40'1059 (0'0,40'1059] local-lis/les=123/124 n=5 ec=53/34 lis/c=123/84 les/c/f=124/85/0 sis=125) [1] r=-1 lpr=125 pi=[84,125)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.861375 6 0.000071
Oct  9 10:10:50 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 126 pg[10.1b( v 40'1059 (0'0,40'1059] local-lis/les=123/124 n=5 ec=53/34 lis/c=123/84 les/c/f=124/85/0 sis=125) [1] r=-1 lpr=125 pi=[84,125)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Oct  9 10:10:50 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 126 pg[10.1b( v 40'1059 (0'0,40'1059] local-lis/les=123/124 n=5 ec=53/34 lis/c=123/84 les/c/f=124/85/0 sis=125) [1] r=-1 lpr=125 pi=[84,125)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Oct  9 10:10:50 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 126 pg[10.1b( v 40'1059 (0'0,40'1059] local-lis/les=123/124 n=5 ec=53/34 lis/c=123/84 les/c/f=124/85/0 sis=125) [1] r=-1 lpr=125 pi=[84,125)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000940 2 0.000043
Oct  9 10:10:50 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 126 pg[10.1b( v 40'1059 (0'0,40'1059] local-lis/les=123/124 n=5 ec=53/34 lis/c=123/84 les/c/f=124/85/0 sis=125) [1] r=-1 lpr=125 pi=[84,125)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Oct  9 10:10:50 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 126 pg[10.1b( v 40'1059 (0'0,40'1059] lb MIN local-lis/les=123/124 n=5 ec=53/34 lis/c=123/84 les/c/f=124/85/0 sis=125) [1] r=-1 lpr=125 DELETING pi=[84,125)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.039841 2 0.000114
Oct  9 10:10:50 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 126 pg[10.1b( v 40'1059 (0'0,40'1059] lb MIN local-lis/les=123/124 n=5 ec=53/34 lis/c=123/84 les/c/f=124/85/0 sis=125) [1] r=-1 lpr=125 pi=[84,125)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.040842 0 0.000000
Oct  9 10:10:50 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 126 pg[10.1b( v 40'1059 (0'0,40'1059] lb MIN local-lis/les=123/124 n=5 ec=53/34 lis/c=123/84 les/c/f=124/85/0 sis=125) [1] r=-1 lpr=125 pi=[84,125)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.902274 0 0.000000
Oct  9 10:10:50 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 10.1a scrub ok
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81428480 unmapped: 4513792 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 10.1d scrub starts
Oct  9 10:10:50 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 10.1d scrub ok
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81428480 unmapped: 4513792 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: osd.0 126 heartbeat osd_stat(store_statfs(0x4fc60f000/0x0/0x4ffc00000, data 0x154aa9/0x1fb000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct  9 10:10:50 compute-1 ceph-osd[7514]: osd.0 126 handle_osd_map epochs [127,127], i have 126, src has [1,127]
Oct  9 10:10:50 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 127 pg[10.1e( v 40'1059 (0'0,40'1059] local-lis/les=70/71 n=5 ec=53/34 lis/c=70/70 les/c/f=71/71/0 sis=70) [0] r=0 lpr=70 crt=40'1059 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 77.464063 170 0.000532
Oct  9 10:10:50 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 127 pg[10.1e( v 40'1059 (0'0,40'1059] local-lis/les=70/71 n=5 ec=53/34 lis/c=70/70 les/c/f=71/71/0 sis=70) [0] r=0 lpr=70 crt=40'1059 mlcod 0'0 active mbc={}] exit Started/Primary/Active 77.465815 0 0.000000
Oct  9 10:10:50 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 127 pg[10.1e( v 40'1059 (0'0,40'1059] local-lis/les=70/71 n=5 ec=53/34 lis/c=70/70 les/c/f=71/71/0 sis=70) [0] r=0 lpr=70 crt=40'1059 mlcod 0'0 active mbc={}] exit Started/Primary 78.470504 0 0.000000
Oct  9 10:10:50 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 127 pg[10.1e( v 40'1059 (0'0,40'1059] local-lis/les=70/71 n=5 ec=53/34 lis/c=70/70 les/c/f=71/71/0 sis=70) [0] r=0 lpr=70 crt=40'1059 mlcod 0'0 active mbc={}] exit Started 78.470528 0 0.000000
Oct  9 10:10:50 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 127 pg[10.1e( v 40'1059 (0'0,40'1059] local-lis/les=70/71 n=5 ec=53/34 lis/c=70/70 les/c/f=71/71/0 sis=70) [0] r=0 lpr=70 crt=40'1059 mlcod 0'0 active mbc={}] enter Reset
Oct  9 10:10:50 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 127 pg[10.1e( v 40'1059 (0'0,40'1059] local-lis/les=70/71 n=5 ec=53/34 lis/c=70/70 les/c/f=71/71/0 sis=127 pruub=10.538806915s) [2] r=-1 lpr=127 pi=[70,127)/1 crt=40'1059 mlcod 0'0 active pruub 300.480133057s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 10:10:50 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 127 pg[10.1e( v 40'1059 (0'0,40'1059] local-lis/les=70/71 n=5 ec=53/34 lis/c=70/70 les/c/f=71/71/0 sis=127 pruub=10.538764954s) [2] r=-1 lpr=127 pi=[70,127)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 300.480133057s@ mbc={}] exit Reset 0.000078 1 0.000123
Oct  9 10:10:50 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 127 pg[10.1e( v 40'1059 (0'0,40'1059] local-lis/les=70/71 n=5 ec=53/34 lis/c=70/70 les/c/f=71/71/0 sis=127 pruub=10.538764954s) [2] r=-1 lpr=127 pi=[70,127)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 300.480133057s@ mbc={}] enter Started
Oct  9 10:10:50 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 127 pg[10.1e( v 40'1059 (0'0,40'1059] local-lis/les=70/71 n=5 ec=53/34 lis/c=70/70 les/c/f=71/71/0 sis=127 pruub=10.538764954s) [2] r=-1 lpr=127 pi=[70,127)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 300.480133057s@ mbc={}] enter Start
Oct  9 10:10:50 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 127 pg[10.1e( v 40'1059 (0'0,40'1059] local-lis/les=70/71 n=5 ec=53/34 lis/c=70/70 les/c/f=71/71/0 sis=127 pruub=10.538764954s) [2] r=-1 lpr=127 pi=[70,127)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 300.480133057s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 10:10:50 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 127 pg[10.1e( v 40'1059 (0'0,40'1059] local-lis/les=70/71 n=5 ec=53/34 lis/c=70/70 les/c/f=71/71/0 sis=127 pruub=10.538764954s) [2] r=-1 lpr=127 pi=[70,127)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 300.480133057s@ mbc={}] exit Start 0.000006 0 0.000000
Oct  9 10:10:50 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 127 pg[10.1e( v 40'1059 (0'0,40'1059] local-lis/les=70/71 n=5 ec=53/34 lis/c=70/70 les/c/f=71/71/0 sis=127 pruub=10.538764954s) [2] r=-1 lpr=127 pi=[70,127)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 300.480133057s@ mbc={}] enter Started/Stray
Oct  9 10:10:50 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 10.9 scrub starts
Oct  9 10:10:50 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 10.9 scrub ok
Oct  9 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81436672 unmapped: 4505600 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 929125 data_alloc: 218103808 data_used: 282624
Oct  9 10:10:50 compute-1 ceph-osd[7514]: osd.0 127 handle_osd_map epochs [128,128], i have 127, src has [1,128]
Oct  9 10:10:50 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 128 pg[10.1e( v 40'1059 (0'0,40'1059] local-lis/les=70/71 n=5 ec=53/34 lis/c=70/70 les/c/f=71/71/0 sis=127) [2] r=-1 lpr=127 pi=[70,127)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 0.770187 3 0.000226
Oct  9 10:10:50 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 128 pg[10.1e( v 40'1059 (0'0,40'1059] local-lis/les=70/71 n=5 ec=53/34 lis/c=70/70 les/c/f=71/71/0 sis=127) [2] r=-1 lpr=127 pi=[70,127)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 0.770216 0 0.000000
Oct  9 10:10:50 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 128 pg[10.1e( v 40'1059 (0'0,40'1059] local-lis/les=70/71 n=5 ec=53/34 lis/c=70/70 les/c/f=71/71/0 sis=127) [2] r=-1 lpr=127 pi=[70,127)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY mbc={}] enter Reset
Oct  9 10:10:50 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 128 pg[10.1e( v 40'1059 (0'0,40'1059] local-lis/les=70/71 n=5 ec=53/34 lis/c=70/70 les/c/f=71/71/0 sis=128) [2]/[0] r=0 lpr=128 pi=[70,128)/1 crt=40'1059 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 10:10:50 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 128 pg[10.1e( v 40'1059 (0'0,40'1059] local-lis/les=70/71 n=5 ec=53/34 lis/c=70/70 les/c/f=71/71/0 sis=128) [2]/[0] r=0 lpr=128 pi=[70,128)/1 crt=40'1059 mlcod 0'0 remapped mbc={}] exit Reset 0.000058 1 0.000081
Oct  9 10:10:50 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 128 pg[10.1e( v 40'1059 (0'0,40'1059] local-lis/les=70/71 n=5 ec=53/34 lis/c=70/70 les/c/f=71/71/0 sis=128) [2]/[0] r=0 lpr=128 pi=[70,128)/1 crt=40'1059 mlcod 0'0 remapped mbc={}] enter Started
Oct  9 10:10:50 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 128 pg[10.1e( v 40'1059 (0'0,40'1059] local-lis/les=70/71 n=5 ec=53/34 lis/c=70/70 les/c/f=71/71/0 sis=128) [2]/[0] r=0 lpr=128 pi=[70,128)/1 crt=40'1059 mlcod 0'0 remapped mbc={}] enter Start
Oct  9 10:10:50 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 128 pg[10.1e( v 40'1059 (0'0,40'1059] local-lis/les=70/71 n=5 ec=53/34 lis/c=70/70 les/c/f=71/71/0 sis=128) [2]/[0] r=0 lpr=128 pi=[70,128)/1 crt=40'1059 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct  9 10:10:50 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 128 pg[10.1e( v 40'1059 (0'0,40'1059] local-lis/les=70/71 n=5 ec=53/34 lis/c=70/70 les/c/f=71/71/0 sis=128) [2]/[0] r=0 lpr=128 pi=[70,128)/1 crt=40'1059 mlcod 0'0 remapped mbc={}] exit Start 0.000005 0 0.000000
Oct  9 10:10:50 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 128 pg[10.1e( v 40'1059 (0'0,40'1059] local-lis/les=70/71 n=5 ec=53/34 lis/c=70/70 les/c/f=71/71/0 sis=128) [2]/[0] r=0 lpr=128 pi=[70,128)/1 crt=40'1059 mlcod 0'0 remapped mbc={}] enter Started/Primary
Oct  9 10:10:50 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 128 pg[10.1e( v 40'1059 (0'0,40'1059] local-lis/les=70/71 n=5 ec=53/34 lis/c=70/70 les/c/f=71/71/0 sis=128) [2]/[0] r=0 lpr=128 pi=[70,128)/1 crt=40'1059 mlcod 0'0 remapped mbc={}] enter Started/Primary/Peering
Oct  9 10:10:50 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 128 pg[10.1e( v 40'1059 (0'0,40'1059] local-lis/les=70/71 n=5 ec=53/34 lis/c=70/70 les/c/f=71/71/0 sis=128) [2]/[0] r=0 lpr=128 pi=[70,128)/1 crt=40'1059 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetInfo
Oct  9 10:10:50 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 128 pg[10.1e( v 40'1059 (0'0,40'1059] local-lis/les=70/71 n=5 ec=53/34 lis/c=70/70 les/c/f=71/71/0 sis=128) [2]/[0] r=0 lpr=128 pi=[70,128)/1 crt=40'1059 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000030 1 0.000035
Oct  9 10:10:50 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 128 pg[10.1e( v 40'1059 (0'0,40'1059] local-lis/les=70/71 n=5 ec=53/34 lis/c=70/70 les/c/f=71/71/0 sis=128) [2]/[0] r=0 lpr=128 pi=[70,128)/1 crt=40'1059 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetLog
Oct  9 10:10:50 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 128 pg[10.1e( v 40'1059 (0'0,40'1059] local-lis/les=70/71 n=5 ec=53/34 lis/c=70/70 les/c/f=71/71/0 sis=128) [2]/[0] async=[2] r=0 lpr=128 pi=[70,128)/1 crt=40'1059 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetLog 0.000020 0 0.000000
Oct  9 10:10:50 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 128 pg[10.1e( v 40'1059 (0'0,40'1059] local-lis/les=70/71 n=5 ec=53/34 lis/c=70/70 les/c/f=71/71/0 sis=128) [2]/[0] async=[2] r=0 lpr=128 pi=[70,128)/1 crt=40'1059 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetMissing
Oct  9 10:10:50 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 128 pg[10.1e( v 40'1059 (0'0,40'1059] local-lis/les=70/71 n=5 ec=53/34 lis/c=70/70 les/c/f=71/71/0 sis=128) [2]/[0] async=[2] r=0 lpr=128 pi=[70,128)/1 crt=40'1059 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000004 0 0.000000
Oct  9 10:10:50 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 128 pg[10.1e( v 40'1059 (0'0,40'1059] local-lis/les=70/71 n=5 ec=53/34 lis/c=70/70 les/c/f=71/71/0 sis=128) [2]/[0] async=[2] r=0 lpr=128 pi=[70,128)/1 crt=40'1059 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Oct  9 10:10:50 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 10.c scrub starts
Oct  9 10:10:50 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 10.c scrub ok
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81444864 unmapped: 4497408 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: osd.0 128 handle_osd_map epochs [128,129], i have 128, src has [1,129]
Oct  9 10:10:50 compute-1 ceph-osd[7514]: osd.0 128 handle_osd_map epochs [128,129], i have 129, src has [1,129]
Oct  9 10:10:50 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 129 pg[10.1e( v 40'1059 (0'0,40'1059] local-lis/les=70/71 n=5 ec=53/34 lis/c=70/70 les/c/f=71/71/0 sis=128) [2]/[0] async=[2] r=0 lpr=128 pi=[70,128)/1 crt=40'1059 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.002967 4 0.000048
Oct  9 10:10:50 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 129 pg[10.1e( v 40'1059 (0'0,40'1059] local-lis/les=70/71 n=5 ec=53/34 lis/c=70/70 les/c/f=71/71/0 sis=128) [2]/[0] async=[2] r=0 lpr=128 pi=[70,128)/1 crt=40'1059 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering 1.003062 0 0.000000
Oct  9 10:10:50 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 129 pg[10.1e( v 40'1059 (0'0,40'1059] local-lis/les=70/71 n=5 ec=53/34 lis/c=70/70 les/c/f=71/71/0 sis=128) [2]/[0] async=[2] r=0 lpr=128 pi=[70,128)/1 crt=40'1059 mlcod 0'0 remapped mbc={}] enter Started/Primary/Active
Oct  9 10:10:50 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 129 pg[10.1e( v 40'1059 (0'0,40'1059] local-lis/les=128/129 n=5 ec=53/34 lis/c=70/70 les/c/f=71/71/0 sis=128) [2]/[0] async=[2] r=0 lpr=128 pi=[70,128)/1 crt=40'1059 mlcod 0'0 activating+remapped mbc={255={(0+1)=5}}] enter Started/Primary/Active/Activating
Oct  9 10:10:50 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 10.6 deep-scrub starts
Oct  9 10:10:50 compute-1 ceph-osd[7514]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.287339211s of 10.341490746s, submitted: 63
Oct  9 10:10:50 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 10.6 deep-scrub ok
Oct  9 10:10:50 compute-1 ceph-osd[7514]: osd.0 129 handle_osd_map epochs [129,129], i have 129, src has [1,129]
Oct  9 10:10:50 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 129 pg[10.1f(unlocked)] enter Initial
Oct  9 10:10:50 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 129 pg[10.1f( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=97/97 les/c/f=98/98/0 sis=129) [0] r=0 lpr=0 pi=[97,129)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000038 0 0.000000
Oct  9 10:10:50 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 129 pg[10.1f( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=97/97 les/c/f=98/98/0 sis=129) [0] r=0 lpr=0 pi=[97,129)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Oct  9 10:10:50 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 129 pg[10.1f( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=97/97 les/c/f=98/98/0 sis=129) [0] r=0 lpr=129 pi=[97,129)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000010 1 0.000024
Oct  9 10:10:50 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 129 pg[10.1f( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=97/97 les/c/f=98/98/0 sis=129) [0] r=0 lpr=129 pi=[97,129)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Oct  9 10:10:50 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 129 pg[10.1f( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=97/97 les/c/f=98/98/0 sis=129) [0] r=0 lpr=129 pi=[97,129)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Oct  9 10:10:50 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 129 pg[10.1f( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=97/97 les/c/f=98/98/0 sis=129) [0] r=0 lpr=129 pi=[97,129)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 10:10:50 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 129 pg[10.1f( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=97/97 les/c/f=98/98/0 sis=129) [0] r=0 lpr=129 pi=[97,129)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000005 0 0.000000
Oct  9 10:10:50 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 129 pg[10.1f( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=97/97 les/c/f=98/98/0 sis=129) [0] r=0 lpr=129 pi=[97,129)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Oct  9 10:10:50 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 129 pg[10.1f( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=97/97 les/c/f=98/98/0 sis=129) [0] r=0 lpr=129 pi=[97,129)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Oct  9 10:10:50 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 129 pg[10.1f( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=97/97 les/c/f=98/98/0 sis=129) [0] r=0 lpr=129 pi=[97,129)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Oct  9 10:10:50 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 129 pg[10.1f( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=97/97 les/c/f=98/98/0 sis=129) [0] r=0 lpr=129 pi=[97,129)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000106 1 0.000033
Oct  9 10:10:50 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 129 pg[10.1f( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=97/97 les/c/f=98/98/0 sis=129) [0] r=0 lpr=129 pi=[97,129)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Oct  9 10:10:50 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 129 pg[10.1f( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=97/97 les/c/f=98/98/0 sis=129) [0] r=0 lpr=129 pi=[97,129)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000026 0 0.000000
Oct  9 10:10:50 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 129 pg[10.1f( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=97/97 les/c/f=98/98/0 sis=129) [0] r=0 lpr=129 pi=[97,129)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.000143 0 0.000000
Oct  9 10:10:50 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 129 pg[10.1f( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=97/97 les/c/f=98/98/0 sis=129) [0] r=0 lpr=129 pi=[97,129)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/WaitActingChange
Oct  9 10:10:50 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 129 pg[10.1e( v 40'1059 (0'0,40'1059] local-lis/les=128/129 n=5 ec=53/34 lis/c=70/70 les/c/f=71/71/0 sis=128) [2]/[0] async=[2] r=0 lpr=128 pi=[70,128)/1 crt=40'1059 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 10:10:50 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 129 pg[10.1e( v 40'1059 (0'0,40'1059] local-lis/les=128/129 n=5 ec=53/34 lis/c=128/70 les/c/f=129/71/0 sis=128) [2]/[0] async=[2] r=0 lpr=128 pi=[70,128)/1 crt=40'1059 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] exit Started/Primary/Active/Activating 0.917700 5 0.000274
Oct  9 10:10:50 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 129 pg[10.1e( v 40'1059 (0'0,40'1059] local-lis/les=128/129 n=5 ec=53/34 lis/c=128/70 les/c/f=129/71/0 sis=128) [2]/[0] async=[2] r=0 lpr=128 pi=[70,128)/1 crt=40'1059 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Oct  9 10:10:50 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 129 pg[10.1e( v 40'1059 (0'0,40'1059] local-lis/les=128/129 n=5 ec=53/34 lis/c=128/70 les/c/f=129/71/0 sis=128) [2]/[0] async=[2] r=0 lpr=128 pi=[70,128)/1 crt=40'1059 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=5}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 0.000080 1 0.000041
Oct  9 10:10:50 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 129 pg[10.1e( v 40'1059 (0'0,40'1059] local-lis/les=128/129 n=5 ec=53/34 lis/c=128/70 les/c/f=129/71/0 sis=128) [2]/[0] async=[2] r=0 lpr=128 pi=[70,128)/1 crt=40'1059 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=5}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Oct  9 10:10:50 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 129 pg[10.1e( v 40'1059 (0'0,40'1059] local-lis/les=128/129 n=5 ec=53/34 lis/c=128/70 les/c/f=129/71/0 sis=128) [2]/[0] async=[2] r=0 lpr=128 pi=[70,128)/1 crt=40'1059 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=5}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.000345 1 0.000023
Oct  9 10:10:50 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 129 pg[10.1e( v 40'1059 (0'0,40'1059] local-lis/les=128/129 n=5 ec=53/34 lis/c=128/70 les/c/f=129/71/0 sis=128) [2]/[0] async=[2] r=0 lpr=128 pi=[70,128)/1 crt=40'1059 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=5}}] enter Started/Primary/Active/Recovering
Oct  9 10:10:50 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 129 pg[10.1e( v 40'1059 (0'0,40'1059] local-lis/les=128/129 n=5 ec=53/34 lis/c=128/70 les/c/f=129/71/0 sis=128) [2]/[0] async=[2] r=0 lpr=128 pi=[70,128)/1 crt=40'1059 mlcod 40'1059 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovering 0.035394 2 0.000101
Oct  9 10:10:50 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 129 pg[10.1e( v 40'1059 (0'0,40'1059] local-lis/les=128/129 n=5 ec=53/34 lis/c=128/70 les/c/f=129/71/0 sis=128) [2]/[0] async=[2] r=0 lpr=128 pi=[70,128)/1 crt=40'1059 mlcod 40'1059 active+remapped mbc={255={}}] enter Started/Primary/Active/Recovered
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81444864 unmapped: 4497408 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: osd.0 129 handle_osd_map epochs [129,130], i have 129, src has [1,130]
Oct  9 10:10:50 compute-1 ceph-osd[7514]: osd.0 129 handle_osd_map epochs [130,130], i have 130, src has [1,130]
Oct  9 10:10:50 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 130 pg[10.1f( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=97/97 les/c/f=98/98/0 sis=129) [0] r=0 lpr=129 pi=[97,129)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary/WaitActingChange 0.100369 2 0.000045
Oct  9 10:10:50 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 130 pg[10.1f( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=97/97 les/c/f=98/98/0 sis=129) [0] r=0 lpr=129 pi=[97,129)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary 0.100600 0 0.000000
Oct  9 10:10:50 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 130 pg[10.1e( v 40'1059 (0'0,40'1059] local-lis/les=128/129 n=5 ec=53/34 lis/c=128/70 les/c/f=129/71/0 sis=128) [2]/[0] async=[2] r=0 lpr=128 pi=[70,128)/1 crt=40'1059 mlcod 40'1059 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovered 0.064024 1 0.000046
Oct  9 10:10:50 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 130 pg[10.1e( v 40'1059 (0'0,40'1059] local-lis/les=128/129 n=5 ec=53/34 lis/c=128/70 les/c/f=129/71/0 sis=128) [2]/[0] async=[2] r=0 lpr=128 pi=[70,128)/1 crt=40'1059 mlcod 40'1059 active+remapped mbc={255={}}] exit Started/Primary/Active 1.017850 0 0.000000
Oct  9 10:10:50 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 130 pg[10.1e( v 40'1059 (0'0,40'1059] local-lis/les=128/129 n=5 ec=53/34 lis/c=128/70 les/c/f=129/71/0 sis=128) [2]/[0] async=[2] r=0 lpr=128 pi=[70,128)/1 crt=40'1059 mlcod 40'1059 active+remapped mbc={255={}}] exit Started/Primary 2.020940 0 0.000000
Oct  9 10:10:50 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 130 pg[10.1e( v 40'1059 (0'0,40'1059] local-lis/les=128/129 n=5 ec=53/34 lis/c=128/70 les/c/f=129/71/0 sis=128) [2]/[0] async=[2] r=0 lpr=128 pi=[70,128)/1 crt=40'1059 mlcod 40'1059 active+remapped mbc={255={}}] exit Started 2.020967 0 0.000000
Oct  9 10:10:50 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 130 pg[10.1e( v 40'1059 (0'0,40'1059] local-lis/les=128/129 n=5 ec=53/34 lis/c=128/70 les/c/f=129/71/0 sis=128) [2]/[0] async=[2] r=0 lpr=128 pi=[70,128)/1 crt=40'1059 mlcod 40'1059 active+remapped mbc={255={}}] enter Reset
Oct  9 10:10:50 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 130 pg[10.1f( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=97/97 les/c/f=98/98/0 sis=129) [0] r=0 lpr=129 pi=[97,129)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started 0.100759 0 0.000000
Oct  9 10:10:50 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 130 pg[10.1f( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=97/97 les/c/f=98/98/0 sis=129) [0] r=0 lpr=129 pi=[97,129)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Oct  9 10:10:50 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 130 pg[10.1e( v 40'1059 (0'0,40'1059] local-lis/les=128/129 n=5 ec=53/34 lis/c=128/70 les/c/f=129/71/0 sis=130 pruub=15.899759293s) [2] async=[2] r=-1 lpr=130 pi=[70,130)/1 crt=40'1059 mlcod 40'1059 active pruub 308.632507324s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 10:10:50 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 130 pg[10.1f( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=97/97 les/c/f=98/98/0 sis=130) [0]/[2] r=-1 lpr=130 pi=[97,130)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 10:10:50 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 130 pg[10.1f( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=97/97 les/c/f=98/98/0 sis=130) [0]/[2] r=-1 lpr=130 pi=[97,130)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] exit Reset 0.000458 1 0.000730
Oct  9 10:10:50 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 130 pg[10.1f( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=97/97 les/c/f=98/98/0 sis=130) [0]/[2] r=-1 lpr=130 pi=[97,130)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Started
Oct  9 10:10:50 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 130 pg[10.1f( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=97/97 les/c/f=98/98/0 sis=130) [0]/[2] r=-1 lpr=130 pi=[97,130)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Start
Oct  9 10:10:50 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 130 pg[10.1f( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=97/97 les/c/f=98/98/0 sis=130) [0]/[2] r=-1 lpr=130 pi=[97,130)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct  9 10:10:50 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 130 pg[10.1f( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=97/97 les/c/f=98/98/0 sis=130) [0]/[2] r=-1 lpr=130 pi=[97,130)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] exit Start 0.000102 0 0.000000
Oct  9 10:10:50 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 130 pg[10.1f( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=97/97 les/c/f=98/98/0 sis=130) [0]/[2] r=-1 lpr=130 pi=[97,130)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Started/Stray
Oct  9 10:10:50 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 130 pg[10.1e( v 40'1059 (0'0,40'1059] local-lis/les=128/129 n=5 ec=53/34 lis/c=128/70 les/c/f=129/71/0 sis=130 pruub=15.895454407s) [2] r=-1 lpr=130 pi=[70,130)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 308.632507324s@ mbc={}] exit Reset 0.004411 1 0.004537
Oct  9 10:10:50 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 130 pg[10.1e( v 40'1059 (0'0,40'1059] local-lis/les=128/129 n=5 ec=53/34 lis/c=128/70 les/c/f=129/71/0 sis=130 pruub=15.895454407s) [2] r=-1 lpr=130 pi=[70,130)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 308.632507324s@ mbc={}] enter Started
Oct  9 10:10:50 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 130 pg[10.1e( v 40'1059 (0'0,40'1059] local-lis/les=128/129 n=5 ec=53/34 lis/c=128/70 les/c/f=129/71/0 sis=130 pruub=15.895454407s) [2] r=-1 lpr=130 pi=[70,130)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 308.632507324s@ mbc={}] enter Start
Oct  9 10:10:50 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 130 pg[10.1e( v 40'1059 (0'0,40'1059] local-lis/les=128/129 n=5 ec=53/34 lis/c=128/70 les/c/f=129/71/0 sis=130 pruub=15.895454407s) [2] r=-1 lpr=130 pi=[70,130)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 308.632507324s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 10:10:50 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 130 pg[10.1e( v 40'1059 (0'0,40'1059] local-lis/les=128/129 n=5 ec=53/34 lis/c=128/70 les/c/f=129/71/0 sis=130 pruub=15.895454407s) [2] r=-1 lpr=130 pi=[70,130)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 308.632507324s@ mbc={}] exit Start 0.000009 0 0.000000
Oct  9 10:10:50 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 130 pg[10.1e( v 40'1059 (0'0,40'1059] local-lis/les=128/129 n=5 ec=53/34 lis/c=128/70 les/c/f=129/71/0 sis=130 pruub=15.895454407s) [2] r=-1 lpr=130 pi=[70,130)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 308.632507324s@ mbc={}] enter Started/Stray
Oct  9 10:10:50 compute-1 ceph-osd[7514]: osd.0 130 handle_osd_map epochs [130,130], i have 130, src has [1,130]
Oct  9 10:10:50 compute-1 ceph-osd[7514]: osd.0 130 heartbeat osd_stat(store_statfs(0x4fc607000/0x0/0x4ffc00000, data 0x15ac76/0x204000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct  9 10:10:50 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 10.a scrub starts
Oct  9 10:10:50 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 10.a scrub ok
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81453056 unmapped: 4489216 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: osd.0 130 heartbeat osd_stat(store_statfs(0x4fc603000/0x0/0x4ffc00000, data 0x15cc5f/0x207000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct  9 10:10:50 compute-1 ceph-osd[7514]: osd.0 130 handle_osd_map epochs [131,131], i have 130, src has [1,131]
Oct  9 10:10:50 compute-1 ceph-osd[7514]: osd.0 130 handle_osd_map epochs [131,131], i have 131, src has [1,131]
Oct  9 10:10:50 compute-1 ceph-osd[7514]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Oct  9 10:10:50 compute-1 ceph-osd[7514]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Oct  9 10:10:50 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 131 pg[10.1f( v 40'1059 lc 0'0 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=97/97 les/c/f=98/98/0 sis=130) [0]/[2] r=-1 lpr=130 pi=[97,130)/1 crt=40'1059 mlcod 0'0 remapped NOTIFY m=5 mbc={}] exit Started/Stray 1.207355 5 0.000520
Oct  9 10:10:50 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 131 pg[10.1f( v 40'1059 lc 0'0 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=97/97 les/c/f=98/98/0 sis=130) [0]/[2] r=-1 lpr=130 pi=[97,130)/1 crt=40'1059 mlcod 0'0 remapped NOTIFY m=5 mbc={}] enter Started/ReplicaActive
Oct  9 10:10:50 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 131 pg[10.1f( v 40'1059 lc 0'0 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=97/97 les/c/f=98/98/0 sis=130) [0]/[2] r=-1 lpr=130 pi=[97,130)/1 crt=40'1059 mlcod 0'0 remapped NOTIFY m=5 mbc={}] enter Started/ReplicaActive/RepNotRecovering
Oct  9 10:10:50 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 131 pg[10.1e( v 40'1059 (0'0,40'1059] local-lis/les=128/129 n=5 ec=53/34 lis/c=128/70 les/c/f=129/71/0 sis=130) [2] r=-1 lpr=130 pi=[70,130)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.203806 6 0.000178
Oct  9 10:10:50 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 131 pg[10.1e( v 40'1059 (0'0,40'1059] local-lis/les=128/129 n=5 ec=53/34 lis/c=128/70 les/c/f=129/71/0 sis=130) [2] r=-1 lpr=130 pi=[70,130)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Oct  9 10:10:50 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 131 pg[10.1e( v 40'1059 (0'0,40'1059] local-lis/les=128/129 n=5 ec=53/34 lis/c=128/70 les/c/f=129/71/0 sis=130) [2] r=-1 lpr=130 pi=[70,130)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Oct  9 10:10:50 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 131 pg[10.1e( v 40'1059 (0'0,40'1059] local-lis/les=128/129 n=5 ec=53/34 lis/c=128/70 les/c/f=129/71/0 sis=130) [2] r=-1 lpr=130 pi=[70,130)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.001910 2 0.000149
Oct  9 10:10:50 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 131 pg[10.1e( v 40'1059 (0'0,40'1059] local-lis/les=128/129 n=5 ec=53/34 lis/c=128/70 les/c/f=129/71/0 sis=130) [2] r=-1 lpr=130 pi=[70,130)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Oct  9 10:10:50 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 131 pg[10.1f( v 40'1059 lc 40'495 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=130/97 les/c/f=131/98/0 sis=130) [0]/[2] r=-1 lpr=130 pi=[97,130)/1 luod=0'0 crt=40'1059 lcod 0'0 mlcod 0'0 active+remapped m=5 mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.003054 4 0.000130
Oct  9 10:10:50 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 131 pg[10.1f( v 40'1059 lc 40'495 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=130/97 les/c/f=131/98/0 sis=130) [0]/[2] r=-1 lpr=130 pi=[97,130)/1 luod=0'0 crt=40'1059 lcod 0'0 mlcod 0'0 active+remapped m=5 mbc={}] enter Started/ReplicaActive/RepWaitRecoveryReserved
Oct  9 10:10:50 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 131 pg[10.1f( v 40'1059 lc 40'495 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=130/97 les/c/f=131/98/0 sis=130) [0]/[2] r=-1 lpr=130 pi=[97,130)/1 luod=0'0 crt=40'1059 lcod 0'0 mlcod 0'0 active+remapped m=5 mbc={}] exit Started/ReplicaActive/RepWaitRecoveryReserved 0.000064 1 0.000036
Oct  9 10:10:50 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 131 pg[10.1f( v 40'1059 lc 40'495 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=130/97 les/c/f=131/98/0 sis=130) [0]/[2] r=-1 lpr=130 pi=[97,130)/1 luod=0'0 crt=40'1059 lcod 0'0 mlcod 0'0 active+remapped m=5 mbc={}] enter Started/ReplicaActive/RepRecovering
Oct  9 10:10:50 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 131 pg[10.1e( v 40'1059 (0'0,40'1059] lb MIN local-lis/les=128/129 n=5 ec=53/34 lis/c=128/70 les/c/f=129/71/0 sis=130) [2] r=-1 lpr=130 DELETING pi=[70,130)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.042146 2 0.000241
Oct  9 10:10:50 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 131 pg[10.1e( v 40'1059 (0'0,40'1059] lb MIN local-lis/les=128/129 n=5 ec=53/34 lis/c=128/70 les/c/f=129/71/0 sis=130) [2] r=-1 lpr=130 pi=[70,130)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.044125 0 0.000000
Oct  9 10:10:50 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 131 pg[10.1e( v 40'1059 (0'0,40'1059] lb MIN local-lis/les=128/129 n=5 ec=53/34 lis/c=128/70 les/c/f=129/71/0 sis=130) [2] r=-1 lpr=130 pi=[70,130)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.248037 0 0.000000
Oct  9 10:10:50 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 131 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=130/97 les/c/f=131/98/0 sis=130) [0]/[2] r=-1 lpr=130 pi=[97,130)/1 luod=0'0 crt=40'1059 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive/RepRecovering 0.070374 1 0.000064
Oct  9 10:10:50 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 131 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=130/97 les/c/f=131/98/0 sis=130) [0]/[2] r=-1 lpr=130 pi=[97,130)/1 luod=0'0 crt=40'1059 mlcod 0'0 active+remapped mbc={}] enter Started/ReplicaActive/RepNotRecovering
Oct  9 10:10:50 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 10.0 scrub starts
Oct  9 10:10:50 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 10.0 scrub ok
Oct  9 10:10:50 compute-1 ceph-osd[7514]: osd.0 131 handle_osd_map epochs [131,132], i have 131, src has [1,132]
Oct  9 10:10:50 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 132 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=130/97 les/c/f=131/98/0 sis=130) [0]/[2] r=-1 lpr=130 pi=[97,130)/1 luod=0'0 crt=40'1059 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.466371 1 0.000036
Oct  9 10:10:50 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 132 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=130/97 les/c/f=131/98/0 sis=130) [0]/[2] r=-1 lpr=130 pi=[97,130)/1 luod=0'0 crt=40'1059 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive 0.539968 0 0.000000
Oct  9 10:10:50 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 132 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=130/97 les/c/f=131/98/0 sis=130) [0]/[2] r=-1 lpr=130 pi=[97,130)/1 luod=0'0 crt=40'1059 mlcod 0'0 active+remapped mbc={}] exit Started 1.747725 0 0.000000
Oct  9 10:10:50 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 132 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=130/97 les/c/f=131/98/0 sis=130) [0]/[2] r=-1 lpr=130 pi=[97,130)/1 luod=0'0 crt=40'1059 mlcod 0'0 active+remapped mbc={}] enter Reset
Oct  9 10:10:50 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 132 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=130/97 les/c/f=131/98/0 sis=132) [0] r=0 lpr=132 pi=[97,132)/1 luod=0'0 crt=40'1059 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 10:10:50 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 132 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=130/97 les/c/f=131/98/0 sis=132) [0] r=0 lpr=132 pi=[97,132)/1 crt=40'1059 mlcod 0'0 unknown mbc={}] exit Reset 0.000078 1 0.000113
Oct  9 10:10:50 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 132 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=130/97 les/c/f=131/98/0 sis=132) [0] r=0 lpr=132 pi=[97,132)/1 crt=40'1059 mlcod 0'0 unknown mbc={}] enter Started
Oct  9 10:10:50 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 132 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=130/97 les/c/f=131/98/0 sis=132) [0] r=0 lpr=132 pi=[97,132)/1 crt=40'1059 mlcod 0'0 unknown mbc={}] enter Start
Oct  9 10:10:50 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 132 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=130/97 les/c/f=131/98/0 sis=132) [0] r=0 lpr=132 pi=[97,132)/1 crt=40'1059 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 10:10:50 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 132 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=130/97 les/c/f=131/98/0 sis=132) [0] r=0 lpr=132 pi=[97,132)/1 crt=40'1059 mlcod 0'0 unknown mbc={}] exit Start 0.000005 0 0.000000
Oct  9 10:10:50 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 132 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=130/97 les/c/f=131/98/0 sis=132) [0] r=0 lpr=132 pi=[97,132)/1 crt=40'1059 mlcod 0'0 unknown mbc={}] enter Started/Primary
Oct  9 10:10:50 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 132 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=130/97 les/c/f=131/98/0 sis=132) [0] r=0 lpr=132 pi=[97,132)/1 crt=40'1059 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Oct  9 10:10:50 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 132 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=130/97 les/c/f=131/98/0 sis=132) [0] r=0 lpr=132 pi=[97,132)/1 crt=40'1059 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Oct  9 10:10:50 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 132 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=130/97 les/c/f=131/98/0 sis=132) [0] r=0 lpr=132 pi=[97,132)/1 crt=40'1059 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000469 2 0.000032
Oct  9 10:10:50 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 132 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=130/97 les/c/f=131/98/0 sis=132) [0] r=0 lpr=132 pi=[97,132)/1 crt=40'1059 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Oct  9 10:10:50 compute-1 ceph-osd[7514]: osd.0 132 handle_osd_map epochs [132,132], i have 132, src has [1,132]
Oct  9 10:10:50 compute-1 ceph-osd[7514]: merge_log_dups log.dups.size()=0olog.dups.size()=32
Oct  9 10:10:50 compute-1 ceph-osd[7514]: end of merge_log_dups changed=1 log.dups.size()=0 olog.dups.size()=32
Oct  9 10:10:50 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 132 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=130/131 n=5 ec=53/34 lis/c=130/97 les/c/f=131/98/0 sis=132) [0] r=0 lpr=132 pi=[97,132)/1 crt=40'1059 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.001068 2 0.000055
Oct  9 10:10:50 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 132 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=130/131 n=5 ec=53/34 lis/c=130/97 les/c/f=131/98/0 sis=132) [0] r=0 lpr=132 pi=[97,132)/1 crt=40'1059 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Oct  9 10:10:50 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 132 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=130/131 n=5 ec=53/34 lis/c=130/97 les/c/f=131/98/0 sis=132) [0] r=0 lpr=132 pi=[97,132)/1 crt=40'1059 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000015 0 0.000000
Oct  9 10:10:50 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 132 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=130/131 n=5 ec=53/34 lis/c=130/97 les/c/f=131/98/0 sis=132) [0] r=0 lpr=132 pi=[97,132)/1 crt=40'1059 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81502208 unmapped: 4440064 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: osd.0 132 ms_handle_reset con 0x560c9c8a1800 session 0x560c9d630d20
Oct  9 10:10:50 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 10.d scrub starts
Oct  9 10:10:50 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 10.d scrub ok
Oct  9 10:10:50 compute-1 ceph-osd[7514]: osd.0 132 handle_osd_map epochs [133,133], i have 132, src has [1,133]
Oct  9 10:10:50 compute-1 ceph-osd[7514]: osd.0 132 handle_osd_map epochs [132,133], i have 133, src has [1,133]
Oct  9 10:10:50 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 133 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=130/131 n=5 ec=53/34 lis/c=130/97 les/c/f=131/98/0 sis=132) [0] r=0 lpr=132 pi=[97,132)/1 crt=40'1059 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.002487 2 0.000117
Oct  9 10:10:50 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 133 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=130/131 n=5 ec=53/34 lis/c=130/97 les/c/f=131/98/0 sis=132) [0] r=0 lpr=132 pi=[97,132)/1 crt=40'1059 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.004414 0 0.000000
Oct  9 10:10:50 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 133 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=130/131 n=5 ec=53/34 lis/c=130/97 les/c/f=131/98/0 sis=132) [0] r=0 lpr=132 pi=[97,132)/1 crt=40'1059 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Oct  9 10:10:50 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 133 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=132/133 n=5 ec=53/34 lis/c=130/97 les/c/f=131/98/0 sis=132) [0] r=0 lpr=132 pi=[97,132)/1 crt=40'1059 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Oct  9 10:10:50 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 133 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=132/133 n=5 ec=53/34 lis/c=130/97 les/c/f=131/98/0 sis=132) [0] r=0 lpr=132 pi=[97,132)/1 crt=40'1059 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 10:10:50 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 133 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=132/133 n=5 ec=53/34 lis/c=132/97 les/c/f=133/98/0 sis=132) [0] r=0 lpr=132 pi=[97,132)/1 crt=40'1059 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.002016 4 0.001053
Oct  9 10:10:50 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 133 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=132/133 n=5 ec=53/34 lis/c=132/97 les/c/f=133/98/0 sis=132) [0] r=0 lpr=132 pi=[97,132)/1 crt=40'1059 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Oct  9 10:10:50 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 133 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=132/133 n=5 ec=53/34 lis/c=132/97 les/c/f=133/98/0 sis=132) [0] r=0 lpr=132 pi=[97,132)/1 crt=40'1059 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000011 0 0.000000
Oct  9 10:10:50 compute-1 ceph-osd[7514]: osd.0 pg_epoch: 133 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=132/133 n=5 ec=53/34 lis/c=132/97 les/c/f=133/98/0 sis=132) [0] r=0 lpr=132 pi=[97,132)/1 crt=40'1059 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Oct  9 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81518592 unmapped: 4423680 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 952396 data_alloc: 218103808 data_used: 282624
Oct  9 10:10:50 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 10.b scrub starts
Oct  9 10:10:50 compute-1 ceph-osd[7514]: log_channel(cluster) log [DBG] : 10.b scrub ok
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81518592 unmapped: 4423680 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81518592 unmapped: 4423680 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fa000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81526784 unmapped: 4415488 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81534976 unmapped: 4407296 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81543168 unmapped: 4399104 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 953544 data_alloc: 218103808 data_used: 282624
Oct  9 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fa000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81543168 unmapped: 4399104 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81551360 unmapped: 4390912 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81551360 unmapped: 4390912 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81551360 unmapped: 4390912 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 13.116673470s of 13.153404236s, submitted: 45
Oct  9 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fa000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct  9 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81559552 unmapped: 4382720 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 953676 data_alloc: 218103808 data_used: 282624
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81559552 unmapped: 4382720 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81567744 unmapped: 4374528 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81469440 unmapped: 4472832 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fa000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81469440 unmapped: 4472832 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81477632 unmapped: 4464640 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 954348 data_alloc: 218103808 data_used: 282624
Oct  9 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 ms_handle_reset con 0x560c9c8a1000 session 0x560c9d2dc5a0
Oct  9 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 ms_handle_reset con 0x560c9b7d1800 session 0x560c9d8512c0
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81477632 unmapped: 4464640 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 ms_handle_reset con 0x560c9cbe2c00 session 0x560c9c5994a0
Oct  9 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 ms_handle_reset con 0x560c9cf88000 session 0x560c9d20c960
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81485824 unmapped: 4456448 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81485824 unmapped: 4456448 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81494016 unmapped: 4448256 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81494016 unmapped: 4448256 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 954348 data_alloc: 218103808 data_used: 282624
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81502208 unmapped: 4440064 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81502208 unmapped: 4440064 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81502208 unmapped: 4440064 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81518592 unmapped: 4423680 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 14.990660667s of 14.992744446s, submitted: 2
Oct  9 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct  9 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81526784 unmapped: 4415488 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 954216 data_alloc: 218103808 data_used: 282624
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81534976 unmapped: 4407296 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81534976 unmapped: 4407296 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81543168 unmapped: 4399104 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81543168 unmapped: 4399104 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct  9 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81543168 unmapped: 4399104 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 954480 data_alloc: 218103808 data_used: 282624
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81551360 unmapped: 4390912 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81551360 unmapped: 4390912 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81559552 unmapped: 4382720 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81575936 unmapped: 4366336 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81575936 unmapped: 4366336 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 955992 data_alloc: 218103808 data_used: 282624
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81584128 unmapped: 4358144 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81584128 unmapped: 4358144 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 ms_handle_reset con 0x560c9a066000 session 0x560c9d208d20
Oct  9 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81592320 unmapped: 4349952 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 13.551061630s of 13.556247711s, submitted: 4
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81592320 unmapped: 4349952 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81600512 unmapped: 4341760 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 955401 data_alloc: 218103808 data_used: 282624
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81600512 unmapped: 4341760 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81600512 unmapped: 4341760 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81608704 unmapped: 4333568 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81608704 unmapped: 4333568 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81616896 unmapped: 4325376 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 955137 data_alloc: 218103808 data_used: 282624
Oct  9 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81616896 unmapped: 4325376 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81616896 unmapped: 4325376 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81625088 unmapped: 4317184 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81633280 unmapped: 4308992 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81641472 unmapped: 4300800 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 955137 data_alloc: 218103808 data_used: 282624
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81641472 unmapped: 4300800 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81641472 unmapped: 4300800 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81649664 unmapped: 4292608 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81649664 unmapped: 4292608 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81657856 unmapped: 4284416 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 955137 data_alloc: 218103808 data_used: 282624
Oct  9 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81657856 unmapped: 4284416 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81657856 unmapped: 4284416 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81666048 unmapped: 4276224 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81666048 unmapped: 4276224 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct  9 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81674240 unmapped: 4268032 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 955137 data_alloc: 218103808 data_used: 282624
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81674240 unmapped: 4268032 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81682432 unmapped: 4259840 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81682432 unmapped: 4259840 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81682432 unmapped: 4259840 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct  9 10:10:50 compute-1 ceph-osd[7514]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 26.443452835s of 26.446563721s, submitted: 3
Oct  9 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81698816 unmapped: 4243456 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 955005 data_alloc: 218103808 data_used: 282624
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81698816 unmapped: 4243456 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81698816 unmapped: 4243456 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81707008 unmapped: 4235264 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81707008 unmapped: 4235264 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct  9 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81715200 unmapped: 4227072 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 955005 data_alloc: 218103808 data_used: 282624
Oct  9 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81715200 unmapped: 4227072 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81723392 unmapped: 4218880 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81723392 unmapped: 4218880 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81723392 unmapped: 4218880 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81731584 unmapped: 4210688 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 955005 data_alloc: 218103808 data_used: 282624
Oct  9 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81731584 unmapped: 4210688 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81739776 unmapped: 4202496 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81739776 unmapped: 4202496 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81739776 unmapped: 4202496 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct  9 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81731584 unmapped: 4210688 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 955005 data_alloc: 218103808 data_used: 282624
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81731584 unmapped: 4210688 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81731584 unmapped: 4210688 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81739776 unmapped: 4202496 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81739776 unmapped: 4202496 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct  9 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81747968 unmapped: 4194304 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 955005 data_alloc: 218103808 data_used: 282624
Oct  9 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81747968 unmapped: 4194304 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81747968 unmapped: 4194304 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81756160 unmapped: 4186112 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81756160 unmapped: 4186112 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81764352 unmapped: 4177920 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 955005 data_alloc: 218103808 data_used: 282624
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81764352 unmapped: 4177920 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81764352 unmapped: 4177920 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81772544 unmapped: 4169728 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81772544 unmapped: 4169728 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81780736 unmapped: 4161536 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 955005 data_alloc: 218103808 data_used: 282624
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81780736 unmapped: 4161536 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81788928 unmapped: 4153344 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81788928 unmapped: 4153344 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81788928 unmapped: 4153344 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81797120 unmapped: 4145152 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 955005 data_alloc: 218103808 data_used: 282624
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81797120 unmapped: 4145152 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81797120 unmapped: 4145152 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81805312 unmapped: 4136960 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81805312 unmapped: 4136960 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 ms_handle_reset con 0x560c9dab7400 session 0x560c9d20f860
Oct  9 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 ms_handle_reset con 0x560c9c8a0800 session 0x560c9cf7a960
Oct  9 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81813504 unmapped: 4128768 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 955005 data_alloc: 218103808 data_used: 282624
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81813504 unmapped: 4128768 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81813504 unmapped: 4128768 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81821696 unmapped: 4120576 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81821696 unmapped: 4120576 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81829888 unmapped: 4112384 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 955005 data_alloc: 218103808 data_used: 282624
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81829888 unmapped: 4112384 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81838080 unmapped: 4104192 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81838080 unmapped: 4104192 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81838080 unmapped: 4104192 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81846272 unmapped: 4096000 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 955005 data_alloc: 218103808 data_used: 282624
Oct  9 10:10:50 compute-1 ceph-osd[7514]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 50.453056335s of 50.454822540s, submitted: 1
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81846272 unmapped: 4096000 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81854464 unmapped: 4087808 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81854464 unmapped: 4087808 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81862656 unmapped: 4079616 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81862656 unmapped: 4079616 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 955137 data_alloc: 218103808 data_used: 282624
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81862656 unmapped: 4079616 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81870848 unmapped: 4071424 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81870848 unmapped: 4071424 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81879040 unmapped: 4063232 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81879040 unmapped: 4063232 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956649 data_alloc: 218103808 data_used: 282624
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81887232 unmapped: 4055040 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81887232 unmapped: 4055040 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81887232 unmapped: 4055040 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81895424 unmapped: 4046848 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct  9 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81895424 unmapped: 4046848 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956649 data_alloc: 218103808 data_used: 282624
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81903616 unmapped: 4038656 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81903616 unmapped: 4038656 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 17.000545502s of 17.003219604s, submitted: 2
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81911808 unmapped: 4030464 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81911808 unmapped: 4030464 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81911808 unmapped: 4030464 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct  9 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81920000 unmapped: 4022272 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81920000 unmapped: 4022272 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81920000 unmapped: 4022272 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81928192 unmapped: 4014080 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct  9 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct  9 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81928192 unmapped: 4014080 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81936384 unmapped: 4005888 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81936384 unmapped: 4005888 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81944576 unmapped: 3997696 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81944576 unmapped: 3997696 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81952768 unmapped: 3989504 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct  9 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81952768 unmapped: 3989504 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81960960 unmapped: 3981312 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81977344 unmapped: 3964928 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81977344 unmapped: 3964928 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct  9 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81985536 unmapped: 3956736 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81985536 unmapped: 3956736 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81985536 unmapped: 3956736 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81993728 unmapped: 3948544 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 81993728 unmapped: 3948544 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct  9 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82001920 unmapped: 3940352 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct  9 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 ms_handle_reset con 0x560c9c8a3000 session 0x560c9d2dd0e0
Oct  9 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 ms_handle_reset con 0x560c9c8a0000 session 0x560c9d20fa40
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82001920 unmapped: 3940352 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82010112 unmapped: 3932160 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82010112 unmapped: 3932160 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82010112 unmapped: 3932160 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct  9 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82010112 unmapped: 3932160 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct  9 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82010112 unmapped: 3932160 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82010112 unmapped: 3932160 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82018304 unmapped: 3923968 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82018304 unmapped: 3923968 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct  9 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82026496 unmapped: 3915776 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82026496 unmapped: 3915776 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 34.123451233s of 34.124847412s, submitted: 1
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 3907584 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 3907584 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82042880 unmapped: 3899392 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct  9 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82042880 unmapped: 3899392 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956649 data_alloc: 218103808 data_used: 282624
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82042880 unmapped: 3899392 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82051072 unmapped: 3891200 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82059264 unmapped: 3883008 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82059264 unmapped: 3883008 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82067456 unmapped: 3874816 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956649 data_alloc: 218103808 data_used: 282624
Oct  9 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82067456 unmapped: 3874816 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82075648 unmapped: 3866624 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82075648 unmapped: 3866624 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82075648 unmapped: 3866624 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct  9 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82083840 unmapped: 3858432 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956649 data_alloc: 218103808 data_used: 282624
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82083840 unmapped: 3858432 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 14.923893929s of 14.924749374s, submitted: 1
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82092032 unmapped: 3850240 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82092032 unmapped: 3850240 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82092032 unmapped: 3850240 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82100224 unmapped: 3842048 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct  9 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82100224 unmapped: 3842048 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82108416 unmapped: 3833856 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82108416 unmapped: 3833856 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82116608 unmapped: 3825664 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct  9 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82116608 unmapped: 3825664 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82116608 unmapped: 3825664 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82124800 unmapped: 3817472 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82132992 unmapped: 3809280 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82141184 unmapped: 3801088 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct  9 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82141184 unmapped: 3801088 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82141184 unmapped: 3801088 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82149376 unmapped: 3792896 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82149376 unmapped: 3792896 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82157568 unmapped: 3784704 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct  9 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82157568 unmapped: 3784704 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct  9 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82165760 unmapped: 3776512 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82165760 unmapped: 3776512 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82165760 unmapped: 3776512 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82173952 unmapped: 3768320 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82173952 unmapped: 3768320 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct  9 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82182144 unmapped: 3760128 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82182144 unmapped: 3760128 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82182144 unmapped: 3760128 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82190336 unmapped: 3751936 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82190336 unmapped: 3751936 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82198528 unmapped: 3743744 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82198528 unmapped: 3743744 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82206720 unmapped: 3735552 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82206720 unmapped: 3735552 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82206720 unmapped: 3735552 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82214912 unmapped: 3727360 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82214912 unmapped: 3727360 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82231296 unmapped: 3710976 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82231296 unmapped: 3710976 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82231296 unmapped: 3710976 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82239488 unmapped: 3702784 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82239488 unmapped: 3702784 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82247680 unmapped: 3694592 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82247680 unmapped: 3694592 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82255872 unmapped: 3686400 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct  9 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82255872 unmapped: 3686400 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82264064 unmapped: 3678208 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82264064 unmapped: 3678208 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82264064 unmapped: 3678208 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82272256 unmapped: 3670016 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct  9 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82272256 unmapped: 3670016 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82272256 unmapped: 3670016 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82280448 unmapped: 3661824 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82280448 unmapped: 3661824 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82280448 unmapped: 3661824 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82288640 unmapped: 3653632 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82288640 unmapped: 3653632 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82305024 unmapped: 3637248 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82305024 unmapped: 3637248 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct  9 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82313216 unmapped: 3629056 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82313216 unmapped: 3629056 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82321408 unmapped: 3620864 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82321408 unmapped: 3620864 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82321408 unmapped: 3620864 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct  9 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82329600 unmapped: 3612672 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82329600 unmapped: 3612672 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82337792 unmapped: 3604480 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82337792 unmapped: 3604480 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82345984 unmapped: 3596288 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82345984 unmapped: 3596288 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct  9 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82345984 unmapped: 3596288 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82354176 unmapped: 3588096 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82362368 unmapped: 3579904 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82362368 unmapped: 3579904 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82362368 unmapped: 3579904 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct  9 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82362368 unmapped: 3579904 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82370560 unmapped: 3571712 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82370560 unmapped: 3571712 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82378752 unmapped: 3563520 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82378752 unmapped: 3563520 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct  9 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82386944 unmapped: 3555328 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82386944 unmapped: 3555328 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82403328 unmapped: 3538944 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82403328 unmapped: 3538944 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82403328 unmapped: 3538944 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82411520 unmapped: 3530752 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82411520 unmapped: 3530752 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82419712 unmapped: 3522560 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82419712 unmapped: 3522560 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82427904 unmapped: 3514368 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82427904 unmapped: 3514368 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82427904 unmapped: 3514368 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82436096 unmapped: 3506176 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82436096 unmapped: 3506176 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82444288 unmapped: 3497984 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82444288 unmapped: 3497984 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82444288 unmapped: 3497984 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82452480 unmapped: 3489792 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82452480 unmapped: 3489792 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82460672 unmapped: 3481600 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82460672 unmapped: 3481600 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82460672 unmapped: 3481600 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82477056 unmapped: 3465216 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82477056 unmapped: 3465216 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82485248 unmapped: 3457024 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82485248 unmapped: 3457024 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82485248 unmapped: 3457024 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82493440 unmapped: 3448832 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82493440 unmapped: 3448832 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82493440 unmapped: 3448832 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82501632 unmapped: 3440640 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82501632 unmapped: 3440640 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82509824 unmapped: 3432448 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82509824 unmapped: 3432448 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82518016 unmapped: 3424256 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82526208 unmapped: 3416064 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82526208 unmapped: 3416064 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82534400 unmapped: 3407872 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82534400 unmapped: 3407872 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82542592 unmapped: 3399680 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82542592 unmapped: 3399680 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82550784 unmapped: 3391488 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82550784 unmapped: 3391488 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct  9 10:10:50 compute-1 ceph-osd[7514]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 600.0 total, 600.0 interval#012Cumulative writes: 8413 writes, 33K keys, 8413 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.04 MB/s#012Cumulative WAL: 8413 writes, 1875 syncs, 4.49 writes per sync, written: 0.02 GB, 0.04 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 8413 writes, 33K keys, 8413 commit groups, 1.0 writes per commit group, ingest: 21.18 MB, 0.04 MB/s#012Interval WAL: 8413 writes, 1875 syncs, 4.49 writes per sync, written: 0.02 GB, 0.04 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0#012 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) 
Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 600.0 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x560c99273350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 2.3e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **#012#012** Compaction Stats [m-0] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) 
Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-0] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 600.0 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x560c99273350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 
collections: 2 last_copies: 8 last_secs: 2.3e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [m-0] **#012#012** Compaction Stats [m-1] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-1] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 600.0 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 
seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82616320 unmapped: 3325952 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82616320 unmapped: 3325952 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct  9 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82616320 unmapped: 3325952 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82624512 unmapped: 3317760 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82632704 unmapped: 3309568 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82640896 unmapped: 3301376 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct  9 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82640896 unmapped: 3301376 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82640896 unmapped: 3301376 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82649088 unmapped: 3293184 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82649088 unmapped: 3293184 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82657280 unmapped: 3284992 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82657280 unmapped: 3284992 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82657280 unmapped: 3284992 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82665472 unmapped: 3276800 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82665472 unmapped: 3276800 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82673664 unmapped: 3268608 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct  9 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82673664 unmapped: 3268608 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct  9 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82681856 unmapped: 3260416 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82681856 unmapped: 3260416 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82681856 unmapped: 3260416 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82690048 unmapped: 3252224 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct  9 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82690048 unmapped: 3252224 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82698240 unmapped: 3244032 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82698240 unmapped: 3244032 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82706432 unmapped: 3235840 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82714624 unmapped: 3227648 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct  9 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82714624 unmapped: 3227648 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82722816 unmapped: 3219456 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82722816 unmapped: 3219456 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82722816 unmapped: 3219456 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82731008 unmapped: 3211264 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82731008 unmapped: 3211264 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82731008 unmapped: 3211264 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82739200 unmapped: 3203072 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82739200 unmapped: 3203072 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82747392 unmapped: 3194880 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82747392 unmapped: 3194880 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82755584 unmapped: 3186688 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82755584 unmapped: 3186688 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82755584 unmapped: 3186688 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82763776 unmapped: 3178496 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82763776 unmapped: 3178496 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82771968 unmapped: 3170304 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82771968 unmapped: 3170304 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82780160 unmapped: 3162112 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82780160 unmapped: 3162112 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82780160 unmapped: 3162112 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82788352 unmapped: 3153920 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82788352 unmapped: 3153920 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82804736 unmapped: 3137536 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82804736 unmapped: 3137536 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct  9 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82804736 unmapped: 3137536 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82812928 unmapped: 3129344 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82812928 unmapped: 3129344 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82821120 unmapped: 3121152 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82821120 unmapped: 3121152 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct  9 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82821120 unmapped: 3121152 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82829312 unmapped: 3112960 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82829312 unmapped: 3112960 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82829312 unmapped: 3112960 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82837504 unmapped: 3104768 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82837504 unmapped: 3104768 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct  9 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82845696 unmapped: 3096576 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82845696 unmapped: 3096576 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82853888 unmapped: 3088384 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82853888 unmapped: 3088384 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82853888 unmapped: 3088384 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82862080 unmapped: 3080192 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82862080 unmapped: 3080192 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82878464 unmapped: 3063808 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82878464 unmapped: 3063808 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct  9 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82886656 unmapped: 3055616 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82886656 unmapped: 3055616 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82886656 unmapped: 3055616 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82894848 unmapped: 3047424 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82894848 unmapped: 3047424 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct  9 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82903040 unmapped: 3039232 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82903040 unmapped: 3039232 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82903040 unmapped: 3039232 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82911232 unmapped: 3031040 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82911232 unmapped: 3031040 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct  9 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82919424 unmapped: 3022848 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82919424 unmapped: 3022848 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 82927616 unmapped: 3014656 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 206.934524536s of 206.935745239s, submitted: 1
Oct  9 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [1])
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83247104 unmapped: 2695168 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83361792 unmapped: 2580480 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83361792 unmapped: 2580480 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83361792 unmapped: 2580480 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83361792 unmapped: 2580480 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83361792 unmapped: 2580480 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83361792 unmapped: 2580480 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct  9 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83361792 unmapped: 2580480 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct  9 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83361792 unmapped: 2580480 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83361792 unmapped: 2580480 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83361792 unmapped: 2580480 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83361792 unmapped: 2580480 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct  9 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83361792 unmapped: 2580480 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83361792 unmapped: 2580480 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83361792 unmapped: 2580480 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83361792 unmapped: 2580480 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83361792 unmapped: 2580480 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83361792 unmapped: 2580480 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct  9 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83361792 unmapped: 2580480 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83361792 unmapped: 2580480 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83361792 unmapped: 2580480 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83361792 unmapped: 2580480 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:10:50 compute-1 ceph-mon[9795]: mon.compute-1@2(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83361792 unmapped: 2580480 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct  9 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83361792 unmapped: 2580480 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83361792 unmapped: 2580480 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83361792 unmapped: 2580480 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83361792 unmapped: 2580480 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct  9 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83361792 unmapped: 2580480 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83361792 unmapped: 2580480 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83361792 unmapped: 2580480 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83361792 unmapped: 2580480 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83361792 unmapped: 2580480 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83361792 unmapped: 2580480 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83361792 unmapped: 2580480 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83361792 unmapped: 2580480 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83361792 unmapped: 2580480 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83361792 unmapped: 2580480 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct  9 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83361792 unmapped: 2580480 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct  9 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83361792 unmapped: 2580480 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83361792 unmapped: 2580480 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83369984 unmapped: 2572288 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83369984 unmapped: 2572288 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct  9 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83369984 unmapped: 2572288 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83369984 unmapped: 2572288 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83369984 unmapped: 2572288 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83369984 unmapped: 2572288 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83369984 unmapped: 2572288 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83369984 unmapped: 2572288 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83369984 unmapped: 2572288 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83369984 unmapped: 2572288 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct  9 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83369984 unmapped: 2572288 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83369984 unmapped: 2572288 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83369984 unmapped: 2572288 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct  9 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83378176 unmapped: 2564096 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83378176 unmapped: 2564096 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83378176 unmapped: 2564096 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83378176 unmapped: 2564096 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct  9 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2555904 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2555904 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2555904 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2555904 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2555904 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2555904 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct  9 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2555904 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2555904 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2555904 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2555904 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2555904 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83394560 unmapped: 2547712 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83394560 unmapped: 2547712 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83394560 unmapped: 2547712 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83394560 unmapped: 2547712 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct  9 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct  9 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83394560 unmapped: 2547712 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83394560 unmapped: 2547712 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83394560 unmapped: 2547712 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83394560 unmapped: 2547712 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83394560 unmapped: 2547712 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct  9 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83394560 unmapped: 2547712 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83394560 unmapped: 2547712 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83394560 unmapped: 2547712 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83394560 unmapped: 2547712 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83394560 unmapped: 2547712 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct  9 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83394560 unmapped: 2547712 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct  9 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83402752 unmapped: 2539520 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83402752 unmapped: 2539520 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83402752 unmapped: 2539520 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83402752 unmapped: 2539520 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct  9 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83402752 unmapped: 2539520 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83410944 unmapped: 2531328 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83410944 unmapped: 2531328 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct  9 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83410944 unmapped: 2531328 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83410944 unmapped: 2531328 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct  9 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83410944 unmapped: 2531328 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83410944 unmapped: 2531328 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83410944 unmapped: 2531328 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83410944 unmapped: 2531328 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83410944 unmapped: 2531328 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct  9 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83410944 unmapped: 2531328 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83410944 unmapped: 2531328 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83410944 unmapped: 2531328 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83410944 unmapped: 2531328 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83410944 unmapped: 2531328 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83410944 unmapped: 2531328 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct  9 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83410944 unmapped: 2531328 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83410944 unmapped: 2531328 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83410944 unmapped: 2531328 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83410944 unmapped: 2531328 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct  9 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83410944 unmapped: 2531328 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct  9 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83410944 unmapped: 2531328 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83410944 unmapped: 2531328 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83410944 unmapped: 2531328 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83410944 unmapped: 2531328 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83410944 unmapped: 2531328 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct  9 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83410944 unmapped: 2531328 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83410944 unmapped: 2531328 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83410944 unmapped: 2531328 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83419136 unmapped: 2523136 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83419136 unmapped: 2523136 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83419136 unmapped: 2523136 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83419136 unmapped: 2523136 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83419136 unmapped: 2523136 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83419136 unmapped: 2523136 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83419136 unmapped: 2523136 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83419136 unmapped: 2523136 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83419136 unmapped: 2523136 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83419136 unmapped: 2523136 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83419136 unmapped: 2523136 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83419136 unmapped: 2523136 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83419136 unmapped: 2523136 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83419136 unmapped: 2523136 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83419136 unmapped: 2523136 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83419136 unmapped: 2523136 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83419136 unmapped: 2523136 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct  9 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83419136 unmapped: 2523136 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83419136 unmapped: 2523136 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83419136 unmapped: 2523136 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83419136 unmapped: 2523136 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83419136 unmapped: 2523136 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83419136 unmapped: 2523136 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83419136 unmapped: 2523136 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83419136 unmapped: 2523136 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83419136 unmapped: 2523136 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83419136 unmapped: 2523136 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83419136 unmapped: 2523136 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83419136 unmapped: 2523136 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83419136 unmapped: 2523136 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83427328 unmapped: 2514944 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83427328 unmapped: 2514944 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83427328 unmapped: 2514944 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83427328 unmapped: 2514944 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83427328 unmapped: 2514944 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83427328 unmapped: 2514944 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct  9 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83427328 unmapped: 2514944 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83427328 unmapped: 2514944 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83427328 unmapped: 2514944 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83427328 unmapped: 2514944 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83427328 unmapped: 2514944 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83427328 unmapped: 2514944 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83427328 unmapped: 2514944 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83427328 unmapped: 2514944 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83427328 unmapped: 2514944 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83427328 unmapped: 2514944 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct  9 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83427328 unmapped: 2514944 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct  9 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83427328 unmapped: 2514944 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83427328 unmapped: 2514944 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83427328 unmapped: 2514944 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83427328 unmapped: 2514944 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct  9 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83427328 unmapped: 2514944 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83427328 unmapped: 2514944 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83427328 unmapped: 2514944 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83427328 unmapped: 2514944 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83427328 unmapped: 2514944 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct  9 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83427328 unmapped: 2514944 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83427328 unmapped: 2514944 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83427328 unmapped: 2514944 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83427328 unmapped: 2514944 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83435520 unmapped: 2506752 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct  9 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83435520 unmapped: 2506752 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83435520 unmapped: 2506752 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83435520 unmapped: 2506752 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83435520 unmapped: 2506752 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83435520 unmapped: 2506752 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct  9 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83435520 unmapped: 2506752 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83435520 unmapped: 2506752 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83435520 unmapped: 2506752 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83435520 unmapped: 2506752 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83435520 unmapped: 2506752 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct  9 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83435520 unmapped: 2506752 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83435520 unmapped: 2506752 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83435520 unmapped: 2506752 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83435520 unmapped: 2506752 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83435520 unmapped: 2506752 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct  9 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83435520 unmapped: 2506752 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83435520 unmapped: 2506752 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83435520 unmapped: 2506752 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83435520 unmapped: 2506752 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83435520 unmapped: 2506752 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct  9 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83435520 unmapped: 2506752 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83435520 unmapped: 2506752 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83435520 unmapped: 2506752 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83435520 unmapped: 2506752 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83435520 unmapped: 2506752 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83435520 unmapped: 2506752 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct  9 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83435520 unmapped: 2506752 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83435520 unmapped: 2506752 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83435520 unmapped: 2506752 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83435520 unmapped: 2506752 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83435520 unmapped: 2506752 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83435520 unmapped: 2506752 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83435520 unmapped: 2506752 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83435520 unmapped: 2506752 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83443712 unmapped: 2498560 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct  9 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83443712 unmapped: 2498560 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83443712 unmapped: 2498560 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83443712 unmapped: 2498560 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83443712 unmapped: 2498560 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83443712 unmapped: 2498560 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83443712 unmapped: 2498560 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83443712 unmapped: 2498560 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83443712 unmapped: 2498560 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83443712 unmapped: 2498560 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83443712 unmapped: 2498560 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83443712 unmapped: 2498560 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct  9 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83443712 unmapped: 2498560 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83443712 unmapped: 2498560 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83443712 unmapped: 2498560 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83443712 unmapped: 2498560 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83443712 unmapped: 2498560 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct  9 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83443712 unmapped: 2498560 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83443712 unmapped: 2498560 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83443712 unmapped: 2498560 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83443712 unmapped: 2498560 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct  9 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83443712 unmapped: 2498560 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83443712 unmapped: 2498560 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83443712 unmapped: 2498560 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83443712 unmapped: 2498560 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83443712 unmapped: 2498560 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct  9 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83451904 unmapped: 2490368 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83451904 unmapped: 2490368 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83451904 unmapped: 2490368 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83451904 unmapped: 2490368 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83451904 unmapped: 2490368 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct  9 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83451904 unmapped: 2490368 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct  9 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83451904 unmapped: 2490368 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83451904 unmapped: 2490368 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83451904 unmapped: 2490368 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83451904 unmapped: 2490368 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83451904 unmapped: 2490368 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83451904 unmapped: 2490368 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83451904 unmapped: 2490368 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83451904 unmapped: 2490368 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83451904 unmapped: 2490368 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83451904 unmapped: 2490368 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83451904 unmapped: 2490368 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83451904 unmapped: 2490368 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83451904 unmapped: 2490368 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83460096 unmapped: 2482176 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83460096 unmapped: 2482176 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct  9 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83460096 unmapped: 2482176 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83460096 unmapped: 2482176 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83460096 unmapped: 2482176 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83460096 unmapped: 2482176 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83460096 unmapped: 2482176 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83460096 unmapped: 2482176 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83460096 unmapped: 2482176 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83460096 unmapped: 2482176 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83460096 unmapped: 2482176 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83460096 unmapped: 2482176 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83460096 unmapped: 2482176 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83460096 unmapped: 2482176 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct  9 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83460096 unmapped: 2482176 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83460096 unmapped: 2482176 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83460096 unmapped: 2482176 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83460096 unmapped: 2482176 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83460096 unmapped: 2482176 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83460096 unmapped: 2482176 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83460096 unmapped: 2482176 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83460096 unmapped: 2482176 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83460096 unmapped: 2482176 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83460096 unmapped: 2482176 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83460096 unmapped: 2482176 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83460096 unmapped: 2482176 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83460096 unmapped: 2482176 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83460096 unmapped: 2482176 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83460096 unmapped: 2482176 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83460096 unmapped: 2482176 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83460096 unmapped: 2482176 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83460096 unmapped: 2482176 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct  9 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83460096 unmapped: 2482176 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83460096 unmapped: 2482176 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83460096 unmapped: 2482176 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83460096 unmapped: 2482176 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83460096 unmapped: 2482176 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct  9 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83460096 unmapped: 2482176 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83460096 unmapped: 2482176 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83468288 unmapped: 2473984 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83468288 unmapped: 2473984 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83468288 unmapped: 2473984 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct  9 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83468288 unmapped: 2473984 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83468288 unmapped: 2473984 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83468288 unmapped: 2473984 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83468288 unmapped: 2473984 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [1])
Oct  9 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83468288 unmapped: 2473984 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83468288 unmapped: 2473984 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83468288 unmapped: 2473984 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83468288 unmapped: 2473984 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83468288 unmapped: 2473984 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83468288 unmapped: 2473984 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct  9 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83468288 unmapped: 2473984 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83468288 unmapped: 2473984 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83468288 unmapped: 2473984 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83468288 unmapped: 2473984 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct  9 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83468288 unmapped: 2473984 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83476480 unmapped: 2465792 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83476480 unmapped: 2465792 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83476480 unmapped: 2465792 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct  9 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83476480 unmapped: 2465792 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83476480 unmapped: 2465792 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83476480 unmapped: 2465792 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83476480 unmapped: 2465792 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83476480 unmapped: 2465792 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83476480 unmapped: 2465792 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83476480 unmapped: 2465792 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct  9 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83476480 unmapped: 2465792 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83476480 unmapped: 2465792 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83476480 unmapped: 2465792 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83476480 unmapped: 2465792 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83476480 unmapped: 2465792 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83476480 unmapped: 2465792 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83476480 unmapped: 2465792 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83476480 unmapped: 2465792 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83476480 unmapped: 2465792 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83476480 unmapped: 2465792 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83476480 unmapped: 2465792 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83476480 unmapped: 2465792 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 ms_handle_reset con 0x560c9ade0c00 session 0x560c9b978780
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83476480 unmapped: 2465792 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83476480 unmapped: 2465792 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83476480 unmapped: 2465792 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83476480 unmapped: 2465792 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83476480 unmapped: 2465792 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83484672 unmapped: 2457600 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83484672 unmapped: 2457600 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct  9 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83484672 unmapped: 2457600 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83484672 unmapped: 2457600 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct  9 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83484672 unmapped: 2457600 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83484672 unmapped: 2457600 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83484672 unmapped: 2457600 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct  9 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83484672 unmapped: 2457600 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct  9 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83484672 unmapped: 2457600 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83484672 unmapped: 2457600 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83484672 unmapped: 2457600 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83484672 unmapped: 2457600 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct  9 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83484672 unmapped: 2457600 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83484672 unmapped: 2457600 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83484672 unmapped: 2457600 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83484672 unmapped: 2457600 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83484672 unmapped: 2457600 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83484672 unmapped: 2457600 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83484672 unmapped: 2457600 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83484672 unmapped: 2457600 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83484672 unmapped: 2457600 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83484672 unmapped: 2457600 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83484672 unmapped: 2457600 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct  9 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83484672 unmapped: 2457600 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83484672 unmapped: 2457600 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83484672 unmapped: 2457600 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83484672 unmapped: 2457600 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83484672 unmapped: 2457600 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct  9 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83484672 unmapped: 2457600 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83484672 unmapped: 2457600 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83492864 unmapped: 2449408 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83492864 unmapped: 2449408 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct  9 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83492864 unmapped: 2449408 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83492864 unmapped: 2449408 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83492864 unmapped: 2449408 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83492864 unmapped: 2449408 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83492864 unmapped: 2449408 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83492864 unmapped: 2449408 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct  9 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83492864 unmapped: 2449408 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83492864 unmapped: 2449408 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83492864 unmapped: 2449408 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83492864 unmapped: 2449408 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83492864 unmapped: 2449408 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83492864 unmapped: 2449408 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83492864 unmapped: 2449408 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83492864 unmapped: 2449408 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83492864 unmapped: 2449408 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83492864 unmapped: 2449408 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83492864 unmapped: 2449408 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83492864 unmapped: 2449408 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83492864 unmapped: 2449408 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83492864 unmapped: 2449408 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83501056 unmapped: 2441216 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83501056 unmapped: 2441216 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83501056 unmapped: 2441216 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83501056 unmapped: 2441216 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83501056 unmapped: 2441216 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83501056 unmapped: 2441216 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83501056 unmapped: 2441216 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83501056 unmapped: 2441216 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83501056 unmapped: 2441216 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83501056 unmapped: 2441216 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83501056 unmapped: 2441216 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83501056 unmapped: 2441216 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83501056 unmapped: 2441216 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83501056 unmapped: 2441216 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83501056 unmapped: 2441216 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83501056 unmapped: 2441216 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83501056 unmapped: 2441216 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83501056 unmapped: 2441216 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83501056 unmapped: 2441216 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83501056 unmapped: 2441216 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83501056 unmapped: 2441216 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83501056 unmapped: 2441216 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83501056 unmapped: 2441216 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83509248 unmapped: 2433024 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83509248 unmapped: 2433024 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83509248 unmapped: 2433024 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83509248 unmapped: 2433024 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83509248 unmapped: 2433024 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83509248 unmapped: 2433024 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83509248 unmapped: 2433024 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83509248 unmapped: 2433024 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83509248 unmapped: 2433024 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83509248 unmapped: 2433024 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83509248 unmapped: 2433024 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83509248 unmapped: 2433024 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83509248 unmapped: 2433024 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct  9 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83509248 unmapped: 2433024 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83509248 unmapped: 2433024 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83509248 unmapped: 2433024 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83509248 unmapped: 2433024 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83517440 unmapped: 2424832 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83517440 unmapped: 2424832 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct  9 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83517440 unmapped: 2424832 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83517440 unmapped: 2424832 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83517440 unmapped: 2424832 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct  9 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83517440 unmapped: 2424832 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83517440 unmapped: 2424832 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83517440 unmapped: 2424832 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83517440 unmapped: 2424832 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83517440 unmapped: 2424832 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83517440 unmapped: 2424832 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83517440 unmapped: 2424832 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83517440 unmapped: 2424832 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83517440 unmapped: 2424832 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83517440 unmapped: 2424832 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83517440 unmapped: 2424832 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83517440 unmapped: 2424832 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83517440 unmapped: 2424832 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83517440 unmapped: 2424832 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83525632 unmapped: 2416640 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83525632 unmapped: 2416640 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct  9 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct  9 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83525632 unmapped: 2416640 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83525632 unmapped: 2416640 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83525632 unmapped: 2416640 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83525632 unmapped: 2416640 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83525632 unmapped: 2416640 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83525632 unmapped: 2416640 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83525632 unmapped: 2416640 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83525632 unmapped: 2416640 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct  9 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83525632 unmapped: 2416640 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct  9 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83525632 unmapped: 2416640 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct  9 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83525632 unmapped: 2416640 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83525632 unmapped: 2416640 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83525632 unmapped: 2416640 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83525632 unmapped: 2416640 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct  9 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83525632 unmapped: 2416640 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83525632 unmapped: 2416640 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83525632 unmapped: 2416640 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83525632 unmapped: 2416640 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83525632 unmapped: 2416640 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83525632 unmapped: 2416640 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct  9 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83525632 unmapped: 2416640 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83525632 unmapped: 2416640 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83533824 unmapped: 2408448 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83533824 unmapped: 2408448 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83533824 unmapped: 2408448 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct  9 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83533824 unmapped: 2408448 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83533824 unmapped: 2408448 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83533824 unmapped: 2408448 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83533824 unmapped: 2408448 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct  9 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83533824 unmapped: 2408448 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83533824 unmapped: 2408448 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83533824 unmapped: 2408448 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83533824 unmapped: 2408448 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83533824 unmapped: 2408448 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83533824 unmapped: 2408448 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct  9 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83533824 unmapped: 2408448 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83533824 unmapped: 2408448 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fc5fb000/0x0/0x4ffc00000, data 0x162be1/0x211000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83533824 unmapped: 2408448 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83533824 unmapped: 2408448 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83533824 unmapped: 2408448 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956517 data_alloc: 218103808 data_used: 282624
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83533824 unmapped: 2408448 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83533824 unmapped: 2408448 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83542016 unmapped: 2400256 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: osd.0 133 handle_osd_map epochs [133,134], i have 133, src has [1,134]
Oct  9 10:10:50 compute-1 ceph-osd[7514]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 496.980529785s of 497.169708252s, submitted: 379
Oct  9 10:10:50 compute-1 ceph-osd[7514]: osd.0 134 heartbeat osd_stat(store_statfs(0x4fc5f7000/0x0/0x4ffc00000, data 0x164ccd/0x214000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 83542016 unmapped: 2400256 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: osd.0 134 heartbeat osd_stat(store_statfs(0x4fc5f7000/0x0/0x4ffc00000, data 0x164ccd/0x214000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct  9 10:10:50 compute-1 ceph-osd[7514]: osd.0 134 handle_osd_map epochs [135,135], i have 134, src has [1,135]
Oct  9 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 84631552 unmapped: 1310720 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 968373 data_alloc: 218103808 data_used: 282624
Oct  9 10:10:50 compute-1 ceph-osd[7514]: osd.0 135 handle_osd_map epochs [135,136], i have 135, src has [1,136]
Oct  9 10:10:50 compute-1 ceph-osd[7514]: osd.0 136 ms_handle_reset con 0x560c9c8a3000 session 0x560c9b4752c0
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 84647936 unmapped: 1294336 heap: 85942272 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: osd.0 136 handle_osd_map epochs [137,137], i have 136, src has [1,137]
Oct  9 10:10:50 compute-1 ceph-osd[7514]: osd.0 137 ms_handle_reset con 0x560c9dab7400 session 0x560c9c598780
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 86056960 unmapped: 16670720 heap: 102727680 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 86106112 unmapped: 16621568 heap: 102727680 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 86106112 unmapped: 16621568 heap: 102727680 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fb179000/0x0/0x4ffc00000, data 0x15db086/0x1690000, compress 0x0/0x0/0x0, omap 0x63f, meta 0x33ef9c1), peers [1,2] op hist [])
Oct  9 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:10:50 compute-1 ceph-osd[7514]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:10:50 compute-1 ceph-osd[7514]: prioritycache tune_memory target: 4294967296 mapped: 86138880 unmapped: 16588800 heap: 102727680 old mem: 2845415833 new mem: 2845415833
Oct  9 10:10:50 compute-1 ceph-osd[7514]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1114638 data_alloc: 218103808 data_used: 282624
